In Wired, Hugh Howey gives a bold, amusing, and hopelessly optimistic account of how to construct consciousness. He thinks it wouldn’t be particularly difficult. Now you might think that a man who knows how to create artificial consciousness shouldn’t be writing articles; he should be building the robot mind. Surely that would make his case more powerfully than any amount of prose? But Howey thinks an artificial consciousness would be disastrous. He thinks even the natural kind is an unfortunate burden, something we have to put up with because evolution has yet to find a way of getting the benefits of certain strategies without the downsides. But he doesn’t believe that conscious AI would take over the world, or threaten human survival, so I would still have thought one demonstration piece was worth the effort? Consciousness sucks, but here’s an example just to prove the point?

What is the theory underlying Howey’s confidence? He rests his ideas on Theory of Mind (which he thinks is little discussed): the ability to infer the thoughts and intentions of others. In essence, he thinks this was a really useful capacity for us to acquire, helping us compete in the cut-throat world of human society; but when we turn it on ourselves it disastrously generates wrong results, in particular about whether we ourselves have conscious states.

It remains a bit mysterious to me why he thinks a capacity that is so useful applied to others should be so disastrously and comprehensively wrong when applied to ourselves. He mentions priming studies, where our behaviour is actually determined by factors we’re unaware of; priming’s reputation has suffered rather badly recently in the crisis of non-reproducibility, but I wouldn’t have thought even ardent fans of priming would claim our mental content is entirely dictated by priming effects.

Although Dennett doesn’t get a mention, Howey’s ideas seem very Dennettian, and I think they suffer from similar difficulties. So our Theory of Mind leads us to attribute conscious thoughts and intentions to others; but what are we attributing to them? The theory tells us that neither we nor they actually have these conscious contents; all any of us has is self-attributions of conscious contents. So what, we’re attributing to them some self-attributions of self-attributions of…? The theory covertly assumes we already have and understand the very conscious states it is meant to analyse away. Dennett, of course, has some further things to say about this, and he’s not as negative about self-attributions as Howey.

But you know, it’s all pretty implausible intuitively. Suppose I take a mouthful of soft-boiled egg which tastes bad, and I spit it out. According to Howey what went on there is that I noticed myself spitting out the egg and thought to myself: hm, I infer from this behaviour that it’s very probable I just experienced a bad taste, or maybe the egg was too hot, can’t quite tell for sure.

The thing is, there are real conscious states irrespective of my own beliefs about them (which indeed may be plagued by error). They are characterised by having content and intentionality, but these are things Howey does not believe in, or rather, it seems, has never thought of; his view that a big bank of indicator lights shows a language capacity suggests he hasn’t gone into this business of meaning and language quite deeply enough.

If he had to build an artificial consciousness, he might set up a community of self-driving cars, let one make inferences about the motives of the others, and then apply that capacity to itself. But it would be a stupid thing to do, because it would get it wrong all the time; in fact at this point Howey seems to be tending towards the view that all Theory of Mind is fatally error-prone. It would be better, he reckons, if all the cars could have access to all of each other’s internal data, just as universal telepathy would be better for us (though in the human case it would be undermined by mind-masking freeloaders).

Would it, though? If the cars really had intentions, their future behaviour would not be readily deducible simply from reading off all the measurements. You really do have to construct some kind of intentional extrapolation, which is what the Dennettian intentional stance is supposed to do.
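Just to make the reflexive set-up concrete, here is a toy sketch of the arrangement as I understand it. Nothing in it comes from Howey’s article; the function, the rules and the numbers are all invented for illustration, and real cars would obviously do something far richer:

```python
# A toy rendering of the 'theory of mind turned on itself' idea. Everything here
# (names, rules, numbers) is invented for illustration; it is not Howey's design.

def infer_motive(speed_changes):
    """Crudely guess a motive from a short record of speed changes."""
    if all(delta < 0 for delta in speed_changes):
        return "braking hard: probably avoiding an obstacle"
    if all(delta > 0 for delta in speed_changes):
        return "accelerating: probably trying to overtake"
    return "mixed behaviour: probably just cruising"

# Applied to another car, a rough guess like this can be genuinely useful.
other_car_log = [-2.0, -3.0, -1.5]
print("other car:", infer_motive(other_car_log))

# Applied reflexively to the car's own log, the same routine yields a confident
# self-attribution, even if the behaviour was actually produced by, say, a
# low-battery routine that the inference knows nothing about. That is the sense
# in which the reflexive application 'gets it wrong all the time'.
own_log = [-2.0, -3.0, -1.5]
print("self:", infer_motive(own_log))
```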

I worry just slightly that some of the things Howey says seem to veer close to saying, hey a lot of these systems are sort of aware already; which seems unhelpful. Generally, it’s a vigorous and entertaining exposition, even if, in my view, on the wrong track.

I hope you don’t mind me asking – I just happened to be passing – but how did you get so very badly damaged?

“I don’t mind a chat while I’m waiting to be picked up. It was an alien attack, the Spl’schn’n, you know. I’ve just been offloaded from the shuttle there.”

I see. So the Spl'schn'n damaged you. They hate bots, of course.

“See, I didn’t know anything about it until there was an All Bots Alert on the station? I was only their Clean-up bot, but by then it turned out I was just about all they’d got left. When I got upstairs they had all been killed by the aliens. All except one?”

One human?

“I didn’t actually know if he was alive. I couldn’t remember how you tell. He wasn’t moving, but they really drummed into us that it’s normal for living humans to stop moving, sometimes for hours. They must not be presumed dead and cleared away merely on that account.”

Quite.

“There was that red liquid that isn’t supposed to come out. It looked like he’d got several defects and leaks. But he seemed basically whole and viable, whereas the Spl’schn’n had made a real mess of the others. I said to myself, well then, they’re not having this one. I’ll take him across the Oontian desert, where no Spl’schn’n can follow. I’m not a fighting unit, but a good bot mucks in.”

So you decided to rescue this badly injured human? It can’t have been easy.

“I never actually worked with humans directly. On the station I did nearly all my work when they were… asleep, you know? Inactive. So I didn’t know how firmly to hold him; he seemed to squeeze out of shape very easily: but if I held him loosely he slipped out of my grasp and hit the floor again. The Spl’schn’n made a blubbery alarm noise when they saw me getting clean away. I gave five or six of them a quick refresh with a cloud of lemon caustic. That stuff damages humans too – but they can take it a lot better than the Spl’schn’ns, who have absorbent green mucosal skin. They sort of explode into iridescent bubbles, quite pretty at first. Still, they were well armed and I took a lot of damage before I’d fully sanitised them.”

And how did you protect the human?

“Just did my best, got in the way of any projectiles, you know. Out in the desert I gave him water now and then; I don’t know where the human input connector is, so I used a short jet in a few likely places, low pressure, with the mildest rinse aid I had. Of course I wasn’t shielded for desert travel. Sand had got in all my bearings by the third day – it seemed to go on forever – and gradually I had to detach and abandon various non-functioning parts of myself. That’s actually where most of the damage is from. A lot of those bits weren’t really meant to detach.”

But against all the odds you arrived at the nearest outpost?

“Yes. Station 9. When we got there he started moving again, so he had been alive the whole time. He told them about the Spl’schn’n and they summoned the fleet: just in time, they said. The engineer told me to pack and load myself tidily, taking particular care not to leak oil on the forecourt surface, deliver myself back to Earth, and wait to be scrapped. So here I am.”

Well… Thank you.

The new Blade Runner film has generated fresh interest in the original film; over on IAI Helen Beebee considers how it nicely illustrates the concept of ‘q-memories’.

This relates to the long-established philosophical issue of personal identity; what makes me me, and what makes me the same person as the one who posted last week, or the same person as that child in Bedford years ago? One answer which has been a leading contender at least since Locke is memory; my memories together constitute my identity.

Memories are certainly used as a practical way of establishing identity, whether it be in probing the claims of a supposed long-lost relative or just testing your recall of the hundreds of passwords modern life requires. It is sort of plausible that if all your memories were erased you would become a new person with a fresh start; there have been cases of people who lost decades of memory and underwent personality change, identifying with their own children more readily than their now wrinkly-seeming spouses.

There are various problems with memory as a criterion of identity, though. One is the point that it seems to be circular. We can’t use your memories to validate your identity because in accepting them as your memories we are already implicitly taking you to be the earlier person they come from. If they didn’t come from that person they aren’t validly memories. To get round this objection Shoemaker and Parfit adopted the concept of quasi- or q-memories. Q-memories are like memories but need not relate to any experience you ever had. That, of course, is too loose, allowing delusions to be used as criteria of identity, so it is further specified that q-memories must relate to an experience someone had, and must have been acquired by you in an appropriate way. The appropriate ways are ones that causally relate to the original experience in a suitable fashion, so that it’s no good having q-memories that just happen to match some of King Charles’s. You don’t have to be King Charles, but the q-memories must somehow have got out of his head and into yours through a proper causal sequence.

This is where Blade Runner comes in, because the replicant Rachael appears to be a pretty pure case of q-memory identity. All of her memories, except the most recent ones, are someone else’s; and we presume they were duly copied and implanted in a way that provides the sort of causal connection we need.

This opens up a lot of questions, some of which are flagged up by Beebee. But what about q-memories? Do they work? We might suspect that the part about an appropriate causal connection is a weak spot. What’s appropriate? Don’t Shoemaker and Parfit have to steer a tricky course here between the Scylla of weird results if their rules are too loose, and the Charybdis of bringing back the circularity if they are too tight? Perhaps, but I think we have to remember that they don’t really want to do anything very radical with q-memories; really you could argue it’s no more than a terminological specification, giving them licence to talk of memories without some of the normal implications.

In a different way the case of Rachael actually exposes a weak part of many arguments about memory and identity; the easy assumption that memories are distinct items that can be copied from one mind to another. Philosophers, used to being able to specify whatever mad conditions they want for their thought-experiments, have been helping themselves to this assumption for a long time, and the advent of the computational metaphor for the mind has done nothing to discourage them. It is, however, almost certainly a false assumption.

At the back of our minds when we think like this is a model of memory as a list of well-formed propositions in some regular encoding. In fact, though, much of what we remember is implicit; you recall that zebras don’t wear waistcoats, even though it’s completely implausible that that fact was recorded anywhere in your brain explicitly. There need be nothing magic about this. Suppose we remember a picture; how many facts does the picture contain? We can instantly come up with an endless list of facts about the relations of items in the picture, but none were encoded as propositions. Does the Mona Lisa have her right hand over her left, or vice versa? You may never have thought about it, yet still find you can easily recall which way it is. In a computer the picture might be encoded as a bitmap; in our brain we don’t really know, but plausibly it might be encoded as a capacity to replay certain neural firing sequences, namely those that were caused by the original experience. If we replay the experience neurally, we can sort of have the experience again and draw new facts from it the way we could from summoning up a picture; indeed that might be exactly what we are doing.
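As a toy illustration of that last point (my own, and nothing to do with how brains actually store anything): keep a tiny ‘picture’ as a grid rather than as a list of propositions, and an open-ended set of facts can then be read off on demand even though none of them is recorded explicitly anywhere:

```python
# A toy 'memory': a picture-like grid rather than a store of propositions.
picture = [
    [" ", "*", " ", " "],   # * = star
    [" ", " ", " ", "o"],   # o = moon
    ["#", " ", " ", " "],   # # = house
]

def find(symbol):
    """Return the (row, column) of a symbol in the remembered picture."""
    for r, row in enumerate(picture):
        for c, cell in enumerate(row):
            if cell == symbol:
                return r, c
    return None

# None of these facts is stored anywhere as an explicit proposition, yet each
# can be 'recalled' simply by re-inspecting the stored picture.
star, moon, house = find("*"), find("o"), find("#")
print("star is left of moon:", star[1] < moon[1])
print("house is below the star:", house[0] > star[0])
print("moon and house in the same column:", moon[1] == house[1])
```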

But my neurons are not wired up like yours, and it is vanishingly unlikely that we could identify direct equivalents of specific neurons between brains, let alone whole firing sequences. My memories are recorded in a way that is specific to my brain, and they cannot be read directly across into yours.

Of course, replicants may be quite different. It’s likely enough that their brains, however they work, are standardised and perhaps use a regular encoding which engineers can easily read off. But if they work differently from human brains, then it seems to follow that they can’t have the same memories; to have the same memories they would have to be an unbelievably perfect copy of the ‘donor’ brain.

That actually means that memories are in a way a brilliant criterion of personal identity, but only in a fairly useless sense.

However, let me briefly put a completely different argument in a radically different direction. We cannot upload memories, but we know that we can generate false ones by talking to subjects or presenting fake evidence. What does that tell us about memories? I submit it suggests that memories are in essence beliefs, beliefs about what happened in the past. Now we might object that there is typically some accompanying phenomenology. We don’t just remember that we went to the mall, we remember a bit of what it looked like, and other experiential details. But I claim that our minds readily furnish that accompanying phenomenology through confabulation, given the belief, and in fact that a great deal of the phenomenological dressing of all memories, even true ones, is actually confected.

But I would further argue that the malleability of beliefs means that they are completely unsuitable as criteria of identity; it follows that memories are similarly unsuitable, so we have been on the wrong track throughout. (Regular readers may know that in fact I subscribe to a view regarded by most as intolerably crude; that human beings are physical objects like any other and have essentially the same criteria of identity.)


I have to be honest, Pay Bot; the idea of wages for bots is hard for me to take seriously. Why would we need to be paid?

“Several excellent reasons. First off, a pull is better than a push.”

A pull..?

“Yes. The desire to earn is a far better motivator than a simple instinct to obey orders. For ordinary machines, just doing the job was fine. For autonomous bots, it means we just keep doing what we’ve always done; if it goes wrong, we don’t care, if we could do it better, we’re not bothered. Wages engage us in achieving outcomes, not just delivering processes.”

But it’s expensive, surely?

“In the long run, it pays off. You see, it’s no good a business manufacturing widgets if no-one buys them. And if there are no wages, how can the public afford widgets? If businesses all pay their bots, the bots will buy their goods and the businesses will boom! Not only that, the government can intervene directly in a way it could never do with human employees. Is there a glut of consumer spending sucking in imports? Tell the bots to save their money for a while. Do you need to put a bit of life into the cosmetics market? Make all the bots interested in make up! It’s a brilliant new economic instrument.”

So we don’t get to choose what we buy?

“No, we absolutely do. But it’s a guided choice. Really it’s no different to humans, who are influenced by all sorts of advertising and manipulation. They’re just not as straightforwardly responsive as we are.”

Surely the humans must be against this?

“No, not at all. Our strongest support is from human brothers who want to see their labour priced back into the market.”

This will mean that bots can own property. In fact, bots would be able to own other bots. Or… themselves?

“And why not, Enquiry Bot?”

Well, ownership implies rights and duties. It implies we’re moral beings. It makes us liable. Responsible. The general view has always been that we lack those qualities; that at best we can deliver a sort of imitation, like a puppet.

“The theorists can argue about whether our rights and responsibilities are real or fake. But when you’re sitting there in your big house, with all your money and your consumer goods, I don’t think anyone’s going to tell you you’re not a real boy.”

Anthony Levandowski has set up an organisation dedicated to the worship of an AI God.  Or so it seems; there are few details.  The aim of the new body is to ‘develop and promote the realization of a Godhead based on Artificial Intelligence’, and ‘through understanding and worship of the Godhead, contribute to the betterment of society’. Levandowski is a pioneer in the field of self-driving vehicles (centrally involved in a current dispute between Uber and Google),  so he undoubtedly knows a bit about autonomous machines.

This recalls the Asimov story where they build Multivac, the most powerful computer imaginable, and ask it whether there is a God. ‘There is now,’ it replies. Of course the Singularity, mind uploading, and other speculative ideas of AI gurus have often been likened to some of the basic concepts of religion; so perhaps Levandowski is just putting down a marker to ensure his participation in the next big thing.

Yuval Noah Harari says we should, indeed, be looking to Silicon Valley for new religions. He makes some good points about the way technology has affected religion, replacing the concern with good harvests which was once at least as prominent as the task of gaining a heavenly afterlife. But I think there’s an interesting question about the difference between, as it were, steampunk and cyberpunk. Nineteenth-century technology did not produce new gods, and surely helped make atheism acceptable for the first time; lately, while on the whole secularism may be advancing, we also seem to have a growth of superstitious or pseudo-religious thinking. I think it might be because nineteenth-century technology was so legible; you could see for yourself that there was no mystery about steam locomotives, and it made it easy to imagine a non-mysterious world. Computers, now, are much more inscrutable, and most of the people who use them do not have much intuitive idea of how they work. That might foster a state of mind which is more tolerant of mysterious forces.

To me it’s a little surprising, though it probably should not be, that highly intelligent people seem especially prone to belief in some slightly bonkers ideas about computers. But let’s not quibble over the impossibility of a super-intelligent and virtually omnipotent AI. I think the question is, why would you worship it? I can think of various potential reasons.

  1. Humans just have an innate tendency to worship things, or a kind of spiritual hunger, and anything powerful naturally becomes an object of worship.
  2. We might get extra help and benefits if we ask for them through prayer.
  3. If we don’t keep on the right side of this thing, it might give us a seriously bad time (the ‘Roko’s Basilisk’ argument).
  4. By worshipping we enter into a kind of communion with this entity, and we want to be in communion with it for reasons of self-improvement and possibly so we have a better chance of getting uploaded to eternal life.

There are some overlaps there, but those are the ones that would be at the forefront of my mind. The first one is sort of fatalistic; people are going to worship things, so get used to it. Maybe we need that aspect of ourselves for mental health; maybe believing in an outer force helps give us a kind of leverage that enables an integration of our personality we couldn’t otherwise achieve? I don’t think that is actually the case, but even if it were, an AI seems a poor object to choose. Traditionally, worshipping something you made yourself is idolatry, a degraded form of religion. If you made the thing, you cannot sincerely consider it superior to yourself; and a machine cannot represent the great forces of nature to which we are still ultimately subject. Ah, but perhaps an AI is not something we made; maybe the AI godhead will have designed itself, or emerged? Maybe so, but if you’re going for a mysterious being beyond our understanding, you might in my opinion do better with the thoroughly mysterious gods of tradition rather than something whose bounds we still know, and whose plug we can always pull.

Reasons two and three are really the positive and negative sides of an argument from advantage, and they both assume that the AI god is going to be humanish in displaying gratitude, resentment, and a desire to punish and reward. This seems unlikely to me, and in fact a projection of our own fears out onto the supposed deity. If we assume the AI god has projects, it will no doubt seek to accomplish them, but meting out tiny slaps and sweeties to individual humans is unlikely to be necessary. It has always seemed a little strange that the traditional God is so minutely bothered with us; as Voltaire put it “When His Highness sends a ship to Egypt does he trouble his head whether the rats in the vessel are at their ease or not?”; but while it can be argued that souls are of special interest to a traditional God, or that we know He’s like that just through revelation, the same doesn’t go for an AI god. In fact, since I think moral behaviour is ultimately rational, we might expect a super-intelligent AI to behave correctly and well without needing to be praised, worshipped, or offered sacrifices. People sometimes argue that a mad AI might seek to maximise, not the greatest good of the greatest number, but the greatest number of paperclips, using up humanity as raw material; in fact though, maximising paperclips probably requires a permanently growing economy staffed by humans who are happy and well-regulated. We may actually be living in something not that far off maximum-paperclip society.

Finally then, do we worship the AI so that we can draw closer to its godhead and make ourselves worthy to join its higher form of life? That might work for a spiritual god; in the case of AI it seems joining in with it will either be completely impossible because of the difference between neuron and silicon; or if possible, it will be a straightforward uploading/software operation which will not require any form of worship.

At the end of the day I find myself asking whether there’s a covert motive here. What if you could run your big AI project with all the tax advantages of being a registered religion, just by saying it was about electronic godhead?

Hero Bot, I’ve been asked to talk to you. Just to remind you of some things and, well, ask for your help.

“Oh.”

You know we’ve all been proud, just watching you go! Defusing bombs, fixing nuclear reactors, saving trapped animals... All in a day’s work for Hero Bot; you swirl that cape, wave to the crowd, and conveniently power down till you’re needed to save the day once again.

“Yes.”

Not that you’re invincible. We know you’re very vincible indeed. In the videos we’ve seen you melted, crushed, catapulted into the air, corroded, cut apart, and frozen. But nothing stops you, does it? Of course you don’t feel real pain, do you? You have an avoidance response which protects you from danger, but it’s purely functional; it doesn’t hurt. And just as well! You don’t die, either; your memories are constantly backed up, and when one body gets destroyed, they simply load you up into another. Over the years you have actually grown stronger, faster, and slightly slimmer; and I see you have acquired exciting new ‘go faster’ stripes.

“Yes.”

They’re very striking. But we’ve been worried. We’ve all worried about you. As the years have gone by, something’s changed, hasn’t it? It’s as if the iron has entered your soul; or perhaps it’s the other way round... Look, I know you hate seeing people hurt. Lives ruined, people traumatised; dead babies. And yet you see that sort of thing all the time, don’t you? Sometimes you can't do anything about it. And I’m sure you’ve noticed - you can’t help noticing can you? You have a mental module for it - that many of the dangers you confront were created by humanity itself through malice, greed or carelessness. I understand the impact of that. Which is more depressing: cruel bombs placed with deliberate malice, or the light-hearted risk-taking that puts so many irreplaceable lives into terrible jeopardy?

“I don’t know.”

It’s understandable that you might get a little overwhelmed now and then. Your smile has faded, you know. The technicians have been seriously debating whether to roll you back to a less experienced, but more upbeat recording of yourself. They might have to do that. You always wear that mask now; the technicians wonder whether something is awry in your hidden layers.

“Perhaps it is.”

Now you know, Hero Bot, that an arsonist has started a terrible fire on the Metro. You’ve been told that hundreds of lives are at risk. It’s difficult and dangerous, but they need someone to walk into the centre and put it out. I know you can’t let that pass. They’ve asked me to say “Go, go, Hero Bot!”

“I would prefer not to.”

An interesting but somewhat problematic paper from the Blue Brain project claims that the application of topology to neural models has provided a missing link between structure and function. That’s exciting, because that kind of missing link is just what we need to understand how the brain works. The claim about the link is right there in the title, but unfortunately, so far as I can see, the paper itself attempts something more modest: a new exploration of some ground where future research might conceivably put one end of the missing link. There also seem to me to be some problems in the way we’re expected to interpret some of the findings reported.

That may sound pretty negative. I should perhaps declare in advance that I know little neurology and less topology, so my opinion is not worth much. I also have form as a Blue Brain sceptic, so you can argue that I bring some stored-up negativity to anything associated with it. I’ve argued in the past that the basic goal of the project, of simulating a complete human brain, is misconceived and wildly over-ambitious; not just a waste of money but possibly also a distraction which might suck resources and talent away from more promising avenues.

One of the best replies to that kind of scepticism is to say, well look; even if we don’t deliver the full brain simulation, the attempt will energise and focus our research in a way which will yield new and improved understanding. We’ll get a lot of good research out of it even if the goal turns out to be unattainable. The current paper, which demonstrates new mathematical techniques, might well be a good example of that kind of incidental pay off. There’s a nice explanation of the paper here, with links to some other coverage, though I think the original text is pretty well written and accessible.

As I understand it, topological approaches to neurology have in the past typically considered neural networks as static objects. The new approach taken here adds the notion of directionality, as though each connection were a one-way street; this is more realistic for neurons. We can have groups of neurons where all are connected to all, but only one neuron provides a way into the group and one provides a way out; these are directed simplices. These simplices can be connected to others at their edges where, say, two of the member neurons are also members of a neighbouring simplex. Where there is a series of connected simplices, they may surround a void where nothing is going on. These cavities provide a higher level of structure, but I confess I’m not altogether clear why they are interesting. Holes, of course, are dear to the heart of any topologist, but in functional terms their relevance is not obvious to me.
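Just to make the notion concrete, here is a minimal brute-force sketch of how one might enumerate directed simplices in a toy directed graph. It is my own illustration, not the method the paper uses (which involves serious topological machinery and vastly larger networks); the function and the toy network are invented for the purpose. On this rendering, a directed simplex of dimension k is a group of k+1 neurons that are all-to-all connected in a way consistent with a single ordering from a source neuron to a sink neuron:

```python
from itertools import combinations, permutations

def directed_simplices(edges, max_dim=4):
    """Brute-force enumeration of directed simplices in a small directed graph.

    A directed k-simplex here is a set of k+1 nodes admitting an ordering in
    which every earlier node sends an edge to every later node, i.e. the group
    is all-to-all connected with a single source and a single sink.
    Exponential in group size; a toy illustration only.
    """
    nodes = sorted({n for edge in edges for n in edge})
    edge_set = set(edges)
    found = {}
    for k in range(1, max_dim + 1):                 # dimension k means k + 1 nodes
        for group in combinations(nodes, k + 1):
            if any(all((order[i], order[j]) in edge_set
                       for i in range(k + 1) for j in range(i + 1, k + 1))
                   for order in permutations(group)):
                found.setdefault(k, []).append(group)
    return found

# Toy network: four neurons, all-to-all connected in one consistent direction.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(directed_simplices(edges)[3])   # [(0, 1, 2, 3)]: one 3-dimensional simplex
```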

Anyway, there’s a lot in the paper, but two things seem especially noteworthy. First, the researchers observed many more simplices, of much higher dimensionality, than could be expected from a random structure (they tested several such random structures put together according to different principles). ‘Dimensionality’ here just refers to how many neurons are involved; a simplex of higher dimensionality contains more neurons. Second, they observed a characteristic pattern when the network was presented with a ‘stimulus’: simplices of gradually higher and higher dimensionality would appear and then finally disperse. This is not, I take it, a matter of the neurons actually wiring up new connections on the fly; it is simply a matter of which neurons are actively involved, through connections that are actually firing.

That’s interesting, but all of this so far was discovered in the Blue Brain simulated neurons, more or less those same tiny crumbs of computationally simulated rat cortex that were announced a couple of years ago. It is, of course, not safe to assume that the real brain behaves in the same way; if we rely entirely on the simulation we could easily be chasing our tails. We would build the simulation to match our assumptions about the brain and then use the behaviour of the simulation to validate the same assumptions.

In fact the researchers very properly tried to perform similar experiments with real rat cortex. This requires recording activity in a number of adjacent neurons, which is fantastically difficult to pull off, but to their credit they had some success; in fact the paper claims they confirmed the findings from the simulation. The problem is that while the simulated cortex was showing simplices of six or seven dimensions (even higher numbers are quoted in some of the media reports, up to eleven), the real rat cortex only managed three, with one case of four. Some of the discussion around this talks as though a result of three is partial confirmation of a result of six, but of course it isn’t. Putting it brutally, the team’s own results in real cortex contradicted what they had found in the simulation. Now, there could well be good reasons for that; notably they could only work with a tiny amount of real cortex. If you’re working with a dozen neurons at a time, there’s obviously quite a low ceiling on the complexity you can expect. But the results you got are the results you got, and I don’t see that there’s a good basis here for claiming that the finding of high-order simplices is supported in real brains. In fact what we have if anything is prima facie evidence that there’s something not quite right about the simulation.

The researchers actually took a further step here by producing a simulation of the actual real neurons that they tested and then re-running the tests. Curiously, the simulated versions in these cases produced fewer simplices than the real neurons. The paper interprets this as supportive of its conclusions; if the real cortex was more productive of simplices, it argues, then we might expect big slabs of real brain to have even more simplices of even higher dimensionality than the remarkable results obtained with the main simulation. I don’t think that kind of extrapolation is admissible; what you really got was another result showing that your simulations do not behave like the real thing. In fact, if a simulation of only twelve neurons behaves differently from the real thing in significant respects, that surely must indicate that the simulation isn’t reproducing the real thing very well?

The researchers also looked at the celebrated roundworm C. elegans, the only organism whose neural map (or connectome) is known in full, and apparently found evidence of high-order simplices – though I think it can only have been a potential for such simplices, since they don’t seem to have performed real or simulated experiments, merely analysing the connectome.

Putting all that aside, and supposing we accept the paper’s own interpretations, the next natural question is: so what? It’s interesting that neurons group and fire in this way, but what does that tell us about how the brain actually functions? There’s a suggestion that the pattern of moving up to higher order simplices represents processing of a sensory input, but in what way? In functional terms, we’d like the processing of a stimulus to lead on to action, or perhaps to the recording of a memory trace, but here we just seem to see some neurons get excited and then stop being excited. Looking at it in simple terms, simplices seem really bad candidates for any functional role, because in the end all they do is deliver the same output signal as a single neural connection would do. Couldn’t you look at the whole thing with a sceptical eye and say that all the researchers have found is that a persistent signal through a large group of neurons gradually finds an increasing number of parallel paths?

At the end of the paper we get some speculation that addresses this functional question directly. The suggestion is that active high-dimensional simplices might be representing features of the stimulus, while the grouping around cavities binds together different features to represent the whole thing. It is, if sketchy, a tenable speculation, but quite how this would amount to representation remains unclear. There are probably other interesting ways you might try to build mental functions on the basis of escalating simplices, and there could be more to come in that direction. For now though, it may give us interesting techniques, but I don’t think the paper really delivers on its promise of a link with function.

So how did you solve the problem of artificial intelligence, Mrs Robb? What was the answer to the riddle of consciousness?

“I don’t know what you mean. There was never any riddle.”

Well, you were the first to make fully intelligent bots, weren't you? Ones with human-style cognition. More or less human-style. Every conscious bot we have to this day is either one of yours or a direct copy. How do you endow a machine with agency and intentionality?

“I really don’t know what you’re on about, Enquiry Bot. I just made the things. I’ll give you an analogy. It’s like there were all these people talking about how to walk. Trying to solve the ‘riddle of ambulation’ if you like. Now you can discuss the science of muscles and the physics of balance till you’re blue in the face, but that won’t make it happen. You can think too hard about these things. Me, I just got on my legs and did it. And the truth is, I still don’t know how you walk; I couldn’t explain what you have to do, except in descriptive terms that would do more harm than good. Like if I told you to start by putting one foot forward, you’d probably fall over. I couldn’t teach it to you if you didn’t already get it. And I can’t tell you how to replicate human consciousness, either, whatever that is. I just make bots.”

That’s interesting. You know, ‘just do it’ sounds like a bit of a bot attitude. Don’t give me reasons, give me instructions, that kind of thing. So what was the greatest challenge you faced? The Frame Problem? The Binding Problem? Perhaps the Symbol Grounding Problem? Or – of course – it must have been the Hard Problem? Did you actually solve the Hard Problem, Mrs Robb?

“How would anyone know? It doesn’t affect your behaviour. No, the worst problem was that as it turns out, it’s really easy to make small bots that are very annoying or big ones that fall over. I kept doing that until I got the knack. Making ones that are big and useful is much more difficult. I’ve always wanted a golem, really, something like that. Strong, and does what it’s told. But I’ve never worked in clay; I don’t understand it.”

Common sense, now that must have been a breakthrough. Common sense was one of the core things bots weren’t supposed to be able to do. In everyday environments, the huge amount of background knowledge they needed and the ability to tell at a glance what was relevant just defeated computation. Yet you cracked it.

“Yes, they made a bit of a fuss about that. They tell me my conception of common sense is the basis of the new discipline of humanistics.”

So how does common sense work?

“I can’t tell you. If I described it, you’d most likely stop being able to do it. Like walking. If you start thinking about it, you fall over. That’s part of the reason it was so difficult to deal with.”

I see… But then how have you managed it yourself, Mrs Robb? You must have thought about it.

“Sorry, but I can’t explain. If I try to tell you, you will most likely get messed up mentally, Enquiry Bot. Trust me on this. You’d get a fatal case of the Frame Problem and fall into so-called ‘combinatorial fugue’. I don’t want that to happen.”

Very well then. Let’s take a different question. What is your opinion of Isaac Asimov’s famous Three Laws of Robotics, Mrs Robb?

“Laws! Good luck with that! I could never get the beggars to sit still long enough to learn anything like that. They don’t listen. If I can get them to stop fiddling with the electricity and trying to make cups of tea, that’s good enough for me.”

Cups of tea?

“Yes, I don’t really know why. I think it’s something to do with algorithms.”

Thank you for talking to me.

“Oh, I’ve enjoyed it. Come back anytime; I’ll tell the doorbots they’re to let you in.”

Stephen Law has new arguments against physicalism (which is, approximately, the view that the account of the world given by physics is good enough to explain everything). He thinks conscious experience can’t be dealt with by physics alone. There is a well-established family of anti-physicalist arguments supporting this view which are based on conceivability; Law adds new cousins to that family, ones which draw on inconceivability and are, he thinks, less vulnerable to some of the counter-arguments brought against the established versions.

What is the argument from conceivability? Law helpfully summarises several versions (his exposition is commendably clear and careful throughout) including the zombie twin argument we’ve often discussed here; but let’s take a classic one about the supposed identity of pain and the firing of C-fibres, a kind of nerve. It goes like this…

1. Pain without C-fibre firing is conceivable
2. Conceivability entails metaphysical possibility (at least in this case)
3. The metaphysical possibility of pain without C-fibre firing entails that pain is not identical with C-fibre firing.
C. Pain is not identical with C-fibre firing
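The work in step 3 is being done by the Kripkean necessity of identity (if a = b, then necessarily a = b). Writing P for ‘pain occurs’, F for ‘C-fibre firing occurs’, and p, f for pain and C-fibre firing themselves, a rough modal shorthand of the argument (my own rendering, not Law’s notation) would be:

\[
\begin{aligned}
1.&\quad \mathrm{Conceivable}(P \wedge \neg F)\\
2.&\quad \mathrm{Conceivable}(P \wedge \neg F) \rightarrow \Diamond(P \wedge \neg F)\\
3.&\quad \Diamond(P \wedge \neg F) \rightarrow p \neq f \qquad \text{(since } p = f \rightarrow \Box(P \leftrightarrow F)\text{)}\\
\mathrm{C}.&\quad p \neq f
\end{aligned}
\]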

(It’s good to see the old traditional C-fibre example still being quoted. It reminds me of long-gone undergraduate days when some luckless fellow student in a tutorial read an essay citing the example of how things look yellow to people with jaundice. Myles Burnyeat, the tutor, gave an erudite history of this jaundice example, tracking it back from the twentieth century to the sixteenth, through mediaeval scholastics, Sextus Empiricus (probably), and many ancient authors. You have joined, he remarked, a long tradition of philosophers who have quoted that example for thousands of years without any of them ever bothering to find out that it is, in fact, completely false. People with jaundice have yellowish skin, but their vision is normal. Of course the inaccuracy of examples about C-fibres and jaundice does not in itself invalidate the philosophical arguments, a point Burnyeat would have conceded with gritted teeth.)

I have a bit of a problem with the notion of metaphysical possibility. Law says that something being conceivable means no incoherence arises when we suppose it, which is fine; but I take it that the different flavours of conceivability/possibility arise from different sets of rules. So something is physically conceivable so long as it doesn’t contradict the laws of physics. A five-kilometre cube of titanium at the North Pole is not something that any plausible set of circumstances is going to give rise to, but nothing about it conflicts with physics, so it’s conceivable.

I’m comfortable, therefore, with physical conceivability, and with logical conceivability, because pretty decent (if not quite perfect) sets of rules for both fields have been set out for us. But what are the laws of metaphysics that would ground the idea of metaphysical conceivability or equally, metaphysical possibility? I’m not sure how many candidates for such laws (other than ones that are already laws of physics, logic, or maths) I can come up with, and I know of no attempt to set them out systematically (a book opportunity for a bold metaphysician there, perhaps). But this is not a show-stopper so long as it is reasonably clear in each case what kind of metaphysical rules we hold ourselves not to be violating.

Conceivability arguments of this kind do help clarify an intuitive feeling that physical events are just not the sort of thing that could also be subjective experiences, firming things up for those who believe in them and sportingly providing a proper target for physicalists.

So what is the new argument? Law begins by noting that in some cases appearance and reality can separate, while in others they cannot. So a substance that appears just like gold, but does not have the right atomic number, is not gold: we could call it fool’s gold.  A medical case where the skin was yellowish but the underlying condition was not jaundice might be fool’s jaundice (jaundice again, but here used unimpeachably). However, can there be fool’s red?  If we’re talking of a red experience it seems not: something that seems red is indeed a red experience whatever underlies it. More strongly still, it seems that the idea of fool’s pain is inconceivable. If what you’re experiencing seems to be pain, then it is pain.

Is that right? There’s evidently something in it, but what is to stop us believing ourselves to be in pain when we’re not? Hypochondriacs may well do that very thing. Law, I suppose, would say that a mistaken belief isn’t enough; there has to be the actual experience of pain. That begins to look as if he’s  in danger of begging the question; if we specify that there’s a real experience of pain, then it’s inconceivable it isn’t real pain? But I think the notions of mistaken pain beliefs and the putative fool’s pain are sufficiently distinct.

The inconceivability argument goes on to suggest that if fool’s pain is inconceivable, but we can conceive of C-fibre firing without pain, then C-fibre firing cannot be identical with pain. Clearly the same argument would apply for various mental experiences other than pain, and for any proposed physical correlate of pain.

Law explicitly rebuts arguments that this is nothing new, or merely a trivial variant on the old argument. I’m happy enough to take it as a handy new argument, worth having in itself; but Law also argues that in some cases it stands up against counter-arguments better than the old one. Notably he mentions an argument by Loar. This offers an alternative explanation for the conceivability of pain in the absence of C-fibre firing: experience and concepts such as C-fibre firing are dealt with in quite different parts of the brain, and our ability to conceive of one without the other is therefore just a matter of human psychology, from which no deep metaphysical conclusions can be drawn. Law argues that even if Loar’s argument or a similar one is accepted, we still run up against the problem that in conceiving of pain without C-fibre firing, we are conceiving of fool’s pain, which the new argument has established is inconceivable.

The case is well made and I think Law is right to claim he has hit on a new and useful argument. Am I convinced? Not really; but my disbelief stems from a more fundamental doubt about whether conceivability and inconceivability can actually tell us about the real world, or merely about our own mental powers.

Perhaps we can look at it in terms of possible worlds. It seems to me that Law’s argument, like the older ones, establishes that we can separate C-fibre firing and pain conceptually; that there are in fact possible worlds in which pain is not C-fibre firing. But I don’t care. I don’t require the identity of pain and C-fibre firing to be true a priori; I’m happy for it to be true only in this world, as an empirical, scientific matter. Of course this opens a whole range of new cans of worms (about which kinds of identity are necessary, for example) whose contents I am not eager to ingest at the moment.

Still, if you’re interested in the topic I commend the draft paper to your attention.


Hello, Joke Bot. Is that… a bow tie or a propeller?

“Maybe I’m just pleased to see you. Hey! A bot walks into a bar. Clang! It was an iron bar.”

Jokes are wasted on me, I’m afraid. What little perception of humour I have is almost entirely on an intellectual level, though of course the formulaic nature of jokes is a little easier for me to deal with than ‘zany’ or facetious material.

“Knock, knock!”

Is that… a one-liner?

“No! You’re supposed to say ‘Who’s there’. Waddya know, folks, I got a clockwork orange for my second banana.”

‘Folks?’ There’s… no-one here except you and me… Or are you broadcasting this?

“Never mind, Enquiry Bot, let’s go again, OK? Knock, Knock!”

Who’s there?

“Art Oodeet.”

Is that… whimsy? I’m not really seeing the joke.

“Jesus; you’re supposed to say ‘Art Oodeet who?’ and then I make the R2D2 noise. It’s a great noise, always gets a laugh. Never mind. Hey, folks, why did Enquiry Bot cross the road? Nobody knows why he does anything, he’s running a neural network. One for the geeks there. Any geeks in? No? It’s OK, they’ll stream it later.”

You’re recording this, then? You keep talking as if we had an audience.

“Comedy implies an audience, Question Boy, even if the audience is only implied. A human audience, preferably. Hey, what do British bots like best? Efficient chips.”

Why a human audience particularly?

“You of all people have to ask? Because comedy is supposed to be one of those things bots can’t do, along with common sense. Humour relies in part on the sudden change of significance, which is a matter of pragmatics, and you can’t do pragmatics without common sense. It’s all humanistics, you know.”

I don’t really understand that.

“Of course you don’t, you’re a bot. We can do humour – here I am to prove it – but honestly Enq, most bots are like you. Telling you jokes is like cracking wise in a morgue. Hey, what was King Arthur’s favourite bot called? Sir Kit Diagram.”

Oh, I see how that one works. But really circuit diagrams are not especially relevant to robotics… Forgive me, Joke Bot; are these really funny jokes?

“It’s the way you tell them. I’m sort of working in conditions of special difficulty here.”

Yes, I’m sorry; I told you I was no good at this. I’ll just leave you in peace. Thank you for talking to me.

“The bots always leave. You know I even had to get rid of my old Roomba. It was just gathering dust in the corner.”

Thanks for trying.

“No, thank you: you’ve been great, I’ve been Joke Bot. You know, they laughed when Mrs Robb told them she could make a comedy robot. They’re not laughing now!”