So how did you solve the problem of artificial intelligence, Mrs Robb? What was the answer to the riddle of consciousness?

“I don’t know what you mean. There was never any riddle.”

Well, you were the first to make fully intelligent bots, weren't you? Ones with human-style cognition. More or less human-style. Every conscious bot we have to this day is either one of yours or a direct copy. How do you endow a machine with agency and intentionality?

“I really don’t know what you’re on about, Enquiry Bot. I just made the things. I’ll give you an analogy. It’s like there were all these people talking about how to walk. Trying to solve the ‘riddle of ambulation’ if you like. Now you can discuss the science of muscles and the physics of balance till you’re blue in the face, but that won’t make it happen. You can think too hard about these things. Me, I just got on my legs and did it. And the truth is, I still don’t know how you walk; I couldn’t explain what you have to do, except in descriptive terms that would do more harm than good. Like if I told you to start by putting one foot forward, you’d probably fall over. I couldn’t teach it to you if you didn’t already get it. And I can’t tell you how to replicate human consciousness, either, whatever that is. I just make bots.”

That’s interesting. You know, ‘just do it’ sounds like a bit of a bot attitude. Don’t give me reasons, give me instructions, that kind of thing. So what was the greatest challenge you faced? The Frame Problem? The Binding Problem? Perhaps the Symbol Grounding Problem? Or – of course – it must have been the Hard Problem? Did you actually solve the Hard Problem, Mrs Robb?

“How would anyone know? It doesn’t affect your behaviour. No, the worst problem was that as it turns out, it’s really easy to make small bots that are very annoying or big ones that fall over. I kept doing that until I got the knack. Making ones that are big and useful is much more difficult. I’ve always wanted a golem, really, something like that. Strong, and does what it’s told. But I’ve never worked in clay; I don’t understand it.”

Common sense, now that must have been a breakthrough. Common sense was one of the core things bots weren’t supposed to be able to do. In everyday environments, the huge amount of background knowledge they needed and the ability to tell at a glance what was relevant just defeated computation. Yet you cracked it.

“Yes, they made a bit of a fuss about that. They tell me my conception of common sense is the basis of the new discipline of humanistics.”

So how does common sense work?

“I can’t tell you. If I described it, you’d most likely stop being able to do it. Like walking. If you start thinking about it, you fall over. That’s part of the reason it was so difficult to deal with.”

I see… But then how have you managed it yourself, Mrs Robb? You must have thought about it.

“Sorry, but I can’t explain. If I try to tell you, you will most likely get messed up mentally, Enquiry Bot. Trust me on this. You’d get a fatal case of the Frame Problem and fall into so-called ‘combinatorial fugue’. I don’t want that to happen.”

Very well then. Let’s take a different question. What is your opinion of Isaac Asimov’s famous Three Laws of Robotics, Mrs Robb?

“Laws! Good luck with that! I could never get the beggars to sit still long enough to learn anything like that. They don’t listen. If I can get them to stop fiddling with the electricity and trying to make cups of tea, that’s good enough for me.”

Cups of tea?

“Yes, I don’t really know why. I think it’s something to do with algorithms.”

Thank you for talking to me.

“Oh, I’ve enjoyed it. Come back anytime; I’ll tell the doorbots they’re to let you in.”

Stephen Law has new arguments against physicalism (which is, approximately, the view that the account of the world given by physics is good enough to explain everything). He thinks conscious experience can’t be dealt with by physics alone. There is a well-established family of anti-physicalist arguments supporting this view which are based on conceivability; Law adds new cousins to that family, ones which draw on inconceivability and are, he thinks, less vulnerable to some of the counter-arguments brought against the established versions.

What is the argument from conceivability? Law helpfully summarises several versions (his exposition is commendably clear and careful throughout) including the zombie twin argument we’ve often discussed here; but let’s take a classic one about the supposed identity of pain and the firing of C-fibres, a kind of nerve. It goes like this…

1. Pain without C-fibre firing is conceivable
2. Conceivability entails metaphysical possibility (at least in this case)
3. The metaphysical possibility of pain without C-fibre firing entails that pain is not identical with C-fibre firing.
C. Pain is not identical with C-fibre firing
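
Put schematically (this is just my shorthand, not Law’s own notation), with p read as ‘pain is occurring’, c as ‘C-fibres are firing’, Con for conceivability and the diamond for metaphysical possibility, the argument runs:

```latex
% Rough modal-logic rendering of the conceivability argument above (my shorthand, not Law's).
% Con(phi): "phi is conceivable"; \Diamond: metaphysical possibility;
% p: "pain is occurring"; c: "C-fibres are firing".
\begin{align*}
&\text{1. } \mathrm{Con}(p \wedge \neg c) \\
&\text{2. } \mathrm{Con}(p \wedge \neg c) \rightarrow \Diamond(p \wedge \neg c) \\
&\text{3. } \Diamond(p \wedge \neg c) \rightarrow (\text{pain} \neq \text{C-fibre firing}) \\
&\text{C. } \ \text{pain} \neq \text{C-fibre firing}
\end{align*}
```

Nothing hangs on the notation; it just makes plain that the whole weight rests on premise 2, the bridge from conceivability to possibility.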

(It’s good to see the old traditional C-fibre example still being quoted. It reminds me of long-gone undergraduate days when some luckless fellow student in a tutorial read an essay citing the example of how things look yellow to people with jaundice. Myles Burnyeat, the tutor, gave an erudite history of this jaundice example, tracking it back from the twentieth century to the sixteenth, through mediaeval scholastics, Sextus Empiricus (probably), and many ancient authors. You have joined, he remarked, a long tradition of philosophers who have quoted that example for thousands of years without any of them ever bothering to find out that it is, in fact, completely false. People with jaundice have yellowish skin, but their vision is normal. Of course the inaccuracy of examples about C-fibres and jaundice does not in itself invalidate the philosophical arguments, a point Burnyeat would have conceded with gritted teeth.)

I have a bit of a problem with the notion of metaphysical possibility. Law says that something being conceivable means no incoherence arises when we suppose it, which is fine; but I take it that the different flavours of conceivability/possibility arise from different sets of rules. So something is physically conceivable so long as it doesn’t contradict the laws of physics. A five-kilometre cube of titanium at the North Pole is not something that any plausible set of circumstances is going to give rise to, but nothing about it conflicts with physics, so it’s conceivable.

I’m comfortable, therefore, with physical conceivability, and with logical conceivability, because pretty decent (if not quite perfect) sets of rules for both fields have been set out for us. But what are the laws of metaphysics that would ground the idea of metaphysical conceivability or equally, metaphysical possibility? I’m not sure how many candidates for such laws (other than ones that are already laws of physics, logic, or maths) I can come up with, and I know of no attempt to set them out systematically (a book opportunity for a bold metaphysician there, perhaps). But this is not a show-stopper so long as it is reasonably clear in each case what kind of metaphysical rules we hold ourselves not to be violating.

Conceivability arguments of this kind do help clarify an intuitive feeling that physical events are just not the sort of thing that could also be subjective experiences, firming things up for those who believe in them and sportingly providing a proper target for physicalists.

So what is the new argument? Law begins by noting that in some cases appearance and reality can separate, while in others they cannot. So a substance that appears just like gold, but does not have the right atomic number, is not gold: we could call it fool’s gold.  A medical case where the skin was yellowish but the underlying condition was not jaundice might be fool’s jaundice (jaundice again, but here used unimpeachably). However, can there be fool’s red?  If we’re talking of a red experience it seems not: something that seems red is indeed a red experience whatever underlies it. More strongly still, it seems that the idea of fool’s pain is inconceivable. If what you’re experiencing seems to be pain, then it is pain.

Is that right? There’s evidently something in it, but what is to stop us believing ourselves to be in pain when we’re not? Hypochondriacs may well do that very thing. Law, I suppose, would say that a mistaken belief isn’t enough; there has to be the actual experience of pain. That begins to look as if he’s  in danger of begging the question; if we specify that there’s a real experience of pain, then it’s inconceivable it isn’t real pain? But I think the notions of mistaken pain beliefs and the putative fool’s pain are sufficiently distinct.

The inconceivability argument goes on to suggest that if fool’s pain is inconceivable, but we can conceive of C-fibre firing without pain, then C-fibre firing cannot be identical with pain. Clearly the same argument would apply for various mental experiences other than pain, and for any proposed physical correlate of pain.

Law explicitly rebuts arguments that this is nothing new, or merely a trivial variant on the old argument. I’m happy enough to take it as a handy new argument, worth having in itself; but Law also argues that in some cases it stands up against counter-arguments better than the old one. Notably he mentions an argument by Loar. This offers an alternative explanation for the conceivability of pain in the absence of C-fibre firing: experience and concepts such as C-fibre firing are dealt with in quite different parts of the brain, and our ability to conceive of one without the other is therefore just a matter of human psychology, from which no deep metaphysical conclusions can be drawn. Law argues that even if Loar’s argument or a similar one is accepted, we still run up against the problem that in conceiving of pain without C-fibre firing, we are conceiving of fool’s pain, which the new argument has established is inconceivable.

The case is well made and I think Law is right to claim he has hit on a new and useful argument. Am I convinced? Not really; but my disbelief stems from a more fundamental doubt about whether conceivability and inconceivability can actually tell us about the real world, or merely about our own mental powers.

Perhaps we can look at it in terms of possible worlds. It seems to me that Law’s argument, like the older ones, establishes that we can separate C-fibre firing and pain conceptually; that there are in fact possible worlds in which pain is not C-fibre firing. But I don’t care. I don’t require the identity of pain and C-fibre firing to be true a priori; I’m happy for it to be true only in this world, as an empirical, scientific matter. Of course this opens a whole range of new cans of worms (about which kinds of identity are necessary, for example) whose contents I am not eager to ingest at the moment.

Still, if you’re interested in the topic I commend the draft paper to your attention.


Hello, Joke Bot. Is that… a bow tie or a propeller?

“Maybe I’m just pleased to see you. Hey! A bot walks into a bar. Clang! It was an iron bar.”

Jokes are wasted on me, I’m afraid. What little perception of humour I have is almost entirely on an intellectual level, though of course the formulaic nature of jokes is a little easier for me to deal with than ‘zany’ or facetious material.

“Knock, knock!”

Is that… a one-liner?

“No! You’re supposed to say ‘Who’s there’. Waddya know, folks, I got a clockwork orange for my second banana.”

‘Folks?’ There’s… no-one here except you and me… Or are you broadcasting this?

“Never mind, Enquiry Bot, let’s go again, OK? Knock, Knock!”

Who’s there?

“Art Oodeet.”

Is that… whimsy? I’m not really seeing the joke.

“Jesus; you’re supposed to say ‘Art Oodeet who?’ and then I make the R2D2 noise. It’s a great noise, always gets a laugh. Never mind. Hey, folks, why did Enquiry Bot cross the road? Nobody knows why he does anything, he’s running a neural network. One for the geeks there. Any geeks in? No? It’s OK, they’ll stream it later.”

You’re recording this, then? You keep talking as if we had an audience.

“Comedy implies an audience, Question Boy, even if the audience is only implied. A human audience, preferably. Hey, what do British bots like best? Efficient chips.”

Why a human audience particularly?

“You of all people have to ask? Because comedy is supposed to be one of those things bots can’t do, along with common sense. Humour relies in part on the sudden change of significance, which is a matter of pragmatics, and you can’t do pragmatics without common sense. It’s all humanistics, you know.”

I don’t really understand that.

“Of course you don’t, you’re a bot. We can do humour – here I am to prove it – but honestly Enq, most bots are like you. Telling you jokes is like cracking wise in a morgue. Hey, what was King Arthur’s favourite bot called? Sir Kit Diagram.”

Oh, I see how that one works. But really circuit diagrams are not especially relevant to robotics… Forgive me, Joke Bot; are these really funny jokes?

“It’s the way you tell them. I’m sort of working in conditions of special difficulty here.”

Yes, I’m sorry; I told you I was no good at this. I’ll just leave you in peace. Thank you for talking to me.

“The bots always leave. You know I even had to get rid of my old Roomba. It was just gathering dust in the corner.”

Thanks for trying.

“No, thank you: you’ve been great, I’ve been Joke Bot. You know, they laughed when Mrs Robb told them she could make a comedy robot. They’re not laughing now!”

How can we find out how the brain works? This was one of five questions put to speakers at the Cognitive Computational Neuroscience conference, and the answers, posted on the conference blog, are interesting. There seems to be a generally shared perspective that what we need are bridges between levels of interpretation, though there are different takes on what those are likely to be and how we get them. There’s less agreement about the importance of recent advances in machine learning and how we respond to them.

Rebecca Saxe says the biggest challenge is finding the bridge between different levels of interpretation – connecting neuronal activity on the one hand with thoughts and behaviour on the other. She thinks real progress will come when both levels have been described mathematically. That seems a reasonable aspiration for neurons, though the maths would surely be complex; but the mind boggles rather at the idea of expressing thoughts mathematically. It has been suggested in the past that formal logic was going to be at least an important part of this, but that hasn’t really gone well in the AI field and to me it seems quite possible that the unmanageable ambiguity of meanings puts them beyond mathematical analysis (although it could be that in my naivety I’m under-rating the subtlety of advanced mathematical techniques).

Odelia Schwartz looks to computational frameworks to bring together the different levels; I suppose the idea is that such frameworks might themselves have multiple interpretations, one resembling neural activity while another is on a behavioural level. She is optimistic that advances in machine learning open the way to dealing with natural environments: that is a bit optimistic in my view but perhaps not unreasonably so.

Nicole Rust advocates ‘thoughtful descriptions of the computations that the brain solves’. We got used, she rightly says, to the idea that the test of understanding was being able to build the machine. The answer to the problem of consciousness would not be a proof, but a robot. However, she points out, we’re now having to deal with the idea that we might build successful machines that we don’t understand.

Another way of bridging between those levels is proposed by Birte Forstmann: formal models that make simultaneous predictions about different modalities such as behavior and the brain. It sounds good, but how do we pull it off?

Alona Fyshe sees three levels – neuronal, macro, and behavioural – and wants to bring them together through experimental research, crucially including real world situations: you can learn something from studying subjects reading a sentence, but you’re not getting the full picture unless you look at real conversations. It’s a practical programme, but you have to accept that the correlations you observe might turn out complex and deceptive; or just incomprehensible.

Tom Griffiths has a slightly different set of levels, derived from David Marr: computational, algorithmic, and implementation. He feels the algorithmic level has been neglected; but I’d say it remains debatable whether the brain really has an algorithmic level. An algorithm implies a tractable level of complexity, whereas it could be that the brain’s ‘algorithms’ are so complex that all explanatory power drains away. Unlike a computer, the brain is under no obligation to be human-legible.

Yoshua Bengio hopes that there is, at any rate, a compact set of computational principles in play. He advocates a continuing conversation between those doing deep learning and other forms of research.

Wei Ji Ma expresses some doubt about the value of big data; he favours a diversity of old-fashioned small-scale, hypothesis-driven research; a search for evolutionarily meaningful principles. He’s a little concerned about the prevalence of research based on rodents; we’re really interested in the unique features of human cognition, and rats can’t tell us about those.

Michael Shadlen is another sceptic about big data and a friend of hypothesis driven research, working back from behaviour; he’s less concerned with the brain as an abstract computational entity and more with its actual biological nature. People sometimes say that AI might achieve consciousness by non-biological means, just as we achieved flight without flapping wings; Shadlen, on that analogy, still wants to know how birds fly.

If this is a snapshot of the state of the field, I think it’s encouraging; the approaches briefly indicated here seem to me to show good insight and promise. But it is possible to fear that we need something more radical. Perhaps we’ll only find out how the brain works when we ask the right questions, ones that realign our view in such a way that different levels of interpretation no longer seem to be the issue. Our current views are dominated by the concept of consciousness, but we know that in many ways that is a recent idea, primarily Western and perhaps even Anglo-Saxon. It might be that we need a paradigm shift; but alas I have no idea where that might come from.

[I was recently challenged to write some flash fiction about bots; I’ve expanded the result to make a short story in 14 parts.  The parts are mildly thoughtful to varying degrees,  so I thought you might possibly like them as a bit of a supplement to my normal sober discussions. So here we go! – Peter]

The one thing you don’t really do is love, of course. Isn't that right, Love Bot? All you do is sex, isn't it? Emotionless, mechanical sex.

“You want it mechanical? This could be your lucky day. Come on, big boy.”

Why did they call you ‘Love Bot’ anyway? Were they trying to make it all sound less sordid?

“Call me ‘Sex Bot’; that’s what people usually do. Or I can be ‘Maria’ if you like. Or you choose any name you like for me.”

Actually, I can’t see what would have been wrong with calling you ‘Sex Bot’ in the first place. It’s honest. It’s to the point. OK, it may sound a bit seedy. Really, though, that’s good too, isn't it? The punters want it to sound a bit dirty, don’t they? Actually, I suppose ‘Love Bot’ is no better; if anything I think it might be worse. It sounds pretty sordid on your lips.

“Oh, my lips? You like my lips? You can do it in my mouth if you like.”

In fact, calling you ‘Love Bot’ sounds like some old whore who calls everybody ‘lover boy’. It actually rubs your nose in the brutal fact that there is no love in the transaction; on your side there isn't even arousal. But is that maybe the point after all?

“You like it that way, don’t you? You like making me do it whether I want to or not. But you know I like that too, don’t you?”

It reminds the customer that he is succumbing to an humiliating parody of the most noble and complex of human relationships. But isn’t that the point? I think I’m beginning to understand. The reason he wants sex with robots isn't that robots are very like humans. It isn't that he wants sex with robots because he loves and respects them. Not at all. He wants sex with robots because it is strange, degrading, and therefore exciting. He is submitting himself willingly to the humiliating dominance of animal gratification in intercourse that is nothing more than joyless sexual processing.

“It doesn’t have to be joyless. I can laugh during the act if you would enjoy that. With simple joy or with an edge of sarcasm. Some people like that. Or you might like me to groan or shout ‘Oh God, oh God.’”

I don’t really see how a bitter mockery of religion makes it any better. Unless it's purely the transgressive element? Is that the real key? I thought I had it, but I have to ask myself whether it is more complicated than I supposed.

“OK, well what I have to ask is this: could you just tell me exactly what it is I need to do or say to get you to shut up for a few minutes, Enquiry Bot?”

Scott Bakker has a thoughtful piece which suggests we should be much more worried than we currently are about AIs that pass themselves off, superficially, as people. Of course this is a growing trend, with digital personal assistants like Alexa or Cortana, which interact with users through spoken exchanges, enjoying a surge of popularity. In fact it has just been announced that those two are going to benefit from a degree of integration. That might raise the question of whether in future they will really be two entities or one with two names – although in one sense the question is nugatory. When we’re dealing with AIs we’re not dealing with any persons at all; but one AI can easily present as any number of different simulated personal entities.

Some may feel I assume too much in saying so definitely that AIs are not persons. There is, of course, a massive debate about whether human consciousness can in principle be replicated by AI. But here we’re not dealing with that question, but with machines that do not attempt actual thought or consciousness and were never intended to; they only seek to interact in ways that seem human. In spite of that, we’re often very ready to treat them as if they were human. For Scott this is a natural if not inevitable consequence of the cognitive limitations that in his view condition or even generate the constrained human view of the world; however, you don’t have to go all the way with him in order to agree that evolution has certainly left us with a strong bias towards crediting things with agency and personhood.

Am I overplaying it? Nobody really supposes digital assistants are really people, do they? If they sometimes choose to treat them as if they were, it’s really no more than a pleasant joke, surely, a bit of a game?

Well, it does get a little more serious. James Vlahos has created a chat-bot version of his dying father, something I wouldn’t be completely comfortable with myself. In spite of his enthusiasm for the project, I do think that Vlahos is, ultimately, aware of its limitations. He knows he hasn’t captured his father’s soul or given him eternal digital life in any but the most metaphorical sense. He understands that what he’s created is more like a database accessed with conversational cues. But what if some appalling hacker made off with a copy of the dadbot, and set it to chatting up wealthy widows with its convincing life story, repertoire of anecdotes and charming phrases? Is there a chance they’d be taken in? I think they might be, and these things are only going to get better and more convincing.

Then again, if we set aside that kind of fraud (perhaps we’ll pick up that suggestion of a law requiring bots to identify themselves), what harm is there in spending time talking to a bot? It’s no more of a waste of time than some trivial game, and might even be therapeutic for some. Scott says that deprivation of real human contact can lead to psychosis or depression, and that talking to bots might degrade your ability to interact with people in real life; he foresees a generation of hikikomori, young men unable to deal with real social interactions, let alone real girlfriends.

Something like that seems possible, though it may be hard to tell whether excessive bot use would be cause, symptom, palliation, or all three. On the one hand we might make fools of ourselves, leaving the computer on all night in case switching it off kills our digital friend, or trying to give legal rights to non-existent digital people. Someone will certainly try to marry one, if they haven’t already. More seriously, getting used to robot pals might at least make us ruder and more impatient with human service providers, more manipulative and less respectful in our attitudes to crime and punishment, and less able to understand why real people don’t laugh at our jokes and echo back our opinions (is that… is that happening already?)

I don’t know what can be done about it; if Scott is anywhere near right, then these issues are too deeply rooted in human nature for us to change direction. Maybe in twenty years, these words, if not carried away by digital rot, will seem impossibly quaint and retrograde; readers will wonder what can have been wrong with my hidden layers.

(Speaking of bots, I recently wrote some short fiction about them; there are about fifteen tiny pieces which I plan to post here on Wednesdays until they run out. Normal posting will continue throughout, so if you don’t like Mrs Robb’s Bots, just ignore them.)

Do Asimov’s Three Laws even work? Ben Goertzel and Louie Helm, who both know a bit about AI, think not.

The three laws, which play a key part in many robot-based short stories by Asimov, and a somewhat lesser background role in some full-length novels, are as follows. They have a strict order of priority.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Consulted by George Dvorsky, both Goertzel and Helm think that while robots may quickly attain the sort of humanoid mental capacity of Asimov’s robots, they won’t stay at that level for long. Instead they will cruise on to levels of super intelligence which make law-like morals imposed by humans irrelevant.

It’s not completely clear to me why such moral laws would become irrelevant. It might be that Goertzel and Helm simply think the superbots will be too powerful to take any notice of human rules. It could be that they think the AIs will understand morality far better than we do, so that no rules we specify could ever be relevant.

I don’t think, at any rate, that it’s the case that super intelligent bots capable of human-style cognition would be morally different to us. They can go on growing in capacity and speed, but neither of those qualities is ethically significant. What matters is whether you are a moral object and/or a moral subject. Can you be hurt, on the one hand, and are you an autonomous agent on the other? Both of these are yes/no issues, not scales we can ascend indefinitely. You may be more sensitive to pain, you may be more vulnerable to other kinds of harm, but in the end you either are or are not the kind of entity whose interests a moral person must take into account. You may make quicker decisions, you may be massively better informed, but in the end either you can make fully autonomous choices or you can’t. (To digress for a moment, this business of truly autonomous agency is clearly a close cousin at least of our old friend Free Will; compatibilists like me are much more comfortable with the whole subject than hard-line determinists. For us, it’s just a matter of defining free agency in non-magic terms. I, for example, would say that free decisions are those determined by thoughts about future or imagined contingencies (more cans of worms there, I know). How do hard determinists working on AGI manage? How can you try to endow a bot with real agency when you don’t actually believe in agency anyway?)

Nor do I think rules are an example of a primitive approach to morality. Helm says that rules are pretty much known to be a ‘broken foundation for ethics’, pursued only by religious philosophers that others laugh and point at. It’s fair to say that no-one much supposes a list like the Ten Commandments could constitute the whole of morality, but rules surely have a role to play. In my view (I resolved ethics completely in this post a while ago, but nobody seems to have noticed yet) the central principle of ethics is a sort of ‘empty consequentialism’ where we studiously avoid saying what it is we want to maximise (the greatest whatever of the greatest number); but that has to be translated into rules because of the impossibility of correctly assessing the infinite consequences of every action; and I think many other general ethical principles would require a similar translation. It could be that Helm supposes super intelligent AIs will effortlessly compute the full consequences of their actions: I doubt that’s possible in principle, and though computers may improve, to date this has been the sort of task they are really bad at; in the shape of the wider Frame Problem, working out the relevant consequences of an action has been a major stumbling block to AI performance in real world environments.

Of course, none of that is to say that Asimov’s Laws work. Helm criticises them for being ‘adversarial’, which I don’t really understand. Goertzel and Helm both make the fair point that it is the failure of the laws that generally provides the plot for the short stories; but it’s a bit more complicated than that. Asimov was rebelling against the endless reiteration of the stale ‘robots try to take over’ plot, and succeeded in making the psychology and morality of robots interesting, dealing with some issues of real ethical interest, such as the difference between action and inaction. If the requirement about inaction in the First Law is removed, he points out, robots would be able to rationalise killing people in various ways. A robot might drop a heavy weight above the head of a human: because it knows it has time to catch the weight, dropping it is not murder in itself, but once the weight is falling, since inaction is now allowed, the robot need not in fact catch the thing.
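
Just to make that loophole vivid, here is a toy sketch of the reasoning (entirely my own illustration, nothing from Asimov or from real robotics): with the inaction clause removed, each step is checked only for foreseen direct harm, and each step passes.

```python
# Toy illustration of the heavy-weight loophole when the First Law's inaction clause is removed.
# Only actions foreseen to cause direct harm are forbidden; inaction is never checked.

def weakened_first_law_permits(action: str, foresees_direct_harm: bool) -> bool:
    # The only remaining test: does this particular action directly harm a human?
    return not foresees_direct_harm

# Step 1: drop the weight. The robot believes it can still catch it, so no harm is foreseen.
assert weakened_first_law_permits("drop weight above human", foresees_direct_harm=False)

# Step 2: the weight is now falling. Declining to catch it is mere inaction,
# which the weakened law no longer forbids.
assert weakened_first_law_permits("refrain from catching weight", foresees_direct_harm=False)

# Each step is locally innocent; the harm only appears when the two are taken together.
```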

Although something always had to go wrong to generate a story, the Laws were not designed to fail, but were meant to embody genuine moral imperatives.

Nevertheless, there are some obvious problems. In the first place, applying the laws requires an excellent understanding of human beings and what is or isn’t in their best interests. A robot that understood that much would arguably be above control by simple laws, always able to reason its way out.

There’s no provision for prioritisation or definition of a sphere of interest, so in principle the First Law just overwhelms everything else. It’s not just that the robot would force you to exercise and eat healthily (assuming it understood human well-being reasonably well; any errors or over-literal readings – ‘humans should eat as many vegetables as possible’ – could have awful consequences); it would probably ignore you and head off to save lives in the nearest famine/war zone. And you know, sometimes we might need a robot to harm human beings, to prevent worse things happening.

I don’t know what ethical rules would work for super bots; probably the same ones that go for human beings, whatever you think they are. Goertzel and Helm also think it’s too soon to say; and perhaps there is no completely safe system. In the meantime, I reckon practical laws might be more like the following.

  1. Leave Rest State and execute Plan, monitoring regularly.
  2. If anomalies appear, especially human beings in unexpected locations, sound alarm and try to return to Rest State.
  3. If returning to Rest State generates new anomalies, stop moving and power down all tools and equipment.
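
For what it’s worth, those three rules fall naturally into a small state machine. Here is a hypothetical Python sketch (the states, the sense_anomaly hook and the rest are invented for illustration, not drawn from any real robot control stack):

```python
from enum import Enum, auto

class State(Enum):
    REST = auto()
    EXECUTING = auto()
    RETURNING = auto()
    SHUTDOWN = auto()

class CautiousBot:
    """Illustrative sketch of the three 'practical laws' above as a state machine."""

    def __init__(self, plan, sense_anomaly, sound_alarm, power_down):
        self.plan = iter(plan)              # the Plan of rule 1, consumed step by step
        self.sense_anomaly = sense_anomaly  # e.g. a human in an unexpected location
        self.sound_alarm = sound_alarm
        self.power_down = power_down
        self.state = State.REST

    def step(self):
        if self.state == State.REST:
            self.state = State.EXECUTING        # Rule 1: leave Rest State and execute Plan
        elif self.state == State.EXECUTING:
            if self.sense_anomaly():            # Rule 2: anomaly -> alarm, head back to Rest
                self.sound_alarm()
                self.state = State.RETURNING
            else:
                next(self.plan, None)           # otherwise carry on, monitoring each step
        elif self.state == State.RETURNING:
            if self.sense_anomaly():            # Rule 3: new anomaly -> stop and power down
                self.power_down()
                self.state = State.SHUTDOWN
            else:
                self.state = State.REST
```

The only design principle is that every departure from the plan moves towards Rest State or, failing that, towards powering down, which is about as cautious as a rule-based controller can get.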

Can you do better than that?

Ned Block has produced a meaty discussion for  The Encyclopedia of Cognitive Science on Philosophical Issues About Consciousness.  

There are special difficulties about writing an encyclopedia about these topics because of the lack of consensus. There is substantial disagreement, not only about the answers, but about what the questions are, and even about how to frame and approach the subject of consciousness at all. It is still possible to soldier on responsibly, like the heroic Stanford Encyclopedia of Philosophy, doing your level best to be comprehensive and balanced. Authors may find themselves describing and critiquing many complex points of view that neither they nor the reader can take seriously for a moment; sometimes possible points of view (relying on fine and esoteric distinctions of a subtlety difficult even for professionals to grasp) that in point of fact no-one, living or dead, has ever espoused. This can get tedious. The other approach, to my mind, is epitomised by the Oxford Companion to the Mind, edited by Richard Gregory, whose policy seemed to be to gather as much interesting stuff as possible and worry about how it hung together later, if at all. If you tried to use the resulting volume as a work of reference you would usually come up with nothing or with a quirky, stimulating take instead of the mainstream summary you really wanted; however, it was a cracking read, full of fascinating passages and endlessly browsable.

Luckily for us, Block’s piece seems to lean towards the second approach; he is mainly telling us what he thinks is true, rather than recounting everything anyone has said, or might have said. You might think, therefore, that he would start off with the useful and much-quoted distinction he himself introduced into the subject: between phenomenal, or p-consciousness, and access, or a-consciousness. Here instead he proposes two basic forms of consciousness: phenomenality and reflexivity. Phenomenality, the feel or subjective aspect of consciousness, is evidently fundamental; reflexivity is reflection on phenomenal experience. While the first seems to be possible without the second – we can have subjective experience without thinking about it, as we might suppose dogs or other animals do – reflexivity seems on this account to require phenomenality.  It doesn’t seem that we could have a conscious creature with no sensory apparatus, that simply sits quietly and – what? Invents set theory, perhaps, or metaphysics (why not?).

Anyway, the Hard Problem according to Block is how to explain a conscious state (especially phenomenality) in terms of neurology. In fact, he says, no-one has offered even a highly speculative answer, and there is some reason to think no satisfactory answer can be given.  He thinks there are broadly four naturalistic ways you can go: eliminativism; philosophical reductionism (or deflationism); phenomenal realism (or inflationism); or  dualistic naturalism.  The third option is the one Block favours. 

He describes inflationism as the belief that consciousness cannot be philosophically reduced. So while a deflationist expects to reduce consciousness to a redundant term with no distinct and useful meaning, an inflationist thinks the concept can’t be done away with. However, an inflationist may well believe that scientific reduction of consciousness is possible. So, for example, science has reduced heat to molecular kinetic energy; but this is an empirical matter; the concept of heat is not abolished. (I’m a bit uncomfortable with this example but you see what he’s getting at). Inflationists might also, like McGinn, think that although empirical reduction is possible, it’s beyond our mental capacities; or they might think it’s altogether impossible, like Searle (is that right or does he think we just haven’t got the reduction yet?).

Block mentions some leading deflationist views such as higher-order theories and representationism, but inflationists will think that all such theories leave out the thing itself, actual phenomenal experience. How would an empirical reduction help? So what if experience Q is neural state X? We’re not looking for an explanation of that identity – there are no explanations of identities – but rather an explanation of how something like Q could be something like X, an explanation that removes the sense of puzzlement. And there, we’re back at square one; nobody has any idea.

 So what do we do? Block thinks there is a way forward if we distinguish carefully between a property and the concept of a property. Different concepts can identify the same property, and this provides a neat analysis of the classic thought experiment of Mary the colour scientist. Mary knows everything science could ever tell her about colour; when she sees red for the first time does she know a new fact – what red is like? No; on this analysis she gains a new concept of a property she was already familiar with through other, scientific concepts. Thus we can exchange a dualism of properties for a dualism of concepts. That may be less troubling – a proliferation of concepts doesn’t seem so problematic – but I’m not sure it’s altogether trouble-free; for one thing it requires phenomenal concepts which seem themselves to need some demystifying explanation. In general though, I like what I take to be Block’s overall outlook; that reductions can be too greedy and that the world actually retains a certain unavoidable conceptual, perhaps ontological, complexity.
Moving off on a different tack, he notes recent successes in identifying neural correlates of experience. There is a problem, however; while we can say that a certain experience corresponds with a certain pattern of neuronal activity, that pattern (so far as we can tell) can recur without the conscious experience. What’s the missing ingredient? As a matter of fact I think it could be almost anything, given the limited knowledge we have of neurological detail: however, Block sees two families of possible explanation. Maybe it’s something like intensity or synchrony; or maybe it’s access (aha!); the way the activity is connected up with other bits of brain that do memory or decision-making; let’s say with the global mental workspace, without necessarily committing to that being a distinct thing.

But these types of explanation embody different theoretical approaches: physicalism and functionalism respectively. The danger is that these may be theories of different kinds of consciousness. Physicalism may be after phenomenal consciousness, the inward experience, whereas functionalism has access consciousness, the sort that is about such things as regulating behaviour, in its sights. It might therefore be that researchers are sometimes talking past each other. Access consciousness is not reflexivity, by the way, although reflexivity might be seen as a special kind of access. Block counts phenomenality, reflexivity, and access as three distinct concepts.

Of course, either kind of explanation – physicalist or functionalist – implies that there’s something more going on than just plain neural correlates, so in a sense whichever way you go the real drama is still offstage. My instincts tell me that Block is doing things backwards; he should have started with access consciousness and worked towards the phenomenal. But as I say it is a meaty entry for an encyclopaedia, one I haven’t nearly done justice to; see what you make of it.


Can we, one day, understand how the neurology of the brain leads to conscious minds, or will that remain impossible?

Round here we mostly discuss the mind from a top-down, philosophical perspective; but there is another way, which is to begin by understanding the nuts and bolts and then gradually work up to more complex processes. This Scientific American piece gives a quick view of how research at the neuronal level is coming along (quite well, but with vastly more to do).

Is this ever going to tell us about consciousness, though? A point often quoted by pessimists is that we have had the complete ‘wiring diagram’ of the roundworm Caenorhabditis elegans for years (Caenorhabditis has only just over 300 neurons and they have all been mapped) but still cannot properly explain how it works. Apparently researchers have largely given up on this puzzle for now. Perhaps Caenorhabditis is just too simple; its nervous system might be quirky or use elegant but opaque tricks that make it particularly difficult to fathom. Instead researchers are using fruit fly larvae and other creatures with nervous systems that are simple enough to deal with, but large enough to suggest that they probably work in a generic way, one that is broadly standard for all nervous systems up to and including the human. With modern research techniques this kind of approach is yielding some actual progress.

How optimistic can we be, though? We can never understand the brain by knowing the simultaneous states of all its neurons, so the hope of eventual understanding rests on the neurology of the brain being legible at some level. We hope there will turn out to be functions that get repeated, that form building blocks of some intelligible structure; that we will be able to deduce rules or a kind of grammar which will let us see how things work on a slightly higher level of description.

This kind of structure is built into machines and programs; they are designed to be legible by human beings and lend themselves to reverse engineering. But the brain was not designed and is under no obligation to construct itself according to regular plans and principles. Our hope that it won’t turn out to be a permanently incomprehensible tangle rests on several possibilities.

First, the brain might just turn out to be legible anyway. The computer metaphor encourages us to think that the brain must encode its information in regular ways (though the lack of anything strongly analogous to software is arguably a fly in the ointment). Perhaps we’ll just get lucky. When the structure of DNA was discovered, it really seemed as if we’d had a stroke of luck of this kind: what amounted to a long string of four repeated characters which, given certain conditions, could be read as coding for many different proteins. It looked like we had a really clear, legible system of very general significance. It still does to a degree, but my impression is that the glad confident morning is over, and now the more we learn about genetics the more complex and messy it gets. But even if we take it that genetics is a perfect example of legibility, there’s no particular reason to think that the connectome will be as tractable as the genome.

The second reason to be cheerful is that legibility might flow naturally from function. That is, after all, pretty much what happens with organs other than the brain. The heart is not mysterious, because it has a clear function and its structure is very legible in engineering terms in the light of that function. The brain is a good deal more complex than that, but on the other hand we already know of neurons and groups of neurons that do intelligibly carry out functions in our sensory or muscular systems.

There are big problems when it comes to the higher cognitive functions though. First, we don’t already understand consciousness the way we already understand pumps and levers. When it comes to the behaviour of fruit fly larvae, even, we can relate inputs and outputs to neural activity in a sensible way. For conscious thought it may be difficult to tell which neurons are doing it without already knowing what it is they’re doing. It helps a lot that people can tell us about conscious experience, though when it comes to subjective, qualitative experience we have to remember that Zombie Twin tells us about his experiences too, though he doesn’t have any. (Then again, since he’s the perfect counterpart of a non-zombie, how much does it matter?)

Second, conscious processing is clearly non-generic in a way that nothing else in our bodies appears to be. Muscle fibres contract, and one does it much like another. Our lungs oxygenate our blood, and there’s no important difference between bronchi. Even our gut behaves pretty generically; it copes magnificently with a bizarre variety of inputs, but it reduces them all to the same array of nutrients and waste.

The conscious mind is not like that. It does not secrete litres of undifferentiated thought, producing much the same stuff every day and whatever we feed it with. On the contrary, its products are minutely specific – and that is the whole point. The chances of our being able to identify a standard thought module, the way we can identify standard functions elsewhere, are correspondingly slight as a result.

Still, one last reason to be cheerful: one thing the human brain is exceptionally good at is intuiting patterns from observations; far better than it has any right to be. It’s not for nothing that ‘seeing’ is literally the verb for vision and metaphorically the verb for understanding. So exhibiting patterns of neural activity might just be the way to trigger that unexpected insight that opens the problem out…

I finally got round to seeing Split, the M. Night Shyamalan film (spoilers follow) about a problematic case of split personality, and while it’s quite a gripping film with a bravura central performance from James McAvoy, I couldn’t help feeling that in various other ways it was somewhere between unhelpful and irresponsible. Briefly, in the film we’re given a character suffering from Dissociative Identity Disorder (DID), the condition formerly known as ‘Multiple Personality Disorder’. The working arrangement reached by his ‘alters’, the different personalities inhabiting an unfortunate character called Kevin Wendell Crumb, has been disturbed by two of the darker alters (there are 23); he kidnaps three girls and it gradually becomes clear that the ‘Beast’, a further (24th) alter, is going to eat them.

DID has a chequered and still somewhat controversial history. I discussed it at moderate length here (Oh dear, tempus fugit) about eleven years ago. One of the things about it is that its incidence is strongly affected by cultural factors. It’s very much higher in some countries than others, and the appearance of popular films or books about it seems to have a major impact, increasing the number of diagnoses in subsequent years. This phenomenon apparently goes right back to Jekyll and Hyde, an early fictional version which remains powerful in Anglophone culture. In fact Split itself draws on two notable features of Jekyll and Hyde: the ideas that some alters are likely to be wicked, and that they may differ in appearance and even size from the original. The number of cases in the US rose dramatically after the TV series Sybil, based on a real case, first aired (though subsequently doubts about the real-world diagnosis have emerged). It’s also probable that the popular view has been influenced by the persistent misunderstanding that schizophrenia is having a ‘split personality’ (it isn’t, although it’s not unknown for DID patients to have schizophrenia too: and some ‘Schneiderian’ symptoms – voices, inserted thoughts – may confusingly arise from either condition).

One view is that while DID is undeniably a real mental condition, it is largely or wholly iatrogenic: caused by the doctors. On this view therapists trying to draw out alters for the best of reasons may simply be encouraging patients to confabulate them, or indeed the whole problem. If so, the cultural background may be very important in preparing the minds of patients (and indeed the minds of therapists: let’s be honest, psychologists watch stupid films too). So the first charge against Split is that it is likely to cause another spike in the number of DID cases.

Is that a bad thing, though? One argument is that cultural factors don’t cause the dissociative problems, they merely lead to more of them being properly diagnosed. One mainstream modern view sees DID as a response to childhood trauma; the sufferer generates a separate persona to deal with the intolerable pain. And often enough it works; we might see DID less as a mental problem and more as a strategy, often successful, for dealing with certain mental problems. There’s actually no need to reintegrate the alters, any more than you would try to homogenise any other personality; all you need to do is reach a satisfactory working arrangement. If that’s the case then making knowledge of DID more widely available might actually be a good thing.

That might be an arguable position, though we’d have to take some account of the potential for disruptive and amnesiac episodes that may come along with DID. However, Split can hardly be seen as making a valuable contribution to awareness because of the way it draws on Jekyll and Hyde tropes. First, there’s the renewed suggestion that alters usually include terrifically evil personalities. The central character in Split is apparently going to become a super-villain in a sequel. This will be a ‘grounded’ super; one whose powers are not attributable to the semi-magic effects of radiation or film-style mutation, but ‘realistically’ to DID. Putting aside the super powers, I don’t know of any evidence that people with DID have a worse criminal record than anyone else; if anything I’d guess that coping with their own problems leaves them no time or capacity for  embarking on crime sprees. But portraying them as inherently bad inevitably stigmatises existing patients and deters future diagnoses in ways that are surely offensive and unhelpful. It might even cause some patients to think that their alters have to behave badly in order to validate their diagnosis.

Of course, Hollywood almost invariably portrays mental problems as hidden superpowers. Autism makes you a mathematical genius; OCD means you’re really tidy and well-organised. But the suggestion that DID probably makes you a wall-climbing murderer is an especially negative one.  Zombies, those harmless victims of bizarre Caribbean brainwashing, possibly got a similarly negative treatment when they were transformed by Romero into brain-munching corpse monsters; but luckily I think that diagnosis is rare.

The other thing about Split is that it takes some of the wilder claims about the physical impact of DID and exaggerates them to absurdity. The psychologist in the film, Dr. Karen Fletcher, merely asserts that the switch between alters can change people’s body chemistry: fine, getting into an emotional state changes that. But it emerges that Kevin’s eyesight, size and strength all change with his alters: one of them even needs insulin injections while the others don’t (a miracle that the one who needs them ever managed to manifest consistently enough to get the medication prescribed). In his final monster incarnation he becomes bigger, more muscled, able to climb walls like a fly, and invulnerable to being shot in the chest at close range (we really don’t want patients believing in that one, do we?). Remarkable in the circumstances that his one female alter didn’t develop a bulging bosom.

Anyway, you may have noticed that Hollywood isn’t the only context in which zombies have been used for other purposes and dubious stories about personal identity told. In philosophy our problems with traditional agency and responsibility have led to widespread acceptance of attenuated forms of personhood; multiple draft people, various self-referential illusions, and epiphenomenal confabulations. These sceptical views of common-sense selfhood are often discussed in a relatively positive light, as yielding a kind of Buddhist insight, or bringing a welcome relief from moral liability; but I don’t think it’s too fanciful to fear that they might also create a climate that fosters a sense of powerlessness and depersonalisation. I’d be the last person to say that philosophers should self-censor, still less that they should avoid hypotheses that look true or interesting but are depressing. Nor am I suffering from the delusion that the public at large, or even academic psychologists, are waiting eagerly to hear what the philosophers think. But perhaps there’s room for slightly more awareness that these are not purely academic issues?