The Nightmare Scenario

Microsoft recently announced the first public beta of Skype Translate, a service which provides immediate translation during voice calls. For the time being only Spanish/English is working, but we’re told that English/German and other languages are on the way. The approach used is complex. Deep Neural Networks apparently play a key role in the speech recognition. The actual translation ultimately relies on recognising bits of text which resemble those the system already knows – the same basic principle applied in existing text translators such as Google Translate – but it is also capable of recognising and removing ‘disfluencies’ – ums and ers, rephrasings, and so on – and apparently makes some use of syntactical models, so there is some highly sophisticated processing going on. It seems to do a reasonable job, though as always with this kind of thing a degree of scepticism is appropriate.
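The pipeline described above – recognise the speech, strip the disfluencies, then translate by matching fragments the system already knows – can be caricatured in a few lines. This is a toy sketch only: the function names and the tiny phrase table are invented for illustration, and the real system uses large statistical models at every stage rather than a lookup table.

```python
import re

# Toy phrase table standing in for the statistical translation model
# (entries invented for illustration).
PHRASE_TABLE = {
    "good morning": "buenos días",
    "how are you": "cómo estás",
}

# Filler words the recogniser is said to strip before translating.
DISFLUENCIES = {"um", "uh", "er", "erm"}

def clean(utterance: str) -> str:
    """Remove filler words, mimicking the 'disfluency removal' step."""
    words = [w for w in re.findall(r"[a-z']+", utterance.lower())
             if w not in DISFLUENCIES]
    return " ".join(words)

def translate(utterance: str) -> str:
    """Translate by looking up a known fragment -- no grasp of meaning."""
    text = clean(utterance)
    return PHRASE_TABLE.get(text, f"[no match: {text}]")
```

So `translate("Um, good morning")` yields "buenos días" without the code ever representing what a morning is – which is the point at issue in what follows.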

Translating actual speech, with all its messy variability, is of course an amazing achievement, much more difficult than dealing with text (which itself is no walk in the park); and it’s remarkable indeed that it can be done so well without the machine making any serious attempt to deal with the meaning of the words it translates. Perhaps that’s a bit too bald: the software does take account of context and, as I said, it removes some meaningless bits, so arguably it is not ignoring meaning totally. But full-blown intentionality is completely absent.

This fits into a recent pattern in which barriers to AI are falling to approaches which skirt or avoid consciousness as we normally understand it, and all the intractable problems that go with it. It’s not exactly the triumph of brute force, but it does owe more to processing power and less to ingenuity than we might have expected. At some point, if this continues, we’re going to have to take seriously the possibility of our having, in the not-all-that-remote future, a machine which mimics human behaviour brilliantly without our ever having solved any of the philosophical problems. Such a robot might run on something like a revival of the frames or scripts of Marvin Minsky or Roger Schank, only this time with a depth and power behind it that would make the early attempts look like working with an abacus. The AI would, at its crudest, simply be recognising situations and looking up a good response, but it would have such a gigantic library of situations and it would be so subtle at customising the details that its behaviour would be indistinguishable from that of ordinary humans for all practical purposes. What would we say about such a robot (let’s call her Sophia; why not, since anthropomorphism seems inevitable)? I can see several options.
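At its crudest, the frames-and-scripts responder imagined above is just situation recognition plus template lookup. A deliberately naive sketch (every name and the tiny script library are invented; the imagined Sophia would have millions of situations and vastly subtler matching and customisation):

```python
# A tiny 'script library': trigger words -> canned response template
# (invented for illustration; imagine millions of entries).
SCRIPTS = {
    frozenset({"hello"}): "Hello, {name}! Lovely to see you.",
    frozenset({"weather", "rain"}): "Yes, {name}, dreadful weather lately.",
    frozenset({"goodbye"}): "Goodbye, {name} -- take care.",
}

def respond(utterance: str, name: str = "friend") -> str:
    """Recognise the situation by keyword overlap, then customise
    a canned response -- behaviour without understanding."""
    words = set(utterance.lower().split())
    # Pick the script whose trigger words overlap most with the input.
    best = max(SCRIPTS, key=lambda trig: len(trig & words))
    if not best & words:
        # No situation recognised: fall back to a stock deflection.
        return f"How interesting, {name}. Tell me more."
    return SCRIPTS[best].format(name=name)
```

Scaled up enormously, lookup-and-customise of this kind is all the hypothetical Sophia would ever be doing – which is what makes the options below so uncomfortable.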

Option one. Sophia really is conscious, just like us. OK, we don’t really understand how we pulled it off, but it’s futile to argue about it when her performance provides everything we could possibly demand of consciousness and passes every test anyone can devise. We don’t argue that photographs are not depictions because they’re not executed in oil paint, so why would we argue that a consciousness created by other means is not the real thing? She achieved consciousness by a different route, and her brain doesn’t work like ours – but her mind does. In fact, it turns out we probably work more like her than we thought: all this talk of real intrinsic intentionality and magic meaningfulness turns out to be a systematic delusion; we’re really just running scripts ourselves!

Option two. Sophia is conscious, but not in the way we are. OK, the results are indistinguishable, but we just know that the methods are different, and so the process is not the same. Birds and bats both fly, but they don’t do it the same way. Sophia probably deserves the same moral rights and duties as us, though we need to be careful about that; but she could very well be a philosophical zombie who has no subjective experience. On the other hand, her mental life might have subjective qualities of its own, very different to ours but incommunicable.

Option three. She’s not conscious; we just know she isn’t, because we know how she works and we know that all her responses and behaviour come from simply picking canned sequences out of the cupboard. We’re deluding ourselves if we think otherwise. But she is the vivid image of a human being and an incredibly subtle and complex entity: she may not be that different from animals whose behaviour is largely instinctive. We cannot therefore simply treat her as a machine: she probably ought to have some kinds of rights: perhaps special robot rights. Since we can’t be absolutely certain that she does not experience real pain and other feelings in some form, and since she resembles us so much, it’s right to avoid cruelty both on the grounds of the precautionary principle and so as not to risk debasing our own moral instincts; if we got used to doling out bad treatment to robots who cried out with human voices, we might get used to doing it to flesh and blood people too.

Option four. Sophia’s just an entertaining machine, not conscious at all; and that moral stuff is rubbish. It’s perfectly OK to treat her like a slave, to turn her off when we want, or put her through terrible ‘ordeals’ if it helps or amuses us. We know that inside her head the lights are off, no-one home: we might as well worry about dolls. You talk about debasing our moral instincts, but I don’t think treating puppets like people is a great way to go, morally. You surely wouldn’t switch trolleys to save even ten Sophias if it killed one human being: follow that out to its logical conclusion.

Option five. Sophia is a ghastly parody of human life and should be destroyed immediately. I’m not saying she’s actuated by demonic possession (although Satan is pretty resourceful), but she tempts us into diabolical errors about the unique nature of the human spirit.

No doubt there are other options; for me, at any rate, being obliged to choose one is a nightmare scenario. Merry Christmas!

14 thoughts on “The Nightmare Scenario”

  1. If Sophia was a Turing Machine implemented by trained mice running through pipes, would anyone give her rights?

    The only way I see her being considered conscious by anyone but Singularity fanatics is if her creation solves the problems of consciousness and intentionality for humans. Which, if possible, takes us into the Intellectual Catastrophe territory previously mentioned.

  2. “we’re going to have to take seriously the possibility of our having, in the not-all-that remote future, a machine which mimics human behaviour brilliantly without our ever having solved any of the philosophical problems”

    And the question is, Peter, how would you know, anyway? Consciousness can only be confirmed by the self. Even if a machine tells you it’s conscious and exhibits all the characteristics, there’s no actual evidence that this is the case – heck, I have no confidence that half the people around me are self-aware 🙂

  3. I tend to agree with your prediction about “pseudo-” conscious machines. I don’t think whether they’re conscious or not will matter, particularly since cheap human substitutes will be driven by market economies, not philosophers. You are GOING to get the robot help-bot, whether you like it or not. I think for everyone with a passing familiarity with AI, it’s turning out to be a major disappointment. We were all hoping for an elegant solution to AI that would bring us “self-aware” conscious software and robots. Instead we’re going to have to make do with cheap surrogates until the secrets are finally unlocked, if ever. (Well, they’re not going to be cheap.)

    It’s kind of analogous to cancer research. We were all hoping for the elegant solution. What we got was radiation and chemotherapy.

    Yes, at some point someone is going to ask whether it’s ethical to turn these “things” off, but I suspect that stage won’t come until after you can have a meaningful, deep conversation. Imagine calling a bot on a lonely Christmas eve. It’s quite possible that the sentiment will reside wholly within ourselves, that we imbue the “person” to whom we speak. There’s a wonderful episode from the original Alfred Hitchcock TV program on Netflix starring Claude Rains as a puppet master who falls in love with a puppet character of his own creation. I think things like that will be quite possible and common. Whether it will be healthy or not, I don’t know.

    http://the.hitchcock.zone/wiki/Alfred_Hitchcock_Presents_-_And_So_Died_Riabouchinska

  4. The technical ability to converse is no evidence of the existence of consciousness. The ability to converse is an analyzable task, reproducible via algorithms, robotizable, and hence there is no necessary link between the two. Reducing conversation to an algorithm is to turn it (in effect) into a mathematical process. Do we think a calculator is conscious because it can add numbers? That is all a speaking machine is doing, after all.

    The Turing Test remains a first draft of an idea: to take it so seriously after so many years, when in reality it is a plainly stupid proposition (not as interpreted by Turing, but by the computationalists who later added ‘consciousness’), is ludicrous.

    Dogs and cats are conscious, despite their lack of elegant conversation. Consciousness has functional features – varying between species, one presumes – but it is not a functional, observer-relative idea. It is an ineffable natural phenomenon, like space or time. It is an ‘is’, not a ‘does’.

  5. John: “It [consciousness] is an ineffable natural phenomenon, like space or time. It is an ‘is’, not a ‘does’.”

    Does this mean that everything is conscious? Do you believe that consciousness is not the activity (the doings) of special kinds of brain mechanisms? If so, what is the evidence in support of this claim?

  6. Imagine plugging two chatbots (English and Spanish speakers) into both ends of Sophia; I bet we’d listen to a very interesting conversation, full of humor, sarcasm and subtle meanings, innuendos… a feast of metalanguage. If I look a word up in the dictionary, nobody questions the dictionary’s consciousness… so what is the real difference? More brute-force eloquence?

  7. The question of rights always seems to be considered from a position of absolute power on the matter.

    Let’s reverse that – suppose the AIs (mouse-powered Turing engines or otherwise) form quite a large group.

    Are the AI’s going to grant you/us rights?

    What if they declare that only they are conscious and that your wet brains aren’t capable of being conscious – that you’re all a bunch of zombies?

    It’s quite easy to laugh at that when you consider yourself in a more powerful position. But that’s just treating an argument as true based not on evidence but instead on personal power wielded.

    In the end it seems like religion vs religion – they claim they are the true people of consciousness. You claim you are the true people of consciousness. We had the same trouble dealing with the heathens on the other side of the mountain – they didn’t seem human either, at the time.

    Worse is if the AIs, instead of going all Skynet, actually get this better than we do and turn the other cheek. That will be the genuine catastrophe.

  8. @Arnold: “Does this mean that everything is conscious?”

    Well, computationalism does seem to imply panpsychism or even idealism. Computation, AFAICT, is something we apply to subsections of physical reality via our own intentionality.

    Mentioned it before but Lanier goes into this in You Can’t Argue With a Zombie:

    http://www.davidchess.com/words/poc/lanier_zombie.html

  9. Sci, thx for the ref. It never occurred to me that maybe Dennett is in fact a zombie 🙂 , which would in fact explain his position, otherwise completely absurd to me.

    Arnold,
    Do you believe that consciousness is ONLY the activity (the doings) of special kinds of brain mechanisms? If so, what is the evidence in support of this claim?

    No, I don’t believe it, although that activity, without any doubt, plays a major role in setting up consciousness. The evidence is the lack of evidence to support the contrary.

  10. “Does this mean that everything is conscious?”

    No – and I don’t see why the claim that it is an ineffable phenomenon would lead to this suggestion.

    “Do you believe that consciousness is not the activity (the doings) of special kinds of brain mechanisms?”

    Consciousness may be caused by brain mechanisms, but can’t be identified with them. They are not synonymous (in my opinion) and clearly not the same thing. It’s a bit like an atomic explosion being the result of the rapid interaction of intra-nuclear forces. However there is a difference – an atomic explosion can be modelled easily using mathematical models of intra-nuclear forces. Conscious phenomena don’t reduce to neurological models at all. That’s not a problem with consciousness: it’s a problem with human cognition. Until there’s a method for mapping the subjective to the objective (not, in my opinion, insurmountable), consciousness models will remain elusive.

    Consciousness is a phenomenon – we all know what it is, which is why we argue so much about what it does. It’s the same with other ineffable concepts like time. Nobody can ever agree on what time ‘consists’ of, but the lack of a pleasing definition of time doesn’t mean that we don’t know what we’re on about. It’s just a cognitive feature we’re born with (or stuck with, depending on your point of view).

    It shouldn’t be a surprise to learn that some things defy definition – we are biological animals with a fixed cognitive scope that is limited. We are not angels, or lumps of mathematics.

  11. Look

    http://www.sciencemag.org/content/347/6218/145

    This one is much better… the machine will take all your money playing poker.

    But anyone who has ever played poker will understand that playing with four other fellows has nothing to do with playing with four other “algorithms”, because poker is an experience at the end of the day.

    This is the point… it is not a matter of statistical data processing. And note that I say data, not information.

  12. Nice article.

    Personally, I reject the apparent premise that the Turing Test could be passed by looking up a database of appropriate responses to situations. I don’t think that any such system could exhibit the kind of dynamic, creative behaviour we would be looking for in the Turing Test and so I don’t think it would be conscious.

    In general, if a machine can consistently pass a stringent version of the Turing Test, I would be inclined to regard it as conscious no matter how it is implemented.

  13. I’d agree a rote response system isn’t anything special.

    I think such a system really needs to be able to act like us, in that it can experiment on its own (the experimentation being via certain algorithms) – then it would report part of its findings as its discussions. The whole white-room approach to these tests would really hamstring a genuine AI. Take ten humans, put them in sensory deprivation tanks for a day or two and see how many pass a Turing Test – perhaps not many. The same goes for an AI confined to a white-room test.
