Over the years many variants and improvements to the Turing Test have been proposed, but surely none more unexpected than the one put forward by Andrew Smart in this piece, anticipating his forthcoming book Beyond Zero and One. He proposes that in order to be considered truly conscious, a robot must be able to take an acid trip.

He starts out by noting that computers seem to be increasing in intelligence (whatever that means), and that many people see them attaining human levels of performance by 2100 (actually quite a late date compared to the optimism of recent decades; Turing talked about 2000, after all). Some people, indeed, think we need to be concerned about whether the powerful AIs of the future will like us or behave well towards us. In my view these worries tend to blur together two different things: improving processing speeds and sophistication of programming on the one hand, and transformation from a passive data machine into a spontaneous agent on the other, quite a different matter. Be that as it may, Smart reasonably suggests we could give some thought to whether and how we should make machines conscious.
It seems to me – this may be clearer in the book – that Smart divides things up in a slightly unusual way. I’ve got used to the idea that the big division is between access and phenomenal consciousness, which I take to be the same distinction as the one defined by the terminology of Hard versus Easy Problems. In essence, we have the kind of consciousness that’s relevant to behaviour, and the kind that’s relevant to subjective experience.
Although Smart alludes to the Chalmersian zombies that demonstrate this distinction, I think he puts the line a bit lower; between the kind of AI that no-one really supposes is thinking in a human sense and the kind that has the reflective capacities that make up the Easy Problem. He seems to think that experience just goes with that (which is a perfectly viable point of view). He speaks of consciousness as being essential to creative thought, for example, which to me suggests we’re not talking about pure subjectivity.
Anyway, what about the drugs? Smart seems to think that requiring robots to be capable of an acid trip is raising the bar, because it is in these psychedelic regions that the highest, most distinctive kind of consciousness is realised. He quotes Hofmann as believing that LSD…

…allows us to become aware of the ontologically objective existence of consciousness and ourselves as part of the universe…

I think we need to be wary here of the distinction between becoming aware of the universal ontology and having the deluded feeling of awareness. We should always remember the words of Oliver Wendell Holmes Sr:

…I once inhaled a pretty full dose of ether, with the determination to put on record, at the earliest moment of regaining consciousness, the thought I should find uppermost in my mind. The mighty music of the triumphal march into nothingness reverberated through my brain, and filled me with a sense of infinite possibilities, which made me an archangel for the moment. The veil of eternity was lifted. The one great truth which underlies all human experience, and is the key to all the mysteries that philosophy has sought in vain to solve, flashed upon me in a sudden revelation. Henceforth all was clear: a few words had lifted my intelligence to the level of the knowledge of the cherubim. As my natural condition returned, I remembered my resolution; and, staggering to my desk, I wrote, in ill-shaped, straggling characters, the all-embracing truth still glimmering in my consciousness. The words were these (children may smile; the wise will ponder): “A strong smell of turpentine prevails throughout.”…

A second problem is that Smart believes (with a few caveats) that any digital realisation of consciousness will necessarily have the capacity for the equivalent of acid trips. This seems doubtful. To start with, LSD is clearly a chemical matter, and digital simulations of consciousness generally neglect the hugely complex chemistry of the brain in favour of the relatively tractable (but still unmanageably vast) network properties of the connectome. Of course it might be that a successful artificial consciousness would necessarily have to reproduce key aspects of the chemistry and hence necessarily offer scope for trips, but that seems far from certain. Think of headaches: I believe they generally arise from incidental properties of human beings – muscular tension, constriction of the sinuses, that sort of thing. I don’t believe they’re in any way essential to human cognition, and I don’t see why a robot would need them. Might not acid trips be the same, a chance by-product of details of the human body that don’t have essential functional relevance?

The worst thing, though, is that Smart seems to have overlooked the main merit of the Turing Test; it’s objective. OK, we may disagree over the quality of some chat-bot’s conversational responses, but whether it fools a majority of people is something testable, at least in principle. How would we know whether a robot was really having an acid trip? Writing a chat-bot to sound as if it were tripping seems far easier than the original test; but other than talking to it, how can we know what it’s experiencing? Yes, if we could tell it was having intense trippy experiences, we could conclude it was conscious… but alas, we can’t. That seems a fatal flaw.

Maybe we can ask tripbot whether it smells turpentine.

18 Comments

  1. John Davey says:

    “The worst thing, though, is that Smart seems to have overlooked the main merit of the Turing Test; it’s objective”

    Objective about what? The words of the conversation generated by a machine? As far as I can tell, the Turing test was not a test of consciousness and did not even claim to be a test of the existence of mental states: it was the test of a capacity of a machine to “play the imitation game” and pretend to be “intelligent” – whatever that means.

    But surely an intelligent machine wouldn’t take intoxicants nor feel the need to use them?

  2. Hunt says:

    I guess you could as easily specify that the Turing test should include the ability to go insane, as happened to the original HAL 9000, due to conflict between its mission statement and unfolding reality. I think all manner of derangements will be possible with robots. Asimov used his three laws to illustrate a few entertaining consequences just by picturing how three fixed laws could find odd stable points. Once the mechanisms of thought and consciousness are automated (I know, I’m an optimist; I guess I should include “if ever”), having machines go insane or on trips will probably only be a matter of tweaking a few dials, loosening the normally tight constraints imposed on the various thought processes. Drug trips are only “altered states of consciousness” because evolution didn’t include them in normal conscious repertoire. Wandering around tripping, thinking you can fly off cliffs, or be friends with lions, is not conducive to survival.

    So I guess my opinion is that we won’t gain much information by breaking the normal operation of robots.

    Programs can already go on trips, kind of:
    http://www.theguardian.com/technology/2015/jun/18/google-image-recognition-neural-network-androids-dream-electric-sheep

  3. Christophe Menant says:

    I agree with Peter that Smart’s arguments for getting a robot with human intelligence are questionable. But phenomenal consciousness and access consciousness may not be enough to show that. Self-consciousness should also be part of the story. The point is that phenomenal consciousness and access consciousness are performances that some animals can carry out (http://philpapers.org/rec/BLOOAC). So in order to have a robot with true human intelligence we need it to be self-conscious.
    Also, regarding the Turing Test, I feel it is quite easy to show that today’s artificial agents cannot pass it.
    The TT is about artificial agents being able to understand human questions as humans understand them. If we agree that to understand is to generate some meaning, it can then be shown that today’s AAs cannot generate meanings like we humans do (and that tomorrow’s AAs may).
    This has been presented at IACAP 2012 and published in 2013 (http://philpapers.org/rec/MENTTC-2)

  4. Sci says:

    Does academia know enough about how LSD – or whatever psychedelic of choice – actually works on human consciousness to successfully debate this in an informed manner?

    I would think not, since consciousness itself is an unsolved problem?

    (as a side note, hilarious illustration)

  5. Jochen says:

    I don’t know what to make of this argument. Seems to me the only way one could buy it was if one believed, at face value, things like Hofmann’s statement, that somehow drugs enable access to a higher consciousness or whatever. Then, being able to partake in this higher consciousness might be seen as indicative of being conscious in the ordinary way.

    But even shoving aside doubts about this (hey, maybe I just haven’t taken the right drugs!), it seems that this doesn’t really buy us any new ground. If one believes in the possibility of zombies, then there doesn’t seem to be any fundamental obstacle to zombies on drugs (indeed, at least for certain drugs, the task of achieving behavioral equivalence seems to be much simpler than for ordinary non-altered states of being…). But if one doesn’t believe in zombies, then there’s also no need for such an argument, since already normal human-like behavior would be sufficiently indicative of conscious experience.

    Like Hunt above, I also directly flashed to the images of ‘dreaming’ neural networks that recently went viral. Turns out that it only needs some relatively gross modifications in order to produce such output, which might well, in human brains, be achieved by some modifications of the overall chemical environment. Of course, this then negates any claims towards a ‘higher consciousness’ accessible to drugs; all that happens is, ultimately, that the natural ability for confabulation becomes increased.

    There’s also some older research that shows some interesting effects that can be achieved by ‘pruning’ a neural network, which apparently causes it to ‘replay’ stimuli it has been trained on, in ever more corrupted form the more its connectivity is reduced. So coming up with some trippy imagery might just be a generic phenomenon in neural networks disturbed from their equilibrium, which may have some interesting consequences for the nature of conscious content, but doesn’t really help us draw conclusions about its presence or origin.

  6. TonyK says:

    You don’t have to use a horrible word like “Chalmersian”. Imagine having to read it aloud! “Chalmers zombies” is a perfectly harmless alternative.

  7. Callan S. says:

    it was the test of a capacity of a machine to “play the imitation game” and pretend to be “intelligent” – whatever that means.

    But surely an intelligent machine wouldn’t take intoxicants nor feel the need to use them?

    But then it wouldn’t be a very good imitator, would it? 🙂

  8. Sci says:

    Interesting Goff paper sorta related to the whole question of program & robot sentience:

    Does Mary know I mean plus rather than quus? A new hard problem

    https://www.academia.edu/390182/Does_Mary_know_I_mean_plus_rather_than_quus_A_new_hard_problem

  9. Jochen says:

    Heh, the abstract of that paper contains a much more horrible word than ‘Chalmersian’—‘Kripkensteinian’. I’m no fan of ‘Kripkenstein’ in the first place—it sounds like somebody created a monster out of Saul Kripke’s body parts—but the adjectivization makes it even worse.

  10. Sci says:

    heh, yeah, I was surprised to see that word used in a few places. of course i’m still trying to figure out this whole “quus” thing itself.

  11. Jochen says:

    To the extent I understand it, it’s an example used to illustrate Wittgenstein’s rule following paradox. If you’ve up to this point only added numbers smaller than, say, 100, and are now faced with the task of adding 101 and 102, then your past behavior is consistent with infinitely many different possibilities regarding how you’ll choose to approach this task. One is, of course, ordinary addition, using the function—or rule—‘plus’: 101 + 102 = 203. But it’d be equally consistent for you to use the rule ‘quus’: a quus b is equal to a plus b for all a less than 100, but after that, say, a quus b is always 3.

    Now, of course, you can say that you’ve been using the rule ‘plus’; but this then just amounts to explaining a rule using another rule, which itself is liable to the same sort of objection. Then postulating another rule to fix that, and so on, effectively lands you right in the clutches of a vicious regress. Hence, we’re left with a worry that we can’t really say what it means to follow a rule without collapsing into circularity; but then, how are presumably rule-governed activities, like using a language, for instance, to be understood?
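    The ‘plus’/‘quus’ pair is concrete enough to sketch in a few lines of code. Below is a minimal Python illustration, using the threshold (100) and fallback value (3) from the comment above; Kripke’s original example uses 57 and 5. The function names are just for illustration:

```python
def plus(a, b):
    """Ordinary addition."""
    return a + b

def quus(a, b):
    """Kripke-style 'quus': agrees with addition whenever both
    arguments are below the threshold, and returns a fixed value
    otherwise (threshold 100, fallback 3, as in the comment above)."""
    if a < 100 and b < 100:
        return a + b
    return 3

# Every case involving only numbers smaller than 100 is consistent
# with BOTH rules...
assert all(plus(a, b) == quus(a, b)
           for a in range(100) for b in range(100))

# ...yet the two rules diverge on the very next case.
print(plus(101, 102))  # 203
print(quus(101, 102))  # 3
```

    The point of the sketch is the assertion in the middle: any finite history of additions is reproduced exactly by both functions, so that history alone cannot settle which rule was being followed.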

    (Or something along these general lines; I’m happy for one of the actual philosophers in the audience to set me straight.)

  12. Sci says:

    Thanks Jochen – So is it a problem that always exists, or is it just a (supposed?) problem for non-eliminative physicalism?

  13. Christophe Menant says:

    Getting back to the basics of the Turing Test brings up a question about the nature of life.
    The TT is about an artificial agent’s ability to imitate humans, and about Artificial Intelligence’s ability to challenge human intelligence.
    But we should keep in mind that the human entity to imitate is a living entity with a human mind. And as of today there cannot be human minds without living entities hosting them. So the question is about the nature of life as preliminary to the nature of consciousness.
    Can we expect to reach an understanding of the nature of the human mind without having an understanding of life?
    It’s a bit like wanting to understand chemistry without understanding atoms and molecules, or cosmology without the basic forces, or literature without an alphabet, or ….

  14. Sci says:

    I’d agree, Christophe. People are definitely jumping the gun in their claims about consciousness, but then there are many extraneous motivations – tenure, funding, politics…

  15. Christophe Menant says:

    Yes Sci, and the problem also comes from the fact that twentieth-century philosophy has not really considered a possible evolutionary background when looking at the nature of human consciousness. That subject has been developed in a book (1) where you can read that ‘biological considerations as such, and evolutionary ones in particular, were judged irrelevant to philosophy’.
    That lack opens the door for evolutionary investigations about the nature of human mind (http://philpapers.org/rec/MENPFA-3).
    (1) Cunningham, S. (1996) Philosophy and the Darwinian Legacy, University of Rochester press.

  16. David Duffy says:

    I always thought the traditional line is when they become lazy and deceitful.

  17. Sci says:

    @Christophe: Thanks for the paper mate, will check it out.

    @David: Heh – (Star Trek spoilers) – I always say Data sacrificing himself was poor proof that he achieved humanity. Had he left the crew to die and flown off to become an immortal sybarite I’d have been more convinced.

  18. Sci says:

    Off Topic: Horgan has an interesting critique of IIT on his Sci-Am blog:

    http://blogs.scientificamerican.com/cross-check/can-integrated-information-theory-explain-consciousness/
