Google consciousness

[Picture: Google chatbot.]

Bitbucket I was interested to see this Wired piece recently; specifically the points about how Google picks up contextual clues. I’ve heard before about how Google’s translation facilities basically use the huge database of the web: instead of applying grammatical rules or anything like that, they just find equivalents in parallel texts, or alternatives that people use when searching, and this allows them to do a surprisingly good – not perfect – job of picking up those contextual issues that are the bane of most translation software. At least, that’s my understanding of how it works. Somehow it hadn’t quite occurred to me before, but a similar approach lends itself to the construction of a pretty good kind of chatbot – one that could finally pass the Turing Test unambiguously.
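As I understand it, the core of the trick is little more than counting: given enough aligned text, you pick whichever target phrase most often sits opposite the source phrase. A toy sketch of the idea – the corpus here is invented, and the real system is of course vastly more elaborate:

```python
# A toy illustration of the "parallel text" idea: no grammar, just counting
# which target phrase most often sits opposite a given source phrase in an
# (invented) corpus of aligned phrase pairs.
from collections import Counter, defaultdict

# Hypothetical aligned corpus: (French phrase, English phrase) pairs.
parallel_corpus = [
    ("chat", "cat"), ("chat", "cat"), ("chat", "chat room"),
    ("banc", "bench"), ("banc", "bank"), ("banc", "bench"),
]

phrase_table = defaultdict(Counter)
for src, tgt in parallel_corpus:
    phrase_table[src][tgt] += 1

def translate(phrase):
    """Return the target phrase seen most often opposite the source phrase."""
    candidates = phrase_table.get(phrase)
    return candidates.most_common(1)[0][0] if candidates else phrase

print(translate("banc"))  # -> 'bench', purely because the corpus says so
```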

Blandula Ah, the oft-promised passing of the Turing Test. Wake me up when it happens – we’ve been round this course so many times in the past.

Bitbucket Strangely enough, this does remind me of one of the things we used to argue about a lot in the past.  You’ve always wanted to argue that computers couldn’t match human performance in certain respects in principle. As a last resort, I tried to get you to admit that in principle we could get a computer to hold a conversation with human-level responses just by the brutest of brute force solutions.  You just can a perfect response for every possible sentence. When you get that sentence as input, you send the canned response as output. The longest sentence ever spoken is not infinitely long, and the number of sentences of any finite length is finite; so in principle we can do it.

Blandula I remember: what you could never grasp was that the meaning of a sentence depends on the context, so you can’t devise a perfect response for every sentence without knowing what conversation it was part of. What would the canned response be to ‘What do you mean?’ – to take just one simple example.

Bitbucket What you could never grasp was that in principle we can build in the context, too. Instead of just taking one sentence, we can have a canned response to sets of the last ten sentences if we like – or the last hundred sentences, or whatever it takes. Of course the resources required get absurd, but we’re talking about the principle, so we can assume whatever resources we want.  The point I wanted to make is that by using the contents of the Internet and search enquiries, Google could implement a real-world brute-force solution of broadly this kind.
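If it helps, here is the brute-force principle reduced to a toy: a lookup keyed on the last few utterances rather than on the last sentence alone. The table and window size below are invented, and a real table would be astronomically large, but the shape of the thing is just this:

```python
# A toy sketch of the brute-force chatbot: canned replies keyed on the last N
# utterances rather than on the last sentence alone. Everything here is made
# up; the point is only that context can, in principle, be folded into the key.
N = 3  # context window: the last three utterances

# Hypothetical table of canned responses, keyed by a tuple of recent utterances.
canned = {
    ("Nice weather today.",): "Lovely, isn't it?",
    ("Nice weather today.", "Lovely, isn't it?", "What do you mean?"):
        "I mean the weather - it's been sunny all week.",
}

def reply(history):
    """Look up a canned response for the last N utterances of the conversation."""
    key = tuple(history[-N:])
    return canned.get(key, "Hmm, tell me more.")  # fallback where the table has a gap

history = ["Nice weather today.", "Lovely, isn't it?", "What do you mean?"]
print(reply(history))  # -> "I mean the weather - it's been sunny all week."
```

Which also answers your ‘What do you mean?’ example, incidentally – the context key disambiguates it.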

Blandula I don’t think the Internet actually contains every set of a hundred sentences ever spoken during the history of the Universe.

Bitbucket No, granted; but it’s pretty good, and it’s growing rapidly, and it’s skewed towards the kind of thing that people actually say. I grant you that in practice there will always be unusual contextual clues that the Google chatbot won’t pick up, or will mishandle. But don’t forget that human beings miss the point sometimes, too.  It seems to me a realistic aspiration that the level of errors could fairly quickly be pushed down to human levels based on Internet content.

Blandula It would of course tell us nothing whatever about consciousness or the human mind; it would just be a trick. And a damaging one. If Google could fake human conversation, many people would ascribe consciousness to it, however unjustifiably. You know that quite poor, unsophisticated chatbots have been treated by naive users as serious conversational partners ever since Eliza, the grandmother of them all. The internet connection makes it worse, because a surprising number of people seem to think that the Internet itself might one day accidentally attain consciousness. A mad idea: so all those people working on AI get nowhere, but some piece of kit which is carefully designed to do something quite different just accidentally hits on the solution? It’s as though Jethro Tull had been working on his machine and concluded it would never be a practical seed-drill; but then realised he had inadvertently built a viable flying machine. Not going to happen. The thing is, believing some machine is a person when it isn’t is not a trivial matter, because you then naturally start to think of people as being no more than machines. It starts to seem natural to close people down when they cease to be useful, and to work them like slaves while they’re operative. I’m well aware that a trend in this direction is already established, but a successful chatbot would make things much, much worse.

Bitbucket Well, that’s a nice exposition of the paranoia which lies behind so many of your attitudes. Look, you can talk to automated answering services as it is: nobody gets het up about it, or starts to lose their concept of humanity.

Of course you’re right that a Google chatbot in itself is not conscious. But isn’t it a good step forward? You know that in the brain there are several areas that deal with speech; Broca’s area seems to put coherent sentences together while Wernicke’s area provides the right words and sense. People whose Wernicke’s area has been destroyed, but who still have a sound Broca’s area, apparently talk fluently and sort of convincingly, but without ever really making sense in terms of the world around them. I would claim that a working Google chatbot is in essence a Broca’s area for a future conscious AI. That’s all I’ll claim, just for the moment.

30 thoughts on “Google consciousness”

  1. Hey Peter,

    Great post. I’ve got a random question for you though: if we assume that important things are bilaterally redundant in the brain, then what do you think corresponds to the sense-making Wernicke’s area in the right temporal cortex? It is curious that these two areas are connected by the anterior commissure. In split-brain patients, the right hemisphere is capable of synthetic decision making and language comprehension, while the right temporal cortex has been implicated in contextual understanding. Julian Jaynes hypothesized that the corresponding right Wernicke’s area was the narrative-driven “language of the gods” complex for the bicameral mind, which utilized synthetic decision making and language-coded commands to steer the “dominant” hemisphere in novel situations through the anterior commissure. This seems to accord with Michael Persinger’s research on inducing divine hallucinations through stimulation of the right temporal cortex with his “god helmet”.

    Off topic, but I am curious what you think.

  2. Interesting question, Gary. My neurology is not that great, but I think the assumption of bilateral symmetry doesn’t hold at all for language abilities, so the accurate but dull answer would probably be that comparing the hemispheres like that doesn’t really tell us anything.

    However, it is interesting to speculate, and sometimes there does seem to be an evolutionary story about how regions of the brain came to have the functions they do – I believe comparisons with other primates, for example, suggest that some of our linguistic abilities squeezed out space devoted to the sense of smell; while the vocal areas of chimps are somewhere else altogether, close to a region associated with the stomach, and one which in human beings governs only involuntary vocalisations and, curiously, swearing. (If someone out there can see I’ve got this all wrong, please correct me.)

    That sort of thing does seem to me to leave open at least a possibility of something along the broadly Jaynesian lines you mention. I’m a bit wary of attempts to explain divine hallucinations this way because there is a temptation, if you’re an atheist, to over-simplify a bit in order to explain religious impulses away; but still.

    (A waffly answer – sorry)

  3. Interesting question. Like many other technological advances, I see the internet, with all its components, as an extension of our own faculties or senses. In this sense, for me, a Google chatbot would be more an extension of our own Broca’s area, just as all the information stored on internet servers is an extension of our memory capacity, and numerical calculation is an extension of our computing capacity.

    I don’t think it is the beginning of a stand-alone conscious entity; it is rather a means to create a grid system that overconnects our own self-consciousness by increasing our communication and information-management capacities. It is like an artificial nervous system connecting our nervous systems, plus additional memory storage facilities. But it will be based on our conscious nature.

    It might eventually lead to a sort of global consciousness effect as a result of adding the contributions of all internet users.

  4. I guess that with the increasing complexity of the software models of these chat-bots, there will be a point when two chat-bots will emulate some sort of meaningful conversation (and I wonder whether these conversations would be identical if the two chat-bots started from the same starting point).
    The other question coming to mind is whether Google will ever come close to the “I, Robot” scenario where the software automatically designs better hardware running smarter software designing better hardware… and so on, until one day people realize that the “ghost in the machine” is real.
    The main problem they still need to solve is the “chicken or egg”.

  5. It seems to me that there is a profound contradiction inherent in the Turing test that absolutely excludes consciousness. Namely, any machine which always came up with an appropriate answer to every human interrogative would have to be a deterministic device, whereas human brains are perfectly comfortable with not knowing. If an interrogator says “I am lying” and the machine replies “How could I possibly tell?”, this would be a perfectly appropriate answer. But such an answer from a human would be an actual admission of a failure of conscious effort, whereas the machine’s supposed admission of failure was actually a mechanistic success in that it simply located a seemingly correct answer. I think Turing came to understand this and ultimately came to rue his proposal of the test that bears his name.

  6. Michael:

    “…But such an answer from a human would be an actual admission of a failure of conscious effort…”

    Could you please explain a bit why it is a problem?

  7. Michael: Why do you assume the machine would not have access to the full range of human instincts, intuitions, premonitions, estimates, etc.?

  8. Many humans will fail the Turing test. Mis-identifying a human as a digital, algorithm-driven machine is more common than you think. Talk to a three-year-old kid over a computer and see what you think, assuming the kid can type. Then try a four-year-old, and then move up the age ladder. Then try a high-IQ person, then try someone else, moving down the IQ scale. Try an autistic person, then try some “genius” type who likes to talk funny, and then try some who are institutionalized because of mental issues… and eventually try the entire human spectrum.

    It is the huge variation of the human capacity distribution that dooms the Turing test as a meaningful test. A “normal” human is in fact not a well-defined concept. Then, how can we expect to be able to differentiate a machine from a “normal” human being?

  9. Michael, the Turing test does not require a machine to exhibit any behavior that a human does not exhibit, as you seem to imply. It does not require a machine to “come up with an appropriate answer to every human interrogative”, if by this you mean to demonstrate some “logical behavior” that humans do not exhibit. In fact, if a machine did this it would very likely fail the Turing test: the test does not require a machine to make “good”, “consistent” or “logical” answers to questions. It merely requires a machine to pass itself off as a human, which means that if humans typically demonstrate “happiness in not knowing” in such a test, then machines should be expected to do that too in order to pass it – regardless of any debate about the internal states of such machines. I happen to have issues with the Turing test, but this one doesn’t work.

  10. It doesn’t seem like the judge is given quite enough attention in most discussions of the Turing test. Isn’t the whole process just a fancy set up to test the judge’s intuition and theory of mind? And isn’t it even fairly limited in its ability to test that?

    I know that the story goes that the computer would have to fool multiple judges at some rate that is better than chance, but isn’t it likely that the judges will vary enough in their methods that taking the average result might not be all that meaningful?

    Isn’t it important to look at exactly how the judge makes their decision? Isn’t it based on whether or not the computer talks like someone they know or someone they could easily imagine? And it’s not even really a matter of how the computer “talks”, because the details of the test limit the conversation to text – which, as we know, is tremendously different from face-to-face verbal conversation.

    I think the Turing test is a great game for chatbot programmers to play, and it will probably be helpful in the development of future generations of natural-language-based user interfaces. I think its validity as a test of consciousness, or anything like consciousness, is not convincing at all.

    Think about what would happen if you set someone up in a Turing test situation in the role of the judge, but didn’t tell them that it was a Turing test and that one of the people they think they are talking to is actually a computer. It seems like under these conditions the test would be much easier to pass, so long as the criterion is that the judge does not come to suspect that they are talking to a computer, without being given the notion that there is a computer involved in the first place. Does this situation differ in an important way from the real test? The difference is between a judge who can find the non-human when he knows to look for one and a judge who can detect a non-human when he is not expecting to. It seems almost certain that the results would differ in these two tests, yet it does not seem that one form makes a conclusively better “consciousness”-detecting system. When the judge suspects the possibility of a computer, he tries tricks that he thinks will fool computers based on the way that he believes computers work. Do you think old ladies who have never owned computers would think of these same tricks off the tops of their heads? Are they not able to judge consciousness because they aren’t familiar enough with computers to have an intuition about what would trick them? Surely someone who has spent some time thinking about AI and the Turing test would have better trick questions than someone who has never given the matter any thought whatsoever?

    Think about how this test might play out between people with different formative characteristics like age or culture. Doesn’t something as simple as the nationality of the person who programmed the chatbot (or the nationality of the population from which the chatbot learned to talk) strongly affect what judges of different nationalities will think? For example, an old woodsman from Newfoundland, Canada judging a chatbot from New Zealand?

    There will be different results in all of these cases. The results do not reflect the true ability of any of the judges to tell a person from a machine. They reflect the person’s ability to play one specific text-based game. Consciousness is not at the heart of the matter.

  11. “It is the huge variation of the human capacity distribution that dooms the Turing test as a meaningful test. A “normal” human is in fact not a well-defined concept. Then, how can we expect to be able to differentiate a machine from a “normal” human being?”

    You can’t. Well, you can: artificial intelligence will seemingly become more plausible as we become more artificially intelligent. Or, a little differently, the degree to which we can create good robotics increases as we become more robotic. We have shifting goal posts via feedback. An abstract definition of AI feeds a physical technology, which feeds our abstract definition.

    For instance, people think more computationally as they interact with computational objects. This can lead to the view of a purely computational universe. But computers are just physical representations of an abstraction, an abstraction of “mind”. The mind was around as a “computer” before the computer was around as a “mind”. Nevertheless, computer use has a profound effect on the mind. A mind thus engaged is resolving to a local maximum, from which there is little hope of escape, unless something irrational (i.e. non-computational from the viewpoint of the mind in question) intercedes.

    When a local maximum is reached, you can create something that will pass a Turing test. For the tester will be looking in a mirror.

  12. The problem with the Turing test as a “real” test is simply that it is too vague to be of any practical use. Suppose the machine has to fool someone. Whom does it have to fool? What if we get a stupid judge? How long must it maintain the act? Does the conversation last for three minutes or three years? I think, however, that the concept of the test is useful. It should be viewed as involving a philosophical hypothesis – that whatever your standards for determining that human-type minds/intelligence/etc. exist, those standards can be based solely on behavior. The Turing test is saying, “Do not worry about what is inside the box. If you see a pattern of inputs and outputs that suggests a mind, this is enough”. The fact that the criteria for deciding whether the pattern of inputs and outputs indicates a mind are left vague does not negate this point.

  13. A candidate Turing test which does not evolve is no Turing test, these days.
    Evolve? Copying with variation plus performance based feedback (combining and pruning).
    Of course, what’s the practical use of machines fooling humans?
    Same as humans fooling humans, only more so.
    From the machine’s point of view (once “it has” one), using (fooling, etc.) humans is the ultimate (only?) game in town, prior to independent space-colonization.

    An “ultimate” Turing hyper-test is already underway,
    and the winner (if any) will leave “us” behind:
    (eventually by “extinction”, meanwhile as pets?).

  14. Actually there are crucial problems in parsing even the “syntax” of a given sentence, because human languages can be very ambiguous and break the rules, and the listener can still get the meaning. Then after this first layer we have the secondary layer of word-sense mappings, which can also be very ambiguous. Of course, a lot of these problems can be solved through statistics, but still we can’t achieve 100% accuracy. What you are talking about is the third layer of pragmatics – the meaning which depends on the context. One cannot solve this well without solving the earlier two problems.
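    For instance, the word-sense layer is usually attacked statistically, by counting which sense of a word tends to co-occur with which context words. A toy version of that idea (all the words and counts here are invented) might look like this:

    ```python
    # Toy word-sense disambiguation by co-occurrence counts (figures invented).
    # Real systems use far richer statistics, and still fall short of 100% accuracy.
    sense_counts = {
        "bank": {
            "river_bank": {"water": 30, "fish": 12, "money": 1},
            "money_bank": {"money": 40, "loan": 25, "water": 2},
        }
    }

    def best_sense(word, context_words):
        """Pick the sense whose co-occurrence counts best match the context."""
        scores = {
            sense: sum(counts.get(w, 0) for w in context_words)
            for sense, counts in sense_counts[word].items()
        }
        return max(scores, key=scores.get)

    print(best_sense("bank", ["water", "fish"]))  # -> 'river_bank'
    print(best_sense("bank", ["loan", "money"]))  # -> 'money_bank'
    ```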

  15. Two Turing test issues:
    Is the test applicable here?
    Is consciousness not really addressed by the TT at all?

    I believe a basic issue in the TT spec was that the machine would (be programmed to) try to fool the interrogator. An Internet chat bot could certainly include such a directive, but I do not see that as a typical part of the goal structure for a typical Internet information provider that uses a chat mode. If the machine was simply trying to be helpful, does it really make sense to ask if the TT applies?

    As far as I know, Turing did not consider the issue of whether the machine trying to pass the test would be a conscious entity. If one were to try to interrogate a conscious entity, and that entity was not trying to fool the interrogator, it seems to me that a few questions about the entity’s conscious experience should quickly make it quite clear whether the entity was human or machine.

  16. I suppose there’s a deeper question here — if you believe in zombies. If you believe, as I do, that zombies are not possible, then it should not be possible to program a machine that would be capable of pretending to be conscious.

  17. The Turing test and zombie arguments are useful because they prove the possibility that one could build a machine that simulates real-time consciousness, so that it appears to be conscious to an unknowing third party. These entities may simulate neural timing, outward movement and language, and as thought experiments they demonstrate our inability to understand what happens beneath the outer level of neural firing and signal transmission within the biological entities. We can say they bring us to the wall or barrier which we need to pass to understand biological consciousness.

  18. Lloyd Rice, I am inclined to agree that zombies are not possible. Nevertheless, I have a question and would be interested in your answer (or that of anyone else for that matter).

    Suppose I pretend to be someone else. I may pretend to be a character in a movie, someone I make up, etc. I allow you to interact with me, but I act just like the pretend person. Let’s call this pretend person “Fred” – so I am pretending to be Fred.

    Now, if some AI program could pretend to be Fred well enough, I have little doubt that you would say that Fred’s consciousness was real. When I pretend to be Fred, a computing system (my brain) is producing Fred’s behavior as well. The fact that that system happens to contain a conscious entity who has deliberately chosen to simulate Fred should be irrelevant – because to count it as relevant would imply that some programs – ones describing conscious beings who have decided to “run” other beings – can produce zombies.

    So this is the question: If I pretend to be Fred, is Fred’s consciousness real? For example, when an author imagines conversations between characters, and writes them down, are these people real?

    (As another hypothetical, I have this: Suppose I have decided to spend the next week pretending to be Fred. Halfway through the week, someone scans my brain, copies it into a machine and runs an “uploaded” version of me. Now, a computer program is running that simulates my brain pretending to be Fred. The behavior of the system is essentially that of Fred, but you would need to look deep into the program to realize that I was in there, and that the Fred behavior was due to my decision to produce it. This should illustrate what I meant about how declaring Fred unreal seems to be accepting the existence of zombies.)

  19. Paul: “If I pretend to be Fred, is Fred’s consciousness real? For example, when an author imagines conversations between characters, and writes them down, are these people real?”

    Of course not.

    To begin with, Fred would be spontaneous, while at each step you would probably have to think how Fred would do something in order to imitate it.

    Second, to what extent can you really pretend to be Fred? Very little.

    “someone scans my brain, copies it into a machine and runs an “uploaded” version of me”

    Supposing that could happen, then you would have you playing Fred’s role, as it were.

    What makes you think that to pretend something is to be something?

  20. My claim that consciousness cannot be “faked” has no philosophical basis. It rests entirely on the idea that there are a few well-chosen questions that can be used to elicit details of the nature of a person’s consciousness (see long discussions in other pages of this blog). A zombie would have no basis from which to “fake” answers to those questions — in spite of philosophically-based claims by Chalmers, etc., as cited in #18 above.

  21. Vicente, when did I actually claim that “pretending something is to be something”? I was asking about this as a thought experiment, rather than making a claim. What I would point out, however, is that “pretending to be something” is merely a special case of computation that produces a certain type of behavior.

    Now, you could say that the behavior of “Fred” does not correspond to a real Fred because the computation needed to produce it is classed as “pretending”, but that would seem to risk implying that Fred is a zombie. You would seem to be saying that Fred is a zombie because the wrong mechanisms are being used to produce him. That is all that this question was about – whether or not we reject the existence of certain conscious minds, when we know that their apparent existence is merely caused by certain types of computation that involve another conscious mind.

  22. Paul, let me apologise; there is a misunderstanding. That is what I meant: not why you make the claim, but what makes you entertain the doubt.

    Even to say that a certain behaviour is the result of a computation can be questioned, and to say that consciousness is the result of a computation is definitely a matter of opinion. Lloyd will probably support that idea; I don’t.

    What I am saying is that Fred is nothing but an object of your imagination. You could imagine that Fred is a “Blade Runner”-like humanoid – would that make a change? Fred is not a zombie either.

    Think of this: Could Fred (the one in your mind) have his own imagination? his own feelings and emotions? his own “will”? No, its existence completely depends on you. Fred is just an object in your mind, like a flowerpot could be, for example.

    So the point is that even accepting the validity of your thought experiment, Fred is not executing any computations on his own.

  23. Perhaps in the case of multiple-personality disorders an argument could be made that each personality has its own consciousness, reusing the same brain mechanisms in alternate modes. But this would not be the same as imagining being someone else.

  24. “So the point is that even accepting the validity of your thought experiment, Fred is not executing any computations on his own.”

    Isn’t this the point? That there are no computers in nature at all? Nothing that can be referred to as a ‘phenomenological, intrinsic computer’, as all computing is just observer-relative, based upon rules relating arbitrary physical markers to syntactical symbols?

  25. John, sorry, I don’t fully understand your comment, but if you are pointing out that there is a logical flaw in the statement, or that it is inconsistent with previous comments, you are probably right. I just wanted to say that Fred definitely does not have the same attributes as the owner of the brain in which Fred is imagined.

    “all computing is just observer-relative, based upon rules relating arbitrary physical markers to syntactical symbols”

    I completely agree with you. I am not sure if I would say that “syntactical symbols” is the right term, but there has to be some kind of “meaning substrate”. I would say: “all computing is just observer-relative, based upon rules relating arbitrary physical markers to meaningful symbols”. In my opinion, “meaning” is a key concept for distinguishing between “conscious entities” and “conscious-like entities”: AI systems, or humans acting subconsciously.

  26. Vicente

    I was agreeing with you. Computation as a mathematical theory is, as I understand it, effectively a theory of the relationship of symbol sets and is entirely syntactical. It is just not possible to create semantics from syntax, so it is impossible to create meaning from computation. Anybody who has programmed a computer would know that the meaning or intention of a program cannot be discerned from looking at the code (source or binary, comments excluded).

    The meaning can only be discerned by consuming the output of the program, which will be in a physical form for full comprehension by a human. The analogy is with electric pulses down telephone lines. They may be meaningless in themselves, but they enable communication. That is what computer programs are: a form of communication, like telephones. In fact, of course, most telephone communication is done via computer programs these days, so the analogy is even more apt.

  27. John, I believe you have raised an issue I find crucial. I agree that you cannot create semantics from syntax, but can you have semantics without any syntax?

    Can you have reasoning and understanding without language? Is it possible to have a pure intuition based meaningful conscious experience, or some language structure is always needed to support the meaning?

    When I think about this I usually get into a deadlock loop. I believe sometimes it is possible to intuitively approach a mental scene; I think it is sometimes referred to as an innate grasp of understanding.

  28. It seems to me that humans tend to give canned responses too.

    Even more so in the modern politically-correct, watch-your-mouth, society. Most of what people say is the de facto response expected of them in any given situation.

    I can’t even count the number of times I’ve heard friends and family toss out ideas of how they would spend their money if they won the lottery. It’s almost always the same from everyone: take care of my kids, buy my mother a car, buy a house, take a year off work and travel, then come back and set up a small business for retirement.

    It’s at the point where we don’t even need to listen to each other because the entire script can be seen from the outset.

  29. This blog entry, to me personally, is a classic in philosophy. A year or so ago I developed a concept called ‘Google Consciousness’, and wanted to see what other chatter on the net approached this topic – I read this article a few days later and it sort of set the bar for me.

    My partner and I presented ‘Google Consciousness’ at TEDx in Cardiff this year, and apparently the talk has tapped into something – something I think perhaps you could shed more light on in certain areas than I. It has gone viral (as somewhat predicted in the talk) and has remained the number one most popular talk over on TED, with over 130k views in a month and counting.

    In spite of all this, I think you will probably hate it 🙂 but wanted to thank you for your wonderful work and invite you to watch it none the less.

    http://www.tedxtalks.ted.com/video/TEDxCARDIFF-Google-Consciousnes;Most-popular
