The Guardian had a piece recently which was partly a profile of Ray Kurzweil, and partly about the way Google seems to have gone on a buying spree, snapping up experts on machine learning and robotics – with Kurzweil himself made Director of Engineering.

The problem with Ray Kurzweil is that he is two people. There is Ray Kurzweil the competent and genuinely gifted innovator, a man we hear little from; and then there’s Ray Kurzweil the motor-mouth, prophet of the Singularity, aspirant immortal, and gushing fountain of optimistic predictions. The Guardian piece praises his record of prediction, rather oddly quoting in support his prediction that by the year 2000 paraplegics would be walking with robotic leg prostheses – something that in 2014 has still not happened. That perhaps does provide a clue to the Kurzweil method: if you issue thousands of moderately plausible predictions, some will pay off. A doubtless-apocryphal story has it that at AI conferences people play the Game of Kurzweil. Players take turns to offer a Kurzweilian prediction (by 2020 there will be a restaurant where sensors sniff your breath and the ideal meal is prepared without your needing to order; by 2050 doctors will routinely use special machines to selectively disable traumatic memories in victims of post-traumatic stress disorder; by 2039 everyone will have an Interlocutor, a software agent that answers the phone for us, manages our investments, and arranges dates for us… we could do this all day, and Kurzweil probably does). The winner is the first person to sneak in a prediction of something that has in fact happened already.

But beneath the froth is a sharp and original mind which it would be all too easy to underestimate. Why did Google want him? The Guardian frames the shopping spree as being about bringing together the best experts and the colossal data resources to which Google has access. A plausible guess would be that Google wants to improve its core product dramatically. At the moment Google answers questions by trying to provide a page from the web where some human being has already given the answer; perhaps the new goal is technology that understands the question so well that it can put together its own answer, gathering and shaping selected resources in very much the way a human researcher working on a bespoke project might do.

But perhaps it goes a little further: perhaps they hope to produce something that will interact with humans in a human-like way. A piece of software like that might well be taken to have passed the Turing test, which in turn might be taken to show that it was, to all intents and purposes, a conscious entity. Of course, if it wasn’t conscious, that might be a disastrous outcome: the nightmare scenario feared by some, in which our mistake leads us to nonsensically award the software human rights, and/or to feel happier about denying them to human beings.

It’s not very likely that the hypothetical software (and we must remember that this is the merest speculation) would have even the most minimal forms of consciousness. We might take the analogy of Google Translate: a hugely successful piece of kit, but one that produces its translations with no actual understanding of the texts or even the languages involved. Although highly sophisticated, it is in essence a ‘brute force’ solution; what makes it work is the massive power behind it and the massive corpus of texts it has access to. It seems quite possible that with enough resources we might now be able to produce a credible brute force winner of the Turing test: no attempt to fathom the meanings or to introduce counterparts of human thought, just a massive repertoire of canned responses, so vast that it gives the impression of fully human-style interaction. Could it be that Google is assembling a team to carry out such a project?
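The ‘canned responses’ idea is easy to make concrete. Here is a minimal sketch (the prompts, replies and similarity measure are all invented for illustration): the bot understands nothing, it simply returns the stored reply whose prompt shares the most words with the input. Scale the table up by a few billion rows and you have the kind of brute-force contender described above.

```python
import string

def _tokens(s):
    """Lowercase a string, strip punctuation, and split into a set of words."""
    return set(s.lower().translate(str.maketrans("", "", string.punctuation)).split())

def jaccard(a, b):
    """Crude token-overlap similarity between two strings (0.0 to 1.0)."""
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# A (tiny) repertoire of canned prompt -> reply pairs.
CANNED = {
    "hello how are you": "Fine, thanks. How are you?",
    "what is your name": "People call me all sorts of things.",
    "do you like music": "I prefer the sound of a quiet server room.",
}

def reply(utterance):
    """No understanding: just return the reply whose prompt best matches."""
    best = max(CANNED, key=lambda prompt: jaccard(prompt, utterance))
    return CANNED[best]
```

With only three entries the illusion collapses instantly, of course; the brute-force wager is that with a vast enough table the seams stop showing.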

Well, it could be. However, it could also be that cracking true thought is actually on the menu. Vaughan Bell suggests that the folks recruited by Google are honest machine learning types with no ambitions in the direction of strong AI. Yet, he points out, there are also names associated with the trendy topic of deep learning. The neural networks (but y’know, deeper) which deep learning uses just might be candidates for modelling human neuron-style cognition. Unfortunately it seems quite possible that if consciousness were created by deep learning methods, nobody would be completely sure how it worked or whether it was real consciousness or not. That would be a lamentable outcome: it’s bad enough to have robots that naive users think are people; having robots and genuinely not knowing whether they’re people or not would be deeply problematic.

Probably nothing like that will happen: maybe nothing will happen. The Guardian piece suggests Kurzweil is a bit of an outsider: I don’t know about that.  Making extravagantly optimistic predictions while only actually delivering much more modest incremental gains? He sounds like the personification of the AI business over the years.


  1. DiscoveredJoys says:

    But what if human neuron-style cognition is itself ‘merely’ ‘brute force’ – but each individual ‘brute force’ consciousness trained by existing ‘brute force’ consciousnesses?

    Thought experiment: 150 Google Translate instances, with accuracy ‘rewarded’ and the most accurate instances propagated.
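That selection loop can be sketched in a few lines. This is a toy stand-in: the ‘instances’ are just vectors and the ‘accuracy’ a made-up fitness function, nothing to do with the real Google Translate.

```python
import random

TARGET = [0.2, 0.8, 0.5]   # stand-in for whatever "perfect accuracy" would mean

def fitness(instance):
    """Higher is better: negative squared error against the target."""
    return -sum((a - b) ** 2 for a, b in zip(instance, TARGET))

def mutate(instance, noise=0.05):
    """Copy an instance with small random perturbations."""
    return [x + random.gauss(0, noise) for x in instance]

def evolve(pop_size=150, survivors=30, generations=200):
    """Reward accuracy, propagate the most accurate instances."""
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[:survivors]                  # keep the best
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - survivors)]
    return max(population, key=fitness)
```

The interesting question the thought experiment raises is what happens when the thing being ‘rewarded’ is not a three-number vector but a full translation system.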

  2. Philosopher Eric says:

    Why would Google, a company with some of the most forward-thinking engineers in the world, hire Ray Kurzweil? Was it for improved direction, as they claim? I think not. Google knows that Kurzweil gives them something that their talented engineers generally do not: exposure — as this Guardian article obviously demonstrates. He is a man who essentially made the same coup d’état that Christianity made over Judaism two thousand years ago — namely, “If you help support this vision, you may indeed become immortal.” But it isn’t just the idea that our consciousness will soon be downloaded to computers for eternity that puts him “way out there.” A friend implored me to look him up a couple of years back, but I could not get past the man’s prediction that the entirety of our planet would be consumed as valuable substrate for our massive computer/self structure. (I see nothing about this on Wikipedia today, however, so perhaps I am mistaken here, though perhaps his fans help sweep certain ideas under the carpet? Am I indeed mistaken?)

    Regardless, according to my theory it is the sensations which I now experience that motivate me to attack him (and I do suppose that jealousy is prominent). What I now need to figure out is how to become a famous theorist, as he clearly is, without the aid of “crazy ideas.”

  3. Hunt says:

    Well, I kind of think it’s time for another Apollo Project for AI anyway. I think every decade or so, a lot of time, money and effort should be thrown at it. Perhaps this time there will be a breakthrough. I think that’s kind of what people sense is in the air. Personally, I would be satisfied with a more Watson-like web search capacity (so long as the answers didn’t come back in the form of a question). I think the vision of the “semantic web” has largely not come to fruition, for whatever reason. The inescapable conclusion I draw about current web search engines is that they’re pitifully dumb. Someone once offered the advice that users should ask for exactly what they’re looking for by giving search engines an extensive natural-language sentence. This ignores the fact that search engines have scant or no understanding of natural language – no semantic analysis, only syntactic analysis. So if I provide Google with “Belt length for Honda lawn mower with serial number #1134-4444-0001,” I’m likely to get back ten million hits of gibberish. If I DO get something intelligible back, it’s only due to fortuitous word-sequence similarity. I would be happy if that were improved.

  4. Jorge says:

    The most important acquisition Google made in the past year was Geoffrey Hinton. His work on backpropagation is pretty much instrumental in the way “artificial intelligence” is being developed now.

    Here is a demo he gave a while back, and I thought it was pretty impressive. Obviously, Google agreed.
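For readers who haven’t met it, backpropagation is just the chain rule applied layer by layer: compute the output error, then push it backwards through the network to assign blame to each weight. Below is a hand-rolled toy version — a tiny 2-2-1 sigmoid network trained on XOR — which has nothing to do with Hinton’s actual demo or with how Google’s systems are built; it only illustrates the mechanism.

```python
import math
import random

random.seed(1)

# XOR: the classic task a single-layer network cannot learn.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sig(x):
    return 1 / (1 + math.exp(-x))

# Weights for a 2-input, 2-hidden-unit, 1-output network.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
B1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
B2 = 0.0

def forward(x):
    """Return hidden activations and the network output."""
    h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + B1[j]) for j in range(2)]
    y = sig(sum(w * hj for w, hj in zip(W2, h)) + B2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

def train(epochs=4000, lr=1.0):
    global B2
    for _ in range(epochs):
        for x, t in DATA:
            h, y = forward(x)
            # Output-layer error signal (derivative of squared error * sigmoid).
            d_out = (y - t) * y * (1 - y)
            # Propagate the error back to the hidden layer (chain rule).
            d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):
                W2[j] -= lr * d_out * h[j]
                for i in range(2):
                    W1[j][i] -= lr * d_hid[j] * x[i]
                B1[j] -= lr * d_hid[j]
            B2 -= lr * d_out
```

The whole trick is in `d_hid`: each hidden unit’s share of the blame is the output error weighted by its outgoing connection, times the slope of its own activation.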

  5. Daniel says:

    Peter (the man behind the site), who the hell are you? I want to follow everything else you’re doing. Maybe we can exchange some ideas.

  6. Peter says:


    I’m just someone with a blog (I should probably put a few more personal details on the About page). Email is

  7. Philosopher Eric says:

    Hunt, you’ve got me thinking about this AI business. Of course we know computers are great for arithmetic or playing a game of chess — and that a supercomputer can also play a good game of Jeopardy. But I don’t think even Watson is really doing much of what we do. I assume it was just programmed with millions of “game show zingers” — not exactly helpful for your real-world mower belt query.

    One thing about those AI guys is that with all of their technical knowledge, they remain just as ignorant about human consciousness as psychologists and philosophers still are. But if we happen to be conscious ourselves, why can’t we at least build a useful model of the conscious human mind? The idea would be to start basic enough to be accurate, and then keep making improvements to address more complex dynamics. Of course thousands and thousands of pages have been written on the subject of consciousness, and very little ultimately seems to “stick.” So it’s hopeless, right?

    I certainly don’t think so. As I’ve mentioned here before, my brother was able to discuss consciousness with me quite intelligently after reading my consciousness chapter alone. Though it is complex… no more so than necessary (as its simple diagram suggests). You should be able to download this PDF from the site under my name. If you have any improvements to suggest, I’d then love to discuss them.

  8. Callan S. says:

    Jorge, how much does that talk rely on an audience already educated as to what his jargon means? It kind of sounds like he spins a lot of concepts past very fast – I think he’s doing research. I just wonder how much Google knows he’s doing research vs ‘lots of concepts going by fast – this has to be important!’.

  9. Hunt says:

    Philosopher Eric, I’m not so sure there are many efforts to capture consciousness in silico, though doubtless there are a few. For a company like Google, the ethical issues of instantiating consciousness in a search engine might be disastrous. Opinions vary, but I for one think that “intelligence” can be captured without consciousness — perhaps even very, very advanced intelligence — without a whit of consciousness, intention or ability to perceive qualia; in other words, without all the half-understood (possibly, in some cases, even meaningless) baggage of philosophy of mind. That was always the goal of AI, even before consciousness came onto the radar screens of most philosophers. There’s nothing magical about the semantics of a statement or query, and there’s nothing magical about a reply that concisely responds to exactly what I’m looking for.

  10. Philosopher Eric says:

    Hunt, you seem correct on all counts, as I’ve noticed in general. I did not mean to imply that intelligence requires consciousness. Furthermore, I do suppose that hearing my blatant self-promotion has now become tiring. Self-promotion is indeed my purpose here, however, so as long as you keep writing interesting comments, there will be the potential for me to build comments from them of my own — interesting or not. I did not address your query this time, but perhaps in the future we will each come to similar answers for similar questions… simply given that neither of us has any use for “magic.”

  11. John Davey says:

    “It’s not very likely that the hypothetical software (and we must remember that this is the merest speculation) would have even the most minimal forms of consciousness.”

    Speculation? Why so cautious? Is it speculative to assume that there isn’t a green monkey at the centre of Jupiter who eats cheese all day long and answers to the name of Kevin? It’s possible that Kevin exists, but so unlikely that we can say with conviction that he doesn’t: we simply have no reason to assume that Kevin exists.

    Likewise there is simply no reason to assume – none whatsoever, at all, in any sense – that software ‘thinks’. So we can simply say ‘software doesn’t think’. In the almost non-existent event that this can be scientifically shown to be false (I expect a “proof” of God’s existence to be about as likely), then we may have a reason to believe software does think. But until then, no caution is required.

  12. Damocles says:

    I go by the simple rule (as every little boy with a firelighter knows):

    If you can’t torture it, it isn’t conscious.
