Archive for December, 2010

Picture: Thomas J Watson. So IBM is at it again: first chess, now quizzes? In the new year their AI system ‘Watson’ (named after the founder of the company – not the partner of a system called ‘Crick’, nor yet of a vastly cleverer system called ‘Holmes’) is to be pitted against human contestants in the TV game Jeopardy: it has already demonstrated its remarkable ability to produce correct answers frequently enough to win against human opposition. There is certainly something impressive in the sight of a computer buzzing in and enunciating a well-formed, correct answer.

However, if you launch your new technological breakthrough on a TV quiz, rather than describing it in a peer-reviewed paper or releasing it so that the world at large can kick it around a bit, I think you have to accept that people are going to suspect your discovery is more a matter of marketing than of actual science; and much of the stuff IBM has put out tends to confirm this impression. It’s long on hoopla; here and there it has that patronising air large businesses often seem to adopt for their publicity (“Imagine if a box could talk!”); and it’s rather short on details of how Watson actually works. This video seems to give a reasonable summary: there doesn’t seem to be anything very revolutionary going on, just a canny application of known techniques on a very large, massively parallel machine.
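The outline of those known techniques, as far as one can tell, is roughly: generate many candidate answers, score each against the evidence, and buzz in only when confidence clears a threshold. A toy caricature of that idea can be sketched in a few lines; the corpus, the word-overlap scoring rule, and the threshold here are all invented for illustration and bear no resemblance to IBM's actual system.

```python
# Toy caricature of a confidence-thresholded question-answering pipeline.
# Everything here (corpus, scoring rule, threshold) is invented for
# illustration; Watson's real DeepQA machinery is vastly more elaborate.

CORPUS = {
    "Thomas J. Watson": "founder of IBM the computing company",
    "Sherlock Holmes": "fictional detective created by Arthur Conan Doyle",
    "Francis Crick": "co-discoverer of the structure of DNA with James Watson",
}

def score(evidence: str, clue: str) -> float:
    """Crude evidence score: fraction of clue words found in the evidence."""
    clue_words = set(clue.lower().split())
    evidence_words = set(evidence.lower().split())
    return len(clue_words & evidence_words) / len(clue_words)

def answer(clue: str, threshold: float = 0.3):
    """Rank every candidate against the clue; 'buzz in' only if confident."""
    ranked = sorted(
        ((score(ev, clue), name) for name, ev in CORPUS.items()),
        reverse=True,
    )
    confidence, best = ranked[0]
    return best if confidence >= threshold else None  # stay silent if unsure

print(answer("This founder of IBM gave his name to a computing company"))
```

The confidence threshold is the interesting bit: a system like this never "knows" anything, it just declines to answer when the evidence overlap is weak, which is enough to look prudent on television.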

Not a breakthrough, then? But it looks so good! It’s worth remembering that a breakthrough in this area might be of very high importance. One of the things which computers have never been much good at is tasks that call for a true grasp of meaning, or for the capacity to deal with open-ended real environments. This is why the Turing test seems (in principle, anyway) like a good idea – to carry on a conversation reliably you have to be able to work out what the other person means; and in a conversation you can talk about anything in any way. If we could crack these problems, we should be a lot closer to the kind of general intelligence which at present robots only have in science fiction.

Sceptically, there are a number of reasons to think that Watson’s performance is actually less remarkable than it seems. First, there is a problem of fair competition: the game requires contestants to buzz in before they can answer. It’s no surprise that Watson can buzz in much faster than human contestants, which amounts to giving the machine the large advantage of first pick of whatever questions it likes.

Second, and more fundamental: is Jeopardy really a restricted domain after all? This is crucial because AI systems have always been able to perform relatively well in ‘toy worlds’ where the range of permutations could be kept under control. It’s certainly true that the interactions involved in the game are quite rigidly stylised, eliminating at a stroke many of the difficult problems of pragmatics which crop up in the Turing Test. In a real conversation the words thrown at you might require all sorts of free-form interpretation, and have all kinds of conative, phatic and inferential functions; in the quiz you know they’re all going to be questions which just require answers in a given form. On the other hand, so far as topics go, quiz questions do appear to be unrestricted ones which can address any aspect of the world (I note that Jeopardy questions are grouped under topics, but I’m not quite sure whether Watson will know in advance the likely categories, or the kinds of categories, it will be competing in). It may be interesting in this connection that Watson does not tap into the Internet for its information, but relies on its own large corpus of data. The Internet to some degree reflects the buzzing chaos of reality, so it’s not really surprising or improper that Watson’s creators should prefer something a little more structured, but it does raise a slight question as to whether the vast database involved has been customised for the specifics of Jeopardy-world.

I said the quiz questions were a stylised form of discourse; but we’re asked to note in this connection that Jeopardy questions are peculiarly difficult: they’re not just straight factual questions with a straight answer, but allusive, referential, clever ones that require some intelligence to see through. Isn’t it all the more surprising that Watson should be able to deal with them? Well, no, I don’t think so:  it’s no more impressive than a blind man offering to fight you in the dark. Watson has no idea whether the questions are ‘straight’ or not; so long as enough clues are in there somewhere, it doesn’t matter how contorted or even nonsensical they might be; sometimes meanings can be distracting as well as helpful, but Watson has the advantage of not being bothered by that.

Another reason to withhold some of our admiration is that Watson is, in fact, far from infallible. It would be interesting to see more of Watson’s failures. The wrong answers mentioned by IBM tend to be good misses: answers that are incorrect, but make some sort of sense. We’re more used to AIs that fail disastrously, suddenly producing responses that are bizarre or unintelligible.  This will be important for IBM if they want to sell Watson technology, since buyers are much less likely to want a system that works well most of the time but abysmally every now and then.

Does all this matter? If it really is mainly a marketing gimmick, why should we pay attention? IBM make absolutely no claims that Watson is doing human-style thought or has anything approaching consciousness, but they do speak rather loosely of it dealing with meanings. There is a possibility that a famous victory by Watson would lead to AI claiming another tranche of vocabulary as part of its legitimate territory.  Look, people might say; there’s no point in saying that Watson and similar machines can’t deal with meaning and intentionality, any more than saying planes can’t fly because they don’t do it the way birds do. If machines can answer questions as well as human beings, it’s pointless to claim they can’t understand the questions: that’s what understanding is.  OK, they might say, you can still have your special ineffable meat-world kind of understanding, but you’re going to have to redefine that as a narrower and frankly less important business.

Picture: correspondent. Have you seen Kar Lee’s book Where are the zombies? I don’t quite agree with his conclusions, but I really like the way he sets out the problem.

Array tomography has revealed that the brain is even more staggeringly complex than we thought. I must confess that I understand the huge numbers involved about as well as a dog understands algebra, but this would seem to be bad news for brain simulation projects.

There’s an interesting video of Alva Noë on Edge.

You can now hear John Searle’s philosophy of mind lectures from Berkeley on iTunes (via). I’m afraid they haven’t been edited, so you can also hear Searle dealing with a lot of routine admin and bitching about the size of the room.

Picture: Damasio and the shark. Alison Gopnik, author of an excellent book about baby consciousness, has written an interesting review of Self Comes To Mind, Antonio Damasio’s latest book. The review itself provides a useful brief sketch of the state of play on consciousness, but it dismisses Damasio’s book as a set of minor variations on what he’s already said: neither more up-to-date nor clearer than earlier books. He has, she suggests, jumped the shark.

In all fairness it may be worth repeating true conclusions: as we know, David Hume’s Treatise of Human Nature ‘fell dead-born from the press’, attracting only a handful of buyers; it wasn’t until he had repeated himself in the Enquiry Concerning Human Understanding that his views even began to gain traction.

Are Damasio’s views worth another outing? They are distinctive in giving a fundamental role to the emotions. To me it seems most likely that our emotional systems have been overtaken by consciousness. Emotional systems – anger, love, fear – were what controlled our behaviour before we got consciousness, leading us into fight, flight, or the other f-word as appropriate.  They didn’t do a bad job:  animals still rely on them, and so, to a lesser extent, do we: steering our daily lives without emotional responses would be an intensive and hazardous business as we had to work out from first principles who to trust, what to eat, and so on. If there could truly be an emotionless race like the Vulcans of Star Trek it’s hard to see what would ever make them get out of bed in the morning. That point of view suggests that emotions are more like a substitute or a junior partner for consciousness than the stuff of which it’s made.

Emotions are certainly of interest, though, and perhaps they are somewhat neglected as qualia – if they are qualia. Our emotional reactions are certainly accompanied – or are they constituted? – by vivid internal experience. Typically when we discuss qualia we talk about perception: the sight of redness, the sound of music, the smell of grass – and that may tempt us into considering them representational. Feelings of happiness or anger are not so directly about anything, but they seem equally valid phenomenal experiences.

Gopnik throws in the suggestion that self-aware, self-conscious thought is just the icing on the cake and that the basis of consciousness is that state where we take in information without consciously reviewing it (I imagine this fits with her view that adults develop a searchlight of focused attention in contrast to the widely scattered illumination of infant awareness); this is a view she attributes to David Hume (him again) and to Buddhists. In fact she thinks Hume might have got some of his views from Buddhism. This is not implausible historically: quite apart from the detailed case she makes, we know that popular medieval stories were versions of Buddhist texts, even leading to Gautama’s informal recognition as a Christian saint. But Hume of all people relies on no authority and describes in full detail the genesis of his own ideas in his own brain: I think it’s more plausible that radical scepticism sometimes produces similar results whether entertained by an Indian prince or a Scottish philosopher.

Anyway, I don’t think Gopnik’s sharp review persuades me that Self Comes To Mind isn’t worth reading: but it certainly convinces me that Gopnik’s own books are worth a look.