Lost in the Woods

David Gelernter says that the project of strong artificial intelligence – creating a real, human-style consciousness – is lost in the woods, and that it is highly unlikely, only just short of impossible, that a real mind will ever be put together out of software. Although he believes strongly in the value of less ambitious AI projects, Gelernter has some form as a sceptic, having debated the subject last year with Ray Kurzweil, whose faith in the future of technology is of course legendary.

Though he mentions Jerry Fodor with approval, Gelernter appears in the main to be a loyal follower of John Searle so far as philosophy is concerned, quoting the famous Chinese Room thought-experiment and repeating Searle’s line that a computer simulation of rain doesn’t make you wet. This is very well-trodden ground, where Searle’s allies and enemies long ago reached a kind of impasse, and new insights are unlikely to be found: moreover, in conceding that a software mind is only almost impossible, Gelernter actually pulls the punch in a way that Searle would certainly never do.

Gelernter suggests that AI has looked towards digital computation for two main reasons: first, digital computers seemed in some ways like brains, and second, computation is the leading technology of our day, naturally seen as the best candidate for any theoretically daunting challenge. I think this understates the case a bit. Historically, as far back as Babbage and Turing, the creators of digital computers didn’t just design their machines and then notice a resemblance to human brains; they set out to make machines that did what brains do. It’s not unreasonable that machines designed in imitation of brains should be what we turn to when we want an artificial mind. Moreover, it’s not just that digital computers are the cutting edge of technology at the moment; they have an open-ended flexibility – they are universal machines, after all – which is far more brain-like than any other artefact we can imagine. So whether or not it is ultimately right for AI practitioners to look towards digital computation, there are pretty good reasons for them to do so.

Gelernter’s main proposition, in any case, concerns what he calls the cognitive continuum. Consciousness is not just on or off, he observes: it may be closely focussed on a specific object, or it may be in a freer, looser state, in which the mind is more prone to wander. These looser states are not just the brain working inefficiently: they exhibit the mind’s ability to associate freely, think emotively, and come up with analogies. These kinds of thinking, especially analogical thinking, are crucial to the nature and success of human consciousness, and they deserve more attention within AI. Gelernter himself describes these as only pre-theoretical ideas, but I think he’s certainly touched on an interesting area. There’s no doubt he’s right about consciousness having fuzzy edges, as it were. It would be great to have the range of possible conscious states properly clarified, but I think that would involve more than a single spectrum – there are surely several different variables at work. To give just a few examples: besides being focussed or diffuse, our thoughts may be explicit or implicit; they may operate in several different representational modes (thinking in pictures, in words, or neither, for example); they may be accompanied by second-order thoughts (awareness of our awareness) or not (though some would dispute that); and they may be under our deliberate direction, roaming free, or directed by events in the world. They may even be operating in two or more different ways at once, as when we mentally plan a meeting while driving along a busy road.

Gelernter’s idea seems to be that when we are concentrating on something, our ideas are operating under close control and they do as they’re told: when our minds are wandering, the ideas can collide with each other more or less at random and produce fruitful results. I think he’s right to stress the importance of analogical thought, but I’m not altogether sure that it is uniquely enabled by a loose focus. Sometimes when we’re thinking hard about a specific problem we may cast about for ideas quite deliberately, or look for telling analogies. The cases where a good idea comes into our mind when we’re thinking of something else, or of nothing, strike us as remarkable, and that may be why we remember and emphasise them more than the cases where a good analogy was put together by careful and conscious hard work.

But even if we suppose that a relaxed state of mind is conducive to analogising, that is certainly only part of the story, and perhaps the less interesting part. Good analogies don’t come from the random combination of ideas – that would be a hopelessly inefficient way of generating useful thoughts – they embody threads of meaning. I suspect that what is going on has something to do with ideas lurking in the mind at a subliminal level – perhaps the relevant neurons are firing too slowly for the thought to be conscious, but fast enough to predispose the mind towards a particular thought. When a related idea comes along, it is enough to tip a fully-formed new thought into consciousness. I’m getting a bit pre-theoretical here myself, but at least I think it’s clear that the old problem of relevance has something to do with this. In fact I see an analogy (ha) with iconic representation.
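Before coming to that, here is a deliberately crude sketch in Python of the kind of ‘tipping’ I have in mind: ideas sitting at a sub-threshold level of activation, with an incoming related idea supplying just enough of a push to bring one of them into consciousness. The concepts, numbers and threshold are all invented for illustration; nothing here is offered as a model of real neurons.

```python
# Toy illustration (not a claim about real neurons): background ideas carry a
# sub-threshold level of activation; an incoming, related idea adds just
# enough to tip one of them over the threshold into "consciousness".
# All names, weights and the threshold are invented for illustration.

CONSCIOUS_THRESHOLD = 1.0

# Subliminal activation of ideas "lurking in the mind".
background = {
    "bridge design": 0.6,
    "spider web": 0.8,   # strongly primed, but not yet conscious
    "tax return": 0.1,
}

# How strongly an incoming idea relates to each background idea.
relatedness = {
    "bridge design": {"suspension cables": 0.3},
    "spider web": {"suspension cables": 0.4},
    "tax return": {"suspension cables": 0.0},
}

def present(incoming: str) -> list[str]:
    """Return the background ideas tipped into consciousness by `incoming`."""
    surfaced = []
    for idea, level in background.items():
        boost = relatedness[idea].get(incoming, 0.0)
        if level + boost >= CONSCIOUS_THRESHOLD:
            surfaced.append(idea)
    return surfaced

print(present("suspension cables"))   # ['spider web'] – the analogy "pops up"
```

The only point of the sketch is that the analogy surfaces not from a random collision of ideas but because one idea was already quietly primed and the newcomer was relevantly related to it.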

One of the many proposed routes for constructing a theory of meaningfulness is to start with simple iconic representation and build up from there. If you want someone to think of a man, show them a man, goes the theory: if you can’t arrange for a man to be present, do a drawing. The marks on the paper share certain properties of shape and appearance with a real man, and so they call the same thought to mind. Stylise your man a bit and you have a symbol, and before you know where you are you’ll be covering your pyramid with votive inscriptions. But it’s not as simple as that. If I paint a stick man on the wall, it might mean ‘man’, but it might mean ‘I was here’, or ‘painting’ (or, of course, ‘gentlemen’s toilet’). For iconic representation to work, we have to pick out which of the many properties of the icon are relevant: humans are very good at this, but digital computers can’t really do it at all. In the same way, a good analogy links two things in different realms whose relevant properties are the same while their other properties are entirely different.
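To put the relevance point in the same toy spirit (the property lists below are made up, and no claim is intended about how brains or computers actually manage this): the stick man shares some properties with every candidate interpretation, and which one it ‘means’ depends entirely on which of those properties we treat as relevant – nothing in the icon itself settles that.

```python
# Toy illustration of the relevance problem: the stick figure shares properties
# with many interpretations; only a context-dependent choice of *relevant*
# properties picks one out. All property lists are invented for illustration.

stick_figure = {"human-shaped", "painted", "on a wall", "two-legged", "static"}

interpretations = {
    "man":                {"human-shaped", "two-legged"},
    "I was here":         {"painted", "on a wall"},
    "painting":           {"painted", "static"},
    "gentlemen's toilet": {"human-shaped", "on a wall"},
}

def reading(relevant: set[str]) -> list[str]:
    """Interpretations whose defining properties all fall within the
    icon's properties that we currently treat as relevant."""
    return [name for name, props in interpretations.items()
            if props <= (stick_figure & relevant)]

# Every candidate matches *some* subset of the icon's properties; the reading
# changes with nothing but our choice of which properties count as relevant.
print(reading({"human-shaped", "two-legged"}))   # ['man']
print(reading({"painted", "on a wall"}))         # ['I was here']
```

The hard part, of course, is the bit the sketch simply assumes: deciding which properties are the relevant ones in the first place.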

You would perhaps expect Gelernter to make this analogical thinking another basis for his scepticism about strong AI, but in fact he believes the process could and should be simulated by computer, giving us a new generation of vastly more useful artificial intelligence working on principles much more like those of the human brain. It would look a lot more like human consciousness but, he says, it would not be real – just a simulation. I think he is bound to run into new varieties of the old problem of relevance which, in various shapes, has dogged the progress of AI for many years, but it’s interesting to contemplate the strange situation that would arise if he succeeded. Hypothetically, Gelernter’s machine would be talking and behaving like a conscious human: many, I dare say, would be happy to accept that it had consciousness of some sort – but not Gelernter.