The future of intelligence?

1 May 2005

Jeff Hawkins

It's a sign of the times, perhaps, that Jeff Hawkins is not a computationalist. Creator of the PalmPilot and the Treo smart phone, he approaches the brain from a technical angle. The philosophy doesn't concern him unduly. He describes a conversation about qualia and reports his strategy: "Well, I don't see anything special going on, so if you do, I must be a zombie. But hey - that's OK." You would expect someone with these leanings to be a diehard AI merchant, but not a bit of it. While he believes it's feasible to build artificial brains that work the way human ones do, he thinks they would be profoundly different from the computers we know and love, and expects that entirely new kinds of silicon chip will be needed.

Actually, it's a bit of a misrepresentation to suggest his background is one of computers: he has also spent a good deal of time on neurology and in fact founded the Redwood Neuroscience Institute for the purpose. My impression is that the more you know about neurology, the less appealing the computer analogy becomes; but it seems that Jeff has always been of the view that the only way to reproduce the human brain was by close study of the methods the brain itself uses.

All this, and his own theories of how the brain works, are set out in 'On Intelligence', a book he wrote with Sandra Blakeslee. (Sandra Blakeslee is a journalist who has specialised in this area: she previously helped Ramachandran write his notable book 'Phantoms in the Brain'. I should think her own thoughts would be well worth reading if she ever publishes them.) The book has a fully-developed website of its own.

Hawkins, as I say, has little time for good old-fashioned AI: in fact, he endorses Searle's persuasive story about the Chinese Room. Much more to his taste are the approaches taken by neural networkers and connectionists, though they too, in his eyes, don't go far enough in modelling the actual neurology of the brain: he laments the fact that much of the connectionist effort has been expended on relatively simple three-layer networks. Both the AI and connectionist projects suffer the fatal flaw of focussing on behaviour: they see the brain in input-output terms, almost in the way the behaviourists did. While behaviour is obviously important, the real key to the mystery of intelligence is what goes on inside, as is obvious when you consider how much of our most important mental activity goes on while we are at rest, and without any direct connection to our behaviour. Imagining, remembering, and even planning can all go on quite effectively while we are sitting quietly in an armchair. Remembering, as it happens, turns out to be especially important in Hawkins' theory.
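
To see concretely what those "relatively simple three-layer networks" amount to, here is a minimal sketch of my own (not an example from the book; the XOR task and the layer sizes are arbitrary choices): an input layer, a hidden layer and an output layer, trained by backpropagation as a pure input-to-output mapping, with nothing going on between presentations and no notion of time or sequence at all.

```python
# A plain three-layer feedforward network (input, hidden, output) trained by
# backpropagation as a pure input-to-output mapping - the kind of "relatively
# simple" model in question. There is no internal state between presentations
# and no memory of what came before.
# (Illustrative sketch only; the XOR task and layer sizes are my own choices.)
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic toy problem for three-layer networks
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                     # plain batch backpropagation
    h = sigmoid(X @ W1 + b1)               # hidden-layer activity
    out = sigmoid(h @ W2 + b2)             # output-layer activity
    d_out = (y - out) * out * (1 - out)    # output error signal
    d_h = (d_out @ W2.T) * h * (1 - h)     # error passed back to the hidden layer
    W2 += h.T @ d_out
    b2 += d_out.sum(axis=0)
    W1 += X.T @ d_h
    b1 += d_h.sum(axis=0)

# After training, the outputs should be close to [0, 1, 1, 0]
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```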

Hawkins is inspired by Vernon Mountcastle's suggestion that all regions of the neocortex are essentially doing the same thing. Although there are some differences in the detail of the physical structure, and some regions have been shown by long-established research to be associated with particular senses or functions, the proposal is that at a fundamental level the processes are all the same. This is not such a strange idea as it sounds: it is supported by the remarkable flexibility of the cortex. Although particular functions typically end up in particular places, a variety of evidence shows that in cases of injury or abnormal development, different regions can swap functions around quite readily.

If all the different parts of the neocortex are doing the same thing, then what is that thing? Hawkins proposes that the answer is prediction. To summarise with extreme brevity, the neocortex takes the incoherent swirl of data coming in from our senses and constructs invariant representations of objects in the world. These stable, invariant patterns allow us to recognise the same object from varying views or from partial information, and because they include patterns with a temporal element, they allow us by analogy to predict what is likely to happen next. In fact, only the things which fail to match up with our ongoing prediction of what is about to happen get referred upwards to higher, fully conscious regions for handling. Of course, memory is just as much a fundamental part of this process as prediction, and in fact what Hawkins is getting at is a wider power of extrapolation over time or space, rather than prediction understood narrowly.
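
As a crude illustration of that last point - and this is my own toy sketch, not the cortical algorithm the book describes - imagine a region which simply memorises which element has followed which in the sequences it sees, predicts accordingly, and only passes something upward when its prediction fails:

```python
# A cartoon of the memory-prediction idea (my own toy, not Hawkins' model):
# a "region" memorises which element has followed which, predicts the next
# one, and only refers a surprise upward when the prediction fails.
from collections import defaultdict

class ToyRegion:
    def __init__(self):
        self.followers = defaultdict(set)   # learnt: element -> what has followed it
        self.previous = None

    def observe(self, element):
        surprise = None
        if self.previous is not None:
            expected = self.followers[self.previous]
            if expected and element not in expected:
                # Prediction failed: this is the part that gets "referred upwards"
                surprise = (self.previous, sorted(expected), element)
            self.followers[self.previous].add(element)   # keep learning regardless
        self.previous = element
        return surprise

region = ToyRegion()
familiar = "the cat sat on the mat".split()
variant  = "the cat sat on the hat".split()

for sentence in (familiar, familiar, variant):
    region.previous = None                  # treat each sentence as a fresh sequence
    for word in sentence:
        unexpected = region.observe(word)
        if unexpected:
            print("after %r expected one of %s, got %r" % unexpected)
```

Once the familiar sentence has been seen, only the unexpected final word gets flagged; everything else is silently absorbed by the ongoing prediction.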

This all seems pretty sensible, though not uncontroversial; moreover it is accompanied by a fairly full attempt at explaining in neurological terms how the process is actually implemented. Hawkins explains the layered structure of the cortex and the columns which cross it vertically. The key problem is how on earth the brain manages to construct invariant representations. Seeing the same person never involves exactly the same set of sensory inputs - we see different parts of them from different angles in different lights, yet the brain almost effortlessly recognises the same person.

One view is that the invariant representation is achieved only at the end of a complex process: Hawkins suggests that it is invariant representations all the way. Each layer of cortex deals with different representations and feeds its results up to the next, so that the invariant representation of lines and shapes builds up through hierarchical development to the representation of such complex objects as people at the highest level (actually Hawkins suggests that the memory-forming hippocampus should be seen as in a sense the highest level of cortex). This too seems sensible, but I wonder whether it is the whole story. There is an implication that every possible experience is already implicitly wired into the brain. There may be senses in which this is unobjectionably true: but I find it difficult to assess whether the unimaginably complex wiring of the cortex is really up to the indefinitely huge task involved. I certainly don't think it's clear that it isn't, but of all the steps in the account, this seems to me the one most in need of reinforcement.
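
To make the hierarchical idea a little more concrete - and this is a deliberately crude caricature of mine, not the circuitry Hawkins actually proposes - imagine each level swapping its variable lower-level input for a stable name and passing only the name upward, so that quite different raw inputs converge on the same high-level representation:

```python
# A deliberately crude two-level hierarchy (my illustration, not the cortical
# circuitry described in the book). Each level replaces variable lower-level
# input with an invariant name and passes only that name upward, so different
# raw views converge on the same object at the top.

# Level 1: several different raw inputs all count as the same low-level feature
FEATURES = {
    "round outline": {"circle drawn small", "circle drawn large", "circle drawn shakily"},
    "two dots":      {"dots close together", "dots far apart"},
    "curved line":   {"curve drawn boldly", "curve drawn faintly"},
}

# Level 2: a combination of stable features counts as the same object
OBJECTS = {
    "face": {"round outline", "two dots", "curved line"},
}

def recognise(raw_view):
    level1 = {name for name, variants in FEATURES.items()
              if variants & raw_view}          # which features are present in this view
    level2 = [name for name, parts in OBJECTS.items()
              if parts <= level1]              # which objects those features add up to
    return level1, level2

views = [
    {"circle drawn small", "dots far apart", "curve drawn boldly"},
    {"circle drawn shakily", "dots close together", "curve drawn faintly"},
]
for view in views:
    features, objects = recognise(view)
    print(sorted(features), "->", objects)     # different raw views, same "face"
```

Hawkins' real proposal adds temporal sequences and feedback at every level, which the sketch leaves out entirely; but it gives a flavour of how invariance could accumulate level by level.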

Hawkins goes on to consider the prospects for intelligent machines (I suppose we can't refer to them as artificial intelligence); in fact, he offers a rallying cry to young engineers: this is the moment to start work if you want to get in on the next big wave! He thinks that the limitations on connectivity suffered by silicon chips, compared with neurons, might be offset by the comparatively high speed of wires: the silicon neurons can share connections regulated like a phone network and still achieve similar results. It sounds reasonable, though there is an evident danger of underestimating the subtlety and complexity of real neurons - a danger Hawkins should be particularly sensitive to. In discussing the possible applications of such artificial brains, he does seem at one point to slip back towards a computer-oriented view. One of his proposed applications is the control of cars (the attraction of this is doubtful, I should have thought, because his artificial minds will learn like humans and surely therefore be as fallible as biological chauffeurs?). He suggests that once a suitable artificial brain has learnt the art of driving, the factory will be able to run off any number of perfect copies. I'm not so sure - are these new chips going to be programmable? It is a leading property of computers that they can be put into any desired state directly, but that is not true of all machines. Some things have to be assembled in a certain intricate order: the desired state is only reachable through an appropriate chain of prior states. I have a feeling the Hawkins chips would each have to learn driving individually.
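
The distinction I have in mind is the one below (a toy sketch with invented names, not anything from the book): on a conventional computer, whatever has been learnt is in the end just data, so a trained instance can be cloned directly; the open question is whether a chip whose learnt state is built up through its own physical history would share that property.

```python
# Toy sketch of the property at issue (invented example, not from the book).
# Here the learnt state is ordinary data, so a trained instance can simply be
# copied: the clone "knows" how to react without ever having had a lesson.
# Whether Hawkins' proposed chips would allow this is exactly the open question.
import copy

class ToyDriver:
    def __init__(self):
        self.habits = {}                        # learnt: situation -> response

    def lesson(self, situation, response):
        self.habits[situation] = response       # learning just updates a data structure

    def react(self, situation):
        return self.habits.get(situation, "brake and hope")

teacher = ToyDriver()
teacher.lesson("red light", "stop")
teacher.lesson("green light", "go")

factory_copy = copy.deepcopy(teacher)           # state installed directly, no re-learning
print(factory_copy.react("red light"))          # prints "stop"
```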

All in all, though, Hawkins' views seem unusually practical and promising. He wraps up by offering eleven detailed and falsifiable predictions about the neurology of the human brain: these will be interesting to watch. 
