Picture: Lucy.

Bitbucket Steve Grand has written another book, “Growing up with Lucy”, about his long-term robot-building project: partly, it seems, because he needs the money, but also because he has some interesting progress to report.

Grand, as you probably know, became famous as the brains behind the computer game ‘Creatures’, which used a far more sophisticated version of ‘artificial life’ than any of its predecessors or competitors (or successors, I suspect). In some ways, however, Grand represents an older tradition of computer games design. There was an era when computers were readily available but processing power, storage, and graphics were still severely limited. In those days it was relatively normal for an individual with a good idea to run up a world-beating game alone in his bedroom or garage. When better graphics came along and computer games started emulating feature films, all that changed. But it is in the same sort of spirit that Grand has set out to build a better robot, believing that one good man with a sound idea can achieve more than a vast institution governed by committees. He derides those who claim that the geometric increase in computer power predicted by ‘Moore’s Law’ means that true machine consciousness is only a matter of time.

What exactly is he trying to achieve? Not a human level of consciousness, though he appears to have set his sights on an orang-utan, surely among the most thoughtful and reflective of animals. He believes that building a physical robot, and one which works in the same way as a real animal (its voice generated by real mechanical mouth/throat action rather than a chip, for example), is the most enlightening way to carry out this exercise, and he has some interesting ideas about how neural systems might be set up in such a way that they structure and organise themselves merely through experience.

Blandula I don’t really know why someone with Grand’s programming background is insisting on building a physical robot rather than simulating the whole thing. So long as it was a sufficiently scrupulous simulation, I don’t see any issues of principle. Having to actually build the mechanical and electronic apparatus means he might easily get held up by practical engineering problems which are really nothing to do with the key issues.

There seem to me to be a few contradictions (or tensions, at least) in what he says. He says it’s no good building bits of an organism in the hope of later patching them together – you have to have the whole thing for it to work properly. Yet he’s missing out the higher levels of consciousness, which are surely deeply implicated in vision, for example (Grand himself insists that vision and other mental processes are a mixture of bottom-up and top-down pathways, which he refers to as ‘Yin’ and ‘Yang’). He scorns AI gurus who think they can get to consciousness by gradually improving what they’ve already got – he says it’s like trying to get to the Moon by gradually getting better at jumping – but at the same time he seems to think that a system of servos for a model glider gives him the beginning of a path towards genuinely mental processes. Worst of all, though, he denies trying to produce real human-level consciousness, but puts a grotesque rubber mask, red wig, and dummy second eye on his robot, and constantly refers to it as his daughter.

Bitbucket I knew you were going to take this line! Calling Lucy his daughter is really just a joke, for heaven’s sake, but it does have the value of being provocative, helping to attract attention and (perhaps) funding. And I think it’s reasonable to claim that it helps him if he thinks of Lucy in personal terms – it sets the right context.

Blandula That line of reasoning reminds me vividly of a book I once read by an old-school ventriloquist. He insisted that you should always refer to the dummy as your ‘partner’, and treat it as a person, if you wanted your performance on stage to have real conviction. The difference is that ventriloquists admit they are trying to create an illusion! I’m particularly unhappy about that second eye. It actually seems to be a ping-pong ball or something, and looks nothing like the functioning camera eye. But on the dust-jacket of the book, it has been photoshopped to look exactly like the camera eye, reducing the repellent Frankenstein quality which I’m afraid Lucy certainly has. Retouching your pictures like this is perilously close to the line that separates window-dressing from actual deception.

The point about attracting attention is all very well, but as well as getting publicity the mask raises suspicions which actually harm Grand’s credibility. Without it, people may think, it would be all too apparent that what Grand describes as one of the most advanced research robots in existence could also be described as a camera with non-functional arms. Of course, it may be that both descriptions are valid…

Bitbucket I’m sorry, I’m just not going to waste any more time arguing about presentation when we could be talking about Grand’s actual ideas.

He rejects the idea that there is a straightforward ‘algorithm of thought’ and also most current connectionist models, which he says bear no resemblance to real neural structures. He does say that he thinks you need a robot complete enough to tackle something other than ‘toy’ problems, but I don’t accept that that conflicts with his approach to Lucy, who – alright, which – is just a test-bed for ideas anyway.

His working hypothesis is that there is a basic standard pattern of neuronal organisation which is used over and over again for slightly different functions in different parts of the brain. So instead of bolting together a set of incompatible specialised modules, the task is to find a single ‘component’ that can be endlessly re-customised. Basically, this comes down to neural maps which he thinks can carry out co-ordinate transforms, thereby controlling behaviour in the same sort of way as – yes – servos on an automated model glider. The point here is that the servos effectively try to make a particular quantity – flap angle, rate of roll, angle of bank, and so on up the hierarchy – match the one specified by a higher-level controller. His argument is not the naive one you suggest, but he does point out that if you had an automatic glider which had maximum altitude as its target value, there would be an analogy with a ‘desire’ or ‘intention’ on the part of the glider to stay as high as possible.
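The cascaded-servo idea can be made concrete with a toy sketch. Nothing here is from Grand’s actual code: the controller structure, the three levels (bank angle, roll rate, flap angle), and all the gains are invented for illustration, using simple proportional control.

```python
# A minimal sketch of the servo hierarchy described above: each level is a
# proportional controller whose output becomes the setpoint ('target value')
# of the level below. All names and gains are illustrative assumptions.

def p_controller(gain):
    """Return a proportional controller: output = gain * (target - actual)."""
    def step(target, actual):
        return gain * (target - actual)
    return step

# Three nested levels, top to bottom: bank angle -> roll rate -> flap angle.
bank_ctrl = p_controller(0.5)   # desired bank vs actual bank -> roll-rate demand
roll_ctrl = p_controller(0.8)   # roll-rate demand vs actual rate -> flap demand
flap_ctrl = p_controller(1.0)   # flap demand vs actual flap -> actuator command

def glider_step(desired_bank, bank, roll_rate, flap):
    """One pass down the hierarchy: the 'desire' at the top cascades into
    progressively more concrete demands, ending in an actuator command."""
    roll_demand = bank_ctrl(desired_bank, bank)
    flap_demand = roll_ctrl(roll_demand, roll_rate)
    return flap_ctrl(flap_demand, flap)

# When the glider is already in the desired state, no correction is issued.
print(glider_step(10.0, 10.0, 0.0, 0.0))  # -> 0.0
```

The point of the analogy survives even in this crude form: set `desired_bank` permanently to some maximal value and the whole cascade behaves as if the glider ‘wanted’ to hold that attitude.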

Blandula Yes – on the same kind of level as the thermostat’s ‘desire’ to keep the room at the right temperature, or for that matter, the ‘intention’ of a broken glider to plummet crazily to earth.

Bitbucket This is about as close as Grand gets to the big issues of consciousness. Somewhere near or at the top of conscious organisation he speculates that there is a map which sets out a kind of ideal target state for the organism. This could be seen as representing its desires or drives; another of his angles is that a key mental task is predicting events; bringing the prediction map and the desire map into line could be what it’s all about. He remarks that a saccade, an automatic jumping of the eyes towards a sudden movement or other point of interest in the visual field, is always interpreted as a conscious decision, with the strong implication that other forms of consciousness only ‘seem’ to be in control. Taken together with scepticism about free will, I think this gives us a fair idea of his philosophical position, even without further discussion.

That’s not to say there isn’t a lot of interesting and relevant stuff here, though. Grand describes the neuronal structure of the brain and points out how half or more of the signals in sensory areas are actually going in a top-down direction; perception, he suggests, is anything but passive. He reckons there is a kind of tripod structure of paired up and down paths. He has some rather speculative ideas about how neurons might spontaneously organise themselves into ‘pinwheel’ structures that help to identify edge orientation, and he describes a method which the brain just might be able to use to identify shapes regardless of size and orientation. All this is pretty speculative, but it has a new and promising feel.
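The ‘bring the prediction map into line with the desire map’ notion can be rendered as a toy computation. Everything below is an invented illustration, not Grand’s scheme: maps are short lists of numbers, the forward model is a trivial stand-in, and the action set is arbitrary.

```python
# A toy rendering of matching a prediction map to a desire map: the agent
# picks whichever action leads to a predicted state closest to the desired
# one. The maps, forward model, and actions are all illustrative assumptions.

def distance(map_a, map_b):
    """Squared-difference mismatch between two maps of equal length."""
    return sum((a - b) ** 2 for a, b in zip(map_a, map_b))

def predict(state, action):
    """Stand-in forward model: an action nudges every map element equally."""
    return [s + action for s in state]

def choose_action(state, desire, actions):
    """Pick the action whose predicted state map lies closest to the desire map."""
    return min(actions, key=lambda a: distance(predict(state, a), desire))

state  = [0.0, 0.0, 0.0]
desire = [1.0, 1.0, 1.0]
print(choose_action(state, desire, [-1.0, 0.0, 1.0]))  # -> 1.0
```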

Blandula I don’t know about that. It seems to me some of this stuff is unlikely (there are a few rather serious problems about mental representations of people’s desires which don’t arise in the case of automatic glider-steering systems) and the rest is not very original. The idea that neuronal maps could do co-ordinate transforms, and thereby make eyes saccade towards a point of interest, and arms reach out towards it, was covered by Paul Churchland in a fairly well-known paper in Mind in 1986 – ‘Some Reductive Strategies in Cognitive Neurobiology’, where he also suggested the same kind of maps might have other interesting uses. But saccading eyes are a favourite trick of AI robots – Rodney Brooks’ Cog, for example. I’m sorry to come back to this point again, but it’s hard not to feel that eyes that follow you round the room are popular because they make the robot ‘seem alive’ in the same way as Lucy’s mask. It looks as if the AI people are so tired of getting nowhere that they are tempted towards shallow but crowd-pleasing tricks just to get some popular approval. Look at Kismet, another robot from Brooks’ laboratory, mainly the work of Cynthia Breazeal: it has big eyes, lips, and other facial features which it uses to interact with people, gurgling at them, looking them in the eye, and so on. It’s supposed to have internal states analogous to emotions, and it apparently requires fifteen different computers, but frankly it seems to me to bear an embarrassingly clear resemblance to a Furby.
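The co-ordinate-transform idea itself is easy to sketch: a point of interest registered in one topographic map (retinal offsets from the fovea) is converted by a fixed transform into coordinates in another map (eye-rotation commands), so the eyes saccade towards the target. The 2x2 matrix below is a made-up stand-in for whatever transform real neural wiring implements.

```python
# A minimal sketch of a map-to-map co-ordinate transform of the kind
# Churchland discussed: sensory coordinates in, motor coordinates out.
# The matrix values are illustrative, not taken from any real model.

def transform(matrix, point):
    """Multiply a 2x2 matrix by a 2-vector: one sensory->motor mapping."""
    (a, b), (c, d) = matrix
    x, y = point
    return (a * x + b * y, c * x + d * y)

# Assumed transform: scale retinal offsets down into rotation angles.
retina_to_eye = ((0.5, 0.0),
                 (0.0, 0.5))

target_on_retina = (4.0, -2.0)   # offset of the target from the fovea
print(transform(retina_to_eye, target_on_retina))  # -> (2.0, -1.0)
```

The same machinery serves for reaching: swap in a different matrix and the output map is read as arm-joint coordinates instead of eye rotations.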

Bitbucket Scoff all you like: I think an edge of eccentricity is part of Steve Grand’s appeal. You never know quite what he might come up with.

Blandula I agree that the ‘mad inventor’ personality has a certain charm, and perhaps a certain usefulness. It has its downside too, though. I see that Grand has funding as a NESTA ‘Dreamtime’ Fellow. It’s nice to be called a dreamer in some ways, but it’s an ambivalent kind of compliment…
