Picture: iCub. More about babies – of a sort. You may have seen reports that Plymouth University, with support from many other institutions, has won the opportunity to teach a baby robot to speak. The robot in question is the iCub, and the project is part of ITALK, funded by the EU under its Seventh Framework Programme, to the tune of £4.7 million (you can’t help wondering whether it wouldn’t have been value for money to have slipped Steve Grand half a million while they were at it…).

The gist of it seems to be that next year the people at Plymouth will get the iCub to engage in various dull activities like block-stacking (a perennial with both babies and AI) and try to introduce speech communication about the tasks. It is meant to be learning in a way far closer to the typical human experience than anything attempted before. Unfortunately, I haven’t been able to find any clear statement of how they expect the language skills to work, though there is quite a lot of technical detail available about the iCub. This is evidently a pretty splendid piece of kit, although the current model has a mask for a face, which means none of the potent interactive procedures which depend on facial expression, as explored by Cynthia Breazeal and others, will be available. This is a shame, since real babies do use face recognition and smiles to get more feedback out of adults.

In one respect the project has an impeccable background, since Alan Turing, in the famous paper which arguably gave rise to modern Artificial Intelligence, speculated that a thinking robot might have to begin as a baby and be educated. But it seems a tremendously ambitious undertaking. If we are to believe Chomsky, the human language acquisition device is built in – it has to be, since human babies get such poor input, with few corrections and limited samples, yet learn at a fantastic speed. They just don’t get enough information about the language around them to be able to reverse-engineer its rules; so they must in fact simply be customising the settings of their language machine and filling up its vocabulary stores. The structures of real-world languages arguably support this point of view, since the variations in grammar seem to fall within certain limited options, rather than exploiting the full range of logical possibilities. If this is all true, then a robot which doesn’t have a built-in language facility is not going to get very far with talking just by playing with some toys.

Of course Chomsky is not, as it were, the only game in town: a more recent school of thought says that by treating language as a formal code, and assuming that babies have to learn the rules before they can work out what people mean, Chomsky puts the cart before the horse; actually, it’s because babies can see what people mean that they can crack the code of grammar so efficiently.

That’s a more congenial point of view for the current project, I imagine, but it raises another question. On this view, babies are not using an innate language module, but they are using an innate ability to understand, to get people’s drift. I don’t think the Plymouth team have worked out a way of building understanding in beforehand (that would be a feat well worth trumpeting in its own right), so is it their expectation that the iCub will acquire understanding through training? Or are their aims somewhat lower?

It seems a key question to me: if they want the robot to understand what it’s saying, they’re aiming high and it would be good to know what the basis for their optimism is (and how they’re going to demonstrate the achievement). If not, if they’re merely aiming for a basic level of performance without worrying about understanding (and the selection of experiments does rather point in that direction), the project seems a bit underwhelming. Would this be any different, fundamentally, from what Terry Winograd was doing, getting on for forty years ago (albeit with SHRDLU, a less charismatic robot)?


  1. Tadayuki Akimoto says:

    Dear Sir,

    We would like to introduce MLAS (the Multi-Language-Acquiring System), developed by Prof. Suga of Japan Women’s University. It may now be regarded as capable of language acquisition at the level of a four-to-five-year-old child, and we can demonstrate its performance in both English and Japanese. Our target is to reach the acquisition level of a twelve-year-old child in the near future. Please see the attached paper by Prof. Suga.

    We gave the first demonstration at Prof. Suga’s office on September 6th to Dr. Poh, who came from England.
    The demonstration and discussion of MLAS took three hours; Dr. Poh understood what MLAS can do and said he would pass the information on to his colleagues. Prof. Suga answered all of his questions.

    Best regards,

    Tadayuki Akimoto
    MLAS Japan

    3-19-3 NishiKoiwa

  2. Lloyd Rice says:

    With respect to learning language, I’d like to take a slightly different view. You have said that they (babies) don’t get enough info to reverse-engineer the rules. But they do get a huge amount of info in ways that can be averaged, and this seems to be what we are amazingly good at doing. I do not believe the “rules” need to be pre-structured (via DNA) in quite the way that Chomsky would say. I think it is more of a general outline of verb structures, much as Steven Pinker has discussed. From there, we take the averages of correlations between incoming sounds and the totality of situational content (the semantics), and build up the structures we need to be able to use language.

    I don’t know how the iCub works, but I am hoping to see an experiment along the lines suggested. I am not at all confident that we yet know enough to build the required averager, though.
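    [Ed.: the “averaging” idea above can be made concrete as cross-situational learning – tally how often each heard word co-occurs with each thing present in the scene, and pair the word with its most frequent companion. A minimal sketch, with scenes, words, and referent labels invented purely for illustration:]

    ```python
    from collections import defaultdict

    def learn(scenes):
        """Count word/referent co-occurrences across scenes and map each
        word to the referent it co-occurs with most often."""
        counts = defaultdict(lambda: defaultdict(int))
        for words, referents in scenes:
            for w in words:
                for r in referents:
                    counts[w][r] += 1
        return {w: max(rs, key=rs.get) for w, rs in counts.items()}

    # Four toy "scenes": the words heard, and the things present.
    scenes = [
        (["ball", "red"],  ["BALL", "RED"]),
        (["ball", "blue"], ["BALL", "BLUE"]),
        (["cup", "red"],   ["CUP", "RED"]),
        (["cup", "blue"],  ["CUP", "BLUE"]),
    ]
    lexicon = learn(scenes)
    # Across scenes, "ball" co-occurs with BALL twice but with RED and
    # BLUE only once each, so the ambiguity washes out in the average.
    ```

    [No single scene disambiguates any word here; only the accumulated statistics do – which is the point of the comment.]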

  3. Lloyd Rice says:

    And, of course, you are quite correct that the robot would need to “understand the situation”.
