Picture: ducks. The New Scientist, under the title ‘Tests that show machines closing in on human abilities’ has a short review of some different ways in which robotic or computer performance is being tested against human achievement. The piece is not really reporting fresh progress in the way its title suggests – the success of Weizenbaum’s early chat-bot Eliza is not exactly breaking news, for example – but I think the overall point is a good one. In the last half of the last century, it was often assumed that progress towards AI would see all the barriers come down at once: as soon as we had a robot that could meet Turing’s target of a few minutes’ successful small-talk, fully-functioning androids would rapidly follow.

In practice, Turing’s test has not been passed as he expected, although some stalwart souls continue to work on it. But we have seen the overall problem of human mentality unpicked into a series of substantial but lesser challenges. Human levels of competence remain the target, but now it is competence in different narrow fields, with no expectation that solving the problem in one area solves it in all of them.

The piece ends with a quote from Stevan Harnad which suggests he clings to the old way of looking at this:

“If a machine can prove indistinguishable from a human, we should award it the respect we would to a human.”

That may be true, in fact, but the question is, indistinguishable in which respects? People often quote a particular saying in this respect: if it walks like a duck and quacks like a duck, it’s a duck. Actually, even in the case of ducks this isn’t as straightforward as it might seem, since other wildfowl may be ducklike in some respects. Given a particular bird, how many of us could say with any certainty whether it were a Velvet Scoter, a White-winged Scoter – or just a large sea-duck? But it’s worse than that. The Duck Law, as we may call it, works fairly well in real life; but that’s because as a matter of fact there aren’t all that many anatine entities in the world other than outright ducks. If there were a cunning artificer who was bent on turning out false, mechanical ducks like the legendary one made by Vaucanson, which did not merely walk and quack like a duck, but ate and pooped like one, we should need a more searching principle. When it comes to the Turing Test, that is pretty much the position we find ourselves in.

There is, of course, a more rigorous version of Duck Law which is intellectually irreproachable, namely Leibniz’s Law. Loosely, this says that if two objects share all the same properties, they are the same object. The problem is that, in order to work, Leibniz’s Law has to be applied in the most rigorous fashion: it requires that all properties must be the same. To be indistinguishable from a human being in this sense means literally indistinguishable, i.e. having human guts, a human mother, and so on.
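The demand that all properties be the same can be put formally. One standard second-order rendering of the identity of indiscernibles runs:

```latex
% Identity of indiscernibles: if x and y agree on every property F,
% then x and y are identical. The unrestricted quantifier over F is
% what makes the principle so exacting.
\forall F\,\bigl(Fx \leftrightarrow Fy\bigr) \rightarrow x = y
```

The whole force of the principle lies in that unrestricted quantifier: restrict the range of properties F (say, to conversational behaviour over a few minutes) and you are back with Duck Law, not Leibniz’s Law.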

So, in which respects must a robot resemble a human being in order to be awarded the same respect as a human? It now seems unlikely that a machine will pass the original Turing Test soon; but even if it did, would that really be enough?  Just looking like a human, even in flexible animation which reproduces the pores of the skin and typical human movements, is clearly not enough. Nor is being able to improvise jazz tunes seamlessly with a human player. But these things are all significant achievements nevertheless. Perhaps this is the way we make progress.

Or possibly, at some stage in the future, someone will notice that if he were to bolt together a hundred different software modules and some appropriate bits of hardware, all by then readily available, he could theoretically produce a machine able to do everything a human can do; but he won’t by then see any point in actually doing it.

3 Comments

  1. Paul Bello says:

    I think a better question to ask is how humans recognize each other as humans, which we’re pretty good at doing by 6 months of age, well before we can answer questions. We use animacy cues (i.e. self-propelled biological motion), observational data (i.e. typically textured & colored skin), and facial features corresponding to certain classes of emotions. Non-verbal looking-time experiments have recently started to suggest that children as young as 15 months might have some pre-conceptual understanding of mental states that compels them to be surprised when other agents don’t act in accordance with “beliefs” that the infants assume them to have. As with (almost) everything in AI, we’ve put the cart before the horse in demanding (on the TT account) that machines be able to perform one of the pinnacle feats of human-level intelligence: engaging in dialogue. While robust language use surely can be seen as a sufficient condition for human-level AI, it isn’t by any means a necessary one.

  2. Michael Baggot says:

    It seems to me that the criterion for attributing humanity to others (regardless of whether they are animate or inanimate) is not intellectual competence à la the Turing Test but the ability to engage in symbiotic emotional relationships. This is what we do with other humans and how we come to identify with them; specifically, we stroke them emotionally and they in turn stroke us. We may make decisions about how or whether to do this using analytical intelligence, but the underlying needs are not only far more elemental but, in fact, totally alien to any ordinary, inanimate computational paradigm.

  3. Micha says:

    I recommend deleting this, I just have no other way to reach this blog’s author.

    I thought you would be interested in “Magenta Ain’t a Color” by Liz Elliot. The author shows that magenta is not present in the spectrum, and in fact is a more pure quale than other colors. “Magenta is the evidence that the brain takes option b – it has apparently constructed a colour to bridge the gap between red and violet, because such a colour does not exist in the light spectrum. Magenta has no wavelength attributed to it, unlike all the other spectrum colours.” She actually uses the word quale, that’s not my interpretation.

    -Micha
    micha@aishdas.org
