Wise men

There were a number of reports recently that a robot had passed ‘one of the tests for self-awareness’. They seem to stem mainly from this New Scientist piece (free registration may be required to see the whole thing, but honestly I’m not sure it’s worth it). That in turn reported an experiment conducted by Selmer Bringsjord of Rensselaer, due to be presented at the Ro-Man conference in a month’s time. The programme for the conference looks very interesting and the experiment is due to feature in a session on ‘Real Robots That Pass Human Tests of Self Awareness’.

The claim is that Bringsjord’s bot passed a form of the Wise Man test. The story behind the Wise Man test has three WMs tested by the king; he makes them wear hats which are either blue or white: they cannot see their own hat but can see both of the others. They’re told that there is at least one blue hat, and that the test is fair, to be won by the first WM who correctly announces the colour of his own hat. There is a chain of logical reasoning which produces the right conclusion: we can cut to the chase by noticing that the test can’t be fair unless all the hats are the same colour, because all other arrangements give one WM an advantage. Since at least one hat is blue, they all are.
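The longer chain of reasoning can in fact be checked by brute force: treat each hat assignment as a possible world, let a WM announce as soon as the worlds consistent with what he sees pin down his own hat, and let each round of silence eliminate worlds. A quick sketch, with everything (names, structure) invented purely for illustration:

```python
from itertools import product

BLUE, WHITE = "blue", "white"

def rounds_until_solved(actual):
    """Return the round in which the first wise man can announce his
    own hat colour in the world `actual`, by iterated elimination of
    possible worlds."""
    # All assignments consistent with the king's announcement:
    # at least one hat is blue.
    worlds = [w for w in product([BLUE, WHITE], repeat=3) if BLUE in w]

    for rnd in range(1, 10):
        # WM i can announce if every remaining world matching what he
        # sees (the other two hats) gives his own hat a single colour.
        def can_announce(i, world):
            seen = world[:i] + world[i + 1:]
            candidates = {w[i] for w in worlds
                          if w[:i] + w[i + 1:] == seen}
            return len(candidates) == 1

        if any(can_announce(i, actual) for i in range(3)):
            return rnd
        # Silence is informative: drop every world in which someone
        # would already have spoken up.
        worlds = [w for w in worlds
                  if not any(can_announce(i, w) for i in range(3))]
    return None
```

In the all-blue world the answer comes only in the third round, and it comes to all three WMs at once; in every other arrangement one WM gets there earlier than the rest, which is the ‘fairness’ shortcut in a nutshell.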

You’ll notice that this is essentially a test of logic, not self awareness. If solving the problem required being aware that you were one of the WMs, then we who merely read about it wouldn’t be able to come up with the answer – because we’re not one of the WMs and couldn’t possibly have that awareness. But there’s sorta, kinda something about working with other people’s points of view in there.

Bringsjord’s bots actually did something rather different. They were apparently told that two of the three had been given a ‘dumbing’ pill that stopped them from being able to speak (actually a switch had been turned off; were the robots really clever enough to understand that distinction and the difference between a pill and a switch?); then they were asked ‘did you get the dumbing pill?’ Only one, of course, could answer, and duly answered ‘I don’t know’; then, having heard its own voice, it was able to go on to say ‘Oh, wait, now I know…!’

This test is obviously different from the original in many ways; it doesn’t involve the same logic. Fairness, an essential factor in the original version, doesn’t matter here, and in fact the test is egregiously unfair; only one bot can possibly win. The bot version seems to rest mainly on the robot being able to distinguish its own voice from those of the others (of course the others couldn’t answer anyway; if they’d been really smart they would all have answered ‘I wasn’t dumbed’, knowing that if they had been dumbed the incorrect conclusion would never be uttered). It does perhaps have a broadly similar sorta, kinda relation to awareness of points of view.

I don’t propose to try to unpick the reasoning here any further: I doubt whether the experiment tells us much, but as presented in the New Scientist piece the logic is such a dog’s breakfast and the details are so scanty that it’s impossible to get a proper idea of what is going on. I should say that I have no doubt Bringsjord’s actual presentation will be impeccably clear and well-justified in both its claims and its reasoning; foggy reports of clear research are more common than vice versa.

There’s a general problem here about the slipperiness of defining human qualities. Ever since Plato attempted to define a man as ‘a featherless biped’ and was gleefully refuted by Diogenes with a plucked chicken, every definition of the special quality that defines the human mind seems to be torpedoed by counter-examples. Part of the problem is a curious bind whereby the task of definition requires you to give a specific test task; but it is the very non-specific open-ended generality of human thought you’re trying to capture. This, I expect, is why so many specific tasks that once seemed definitively reserved for humans have eventually been performed by computers, which perhaps can do anything which is specified narrowly enough.

We don’t know exactly what Bringsjord’s bots did, and it matters. They could have been programmed explicitly just to do exactly what they did do, which is boring; they could have been given some general purpose module that does not terminate with the first answer and shows up well in these circumstances, which might well be of interest; or they could have been endowed with massive understanding of the real world significance of such matters as pills, switches, dumbness, wise men, and so on, which would be a miracle and raise the question of why Bringsjord was pissing about with such trivial experiments when he had such godlike machines to offer.
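For concreteness, the ‘boring’ first possibility needs only a few lines. The class and flag names below are my own invention, and of course nothing says Bringsjord’s robots actually worked this way:

```python
class BoringBot:
    """A deliberately hard-coded solver for the dumbing-pill test.
    Purely illustrative; not Bringsjord's implementation."""

    def __init__(self, speech_enabled):
        self.speech_enabled = speech_enabled   # the 'switch'
        self.heard_own_voice = False           # set when the bot hears itself

    def speak(self, text):
        # A dumbed bot produces nothing; a working one hears itself speak.
        if self.speech_enabled:
            self.heard_own_voice = True
            return text
        return None

    def answer_dumbing_question(self):
        first_try = self.speak("I don't know")
        # Hard-coded rule: if I have spoken, I was not dumbed.
        if self.heard_own_voice:
            return first_try + " ... oh, wait, now I know: I didn't get the pill!"
        return None   # a dumbed bot stays silent
```

Run three of these with one switch left on and you reproduce the reported behaviour exactly, with no glimmer of self-awareness anywhere in sight:

```python
bots = [BoringBot(False), BoringBot(False), BoringBot(True)]
answers = [b.answer_dumbing_question() for b in bots]
# only the third bot produces an answer at all
```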

As I say, though, it’s a general problem. In my view, the absence of any details about how the Room works is one of the fatal flaws in John Searle’s Chinese Room thought experiment; arguably the same issue arises for the Turing Test. Would we award full personhood to a robot that could keep up a good conversation? I’m not sure I would unless I had a clear idea of how it worked.

I think there are two reasonable conclusions we can draw, both depressing. One is that we can’t devise a good test for human qualities because we simply don’t know what those qualities are, and we’ll have to solve that imponderable riddle before we can get anywhere. The other possibility is that the specialness of human thought is permanently indefinable. Something about that specialness involves genuine originality, breaking the system, transcending the existing rules; so just as the robots will eventually conquer any specific test we set up, the human mind will always leap out of whatever parameters we set up for it.

But who knows, maybe the Ro-Man conference will surprise us with new grounds for optimism.

7 Comments

  1. Erik says:

    I have no doubt that we will eventually stumble upon the right mechanism to produce something like human intelligence in machines. We are, after all, machines of a type, and somehow we’ve managed it. The question I have is whether or not such an achievement will provide us with any insight into how it works. Personally, I’m not exactly optimistic in this regard.

  2. Jochen says:

    Well, I suppose one could argue that the test the robots were subjected to was in fact more closely connected to ‘self-awareness’ of a kind than the three wise men one: for the successful robot, deducing that it had not been dumbed down from the fact that it was able to speak requires at least some kind of recognition that it is the thing that just spoke, that is, some sense of ownership of its actions.

    But of course, that alone still doesn’t tell us anything: it’d be easy (well, theoretically; I suppose there could be great practical difficulties) to just hard-code the recognition of its own voice, and solve this problem in an expert system kind of way—the speech recognition throws an ‘I have spoken’-flag, then there’s some logic that boils down to ‘if I have spoken, I have not been dumbed down’, which leads to the ‘knowledge’ that it can’t have been dumbed down. None of this, of course, can be assessed from the outside—so the test is ultimately meaningless on its own.

    The same goes for the famous mirror test: it’d be (again, theoretically) easy to hard-code a robot with some self-recognition and the properties of a mirror, and then have it ‘deduce’ that the spot it sees on its reflection is in fact on its own body (or whatever equivalent one might dream up).

    This opens up the (more interesting, I think) question of when we can justifiably take such tests to indicate something nontrivial, i.e. that there is some actual form of ‘self-knowledge’ present in an animal passing the mirror test (as we, or at least the scientists performing those experiments, often do). Perhaps we need some good grounds on which we can expect that the requisite knowledge is not hard-coded in from the beginning, but instead derives from, or is an incidental by-product to, there actually being some form of self-modelling—that is, that it derives from the general-purpose capacities of some agent, rather than being explicitly provided to it for the special-purpose task of just passing that test.

    If we had such justification, then I think we could put some more confidence into these tests—if, for instance, we have a robot developed for some sort of general-purpose problem solving, involving somehow acting on the real world in an arbitrary way, which then incidentally happens to be found capable of passing the mirror test, then I think this would have a much higher credence than just producing a system for the express purpose of passing the mirror test.

    I sometimes think of a candle versus a tightly-focused laser in comparing the capacities of human minds with those of computers or robots: the candle, no matter how dim, is capable of illuminating a room evenly, while the laser only picks out one single spot—albeit with high brightness. In the same way, current computers are capable of executing narrowly defined tasks with breathtaking speed and precision, but fall apart as soon as they are brought out of their comfort zone. And no matter how many lasers you stick together, you won’t be able to replicate the candle’s performance.

  3. Callan S. says:

    I still think if you stuck a bunch of humans in a test and told the testers they were speaking to robots and to rate the robots, a lot of the humans would be rated ‘poor’ or ‘unrealistic’.

    Unrealistic of me?

  4. Peter says:

    I’ve always had reservations about the mirror test. A chimp sees an image of a chimp with a spot, and reaches to check whether it has a spot itself; does that prove that it recognises that the image is one of itself? Not really.
    It could work that way or the chimp might merely think: “Hey, someone’s putting spots on foreheads! Wonder if they got me?”
    On the other hand, a really bright chimp might think: “Hey, someone put a spot on me. No need to touch it or anything because I can see it clearly in the mirror.”

  5. Peter says:

    Callan – I think something like that actually happens in the Loebner contest.

  6. Jochen says:

    Peter, I agree about the mirror test. It also contains an unstated assumption about the dominance of vision. Certainly, a blind animal could be conscious, but it would fail the vision test; likewise, animals who do not emphasize vision (dogs perhaps) might fail without that carrying any implications regarding their sense of self. A dog might devise a smell-based test that we’d fail with flying colours…

  7. Callan S. says:

    Off topic of me, but I think the mirror test is interesting in how animals are often quite aggressive to the mirror – their own initial approach is an aggressive one and of course the mirror repeats it back at them. Then they escalate to more aggressive shows, and so on.

    It’s interesting because it reminds me of how we react to other tribes/outgroups over history (and that second that just passed is history as well). In some ways we fail the mirror test.
