Selves without concepts

Thinking without concepts is a strange idea at first sight; isn’t it always the concept of a thing that’s involved in thought, rather than the thing itself? But I don’t think it’s really as strange as it seems. Without looking too closely into the foundations at this stage, I interpret conceptual thought as simply being one level higher in abstraction than its non-conceptual counterpart.

Dogs, I think, are perfectly capable of non-conceptual thinking. Show them the lead or rattle the dinner bowl and they assent enthusiastically to the concrete proposal. Without concepts, though, dogs are tied to the moment and the actual; it’s no good asking the dog whether it would prefer walkies tomorrow morning or tomorrow afternoon; the concept cannot gain a foothold – nothing more abstract than walkies now can really do so. That doesn’t mean we should deny a dog consciousness – the difference between a conscious and an unconscious dog is pretty clear – only that we should deny it certain human levels of thought. The advanced use of language and symbols certainly requires concepts, though I don’t think the two are synonymous; conceptual but inexplicit thought seems a viable option to me. Some, though, have thought that it takes language to build meaningful self-consciousness.

Kristina Musholt has been looking briefly at whether self-consciousness can be built out of purely non-conceptual thought, particularly in response to Bermudez, and summarising the case made in her book Thinking About Oneself.
Musholt suggests that non-conceptual thought reflects knowledge-how rather than knowledge-that; without agreeing completely about that, we can agree that non-conceptual thought can only be expressed through action and so is inherently about interaction with the world, which I take to be her main point.

Following Bermudez it seems we are to look for three things from any candidate for self-awareness; namely,

(1) non-accidental self-reference,
(2) immediate action relevance, and
(3) immunity to error through misidentification.

That last one may look a bit scary; it’s simply the point that you can’t be wrong about the identity of the person thinking your own thoughts. I think there are some senses in which this ain’t necessarily so, but for present purposes it doesn’t really matter. Bermudez was concerned to refute those who think that self-consciousness requires language; he thought any such argument collapses into circularity; to construct self-consciousness out of language you have to be able to talk about yourself, but talking about yourself requires the very self-awareness you were supposed to be building.

Bermudez, it seems, believes we can go elsewhere and get our self-awareness out of something like that implicit certainty we mentioned earlier. As thought implies the thinker, non-conceptual thoughts will serve us perfectly well for these purposes. Musholt, though broadly in sympathy, isn’t happy with that. While the self may be implied simply by the existence of non-conceptual thoughts, she points out that it isn’t represented, and that’s what we really need. For one thing, it makes no sense to her to talk about immunity from error when it applies to something that isn’t even represented – it’s not that error is impossible, it’s that the whole concept of error or immunity doesn’t even arise.

She still wants to build self-awareness out of non-conceptual thought, but her preferred route is social. As we noted, she thinks non-conceptual thought is all about interaction with the world, and she suggests that it’s interaction with other people that provides the foundation for our awareness of ourselves. It’s our experience of other people that ultimately grounds our awareness of ourselves as people.

That all seems pretty sensible. I find myself wondering about dogs, and about the state of mind of someone who grew up entirely alone, never meeting any other thinking being. It’s hard even to form a plausible thought experiment about that, I think. The human mind being what it is, I suspect that if no other humans or animals were around, inanimate objects would be assigned imaginary personalities and some kind of substitute society cobbled together. Would the human being involved end up with no self-awareness, some strangely defective self-awareness (perhaps subject to some kind of dissociative disorder?), or broadly normal? I don’t even have any clear intuition on the matter.

Anyway, we should keep track of the original project, which essentially remains the one set out by Bermudez. Even if we don’t prefer Musholt’s proposal to his, it all serves to show that there is actually quite a rich range of theoretical possibilities here, which tends to undermine the view that linguistic ability is essential. To me it just doesn’t seem very plausible that language should come before self-awareness, although I think it does come before certain forms of self-awareness. The real take-away, perhaps, is that self-awareness is a many-splendoured thing, and different forms of it exist on all three of the levels mentioned and surely more, too. This conclusion partly vindicates the attack on language as the sole basis for self-awareness, even while it undercuts Bermudez’s circularity argument: if self-awareness actually comes in lots of forms, then the sophisticated, explicit, language-based kind doesn’t need to pull itself up by its bootstraps; it can grow out of less explicit versions.

Anyway, Musholt has at least added to our repertoire a version of self-awareness which seems real and interesting – if not necessarily the sole or most fundamental version.

Slippery Humanity

There were a number of reports recently that a robot had passed ‘one of the tests for self-awareness’. They seem to stem mainly from this New Scientist piece (free registration may be required to see the whole thing, but honestly I’m not sure it’s worth it). That in turn reported an experiment conducted by Selmer Bringsjord of Rensselaer, due to be presented at the Ro-Man conference in a month’s time. The programme for the conference looks very interesting and the experiment is due to feature in a session on ‘Real Robots That Pass Human Tests of Self Awareness’.

The claim is that Bringsjord’s bot passed a form of the Wise Man test. The story behind the Wise Man test has three WMs tested by the king; he makes them wear hats which are either blue or white: they cannot see their own hat but can see both of the others. They’re told that there is at least one blue hat, and that the test is fair, to be won by the first WM who correctly announces the colour of his own hat. There is a chain of logical reasoning which produces the right conclusion, but we can cut to the chase by noticing that the test can’t be fair unless all the hats are the same colour, because all other arrangements give one WM an advantage. Since at least one hat is blue, they all are.
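For anyone who likes to see the shortcut argument made mechanical, here is a minimal sketch in Python – my own illustration, nothing to do with Bringsjord’s experiment. It enumerates every hat assignment with at least one blue hat, treats ‘fair’ as meaning that every wise man sees exactly the same pair of hats on the other two, and confirms that only the all-blue arrangement survives.

```python
from itertools import product

def fair(hats):
    # 'Fair' here means no wise man has an informational advantage:
    # each one sees exactly the same pair of hats on the other two.
    views = [tuple(sorted(h for j, h in enumerate(hats) if j != i)) for i in range(3)]
    return len(set(views)) == 1

# Enumerate every possible assignment of 'blue' or 'white' hats to the three wise men.
candidates = [hats for hats in product(['blue', 'white'], repeat=3)
              if 'blue' in hats    # at least one hat is blue
              and fair(hats)]      # and the test is fair

print(candidates)  # prints [('blue', 'blue', 'blue')] - the only arrangement left
```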

You’ll notice that this is essentially a test of logic, not self-awareness. If solving the problem required being aware that you were one of the WMs, then we who merely read about it wouldn’t be able to come up with the answer – because we’re not one of the WMs and couldn’t possibly have that awareness. But there’s sorta, kinda something about working with other people’s points of view in there.

Bringsjord’s bots actually did something rather different. They were apparently told that two of the three had been given a ‘dumbing’ pill that stopped them from being able to speak (actually a switch had been turned off; were the robots really clever enough to understand that distinction and the difference between a pill and a switch?); then they were asked ‘did you get the dumbing pill?’ Only one, of course, could answer, and duly answered ‘I don’t know’; then, having heard its own voice, it was able to go on to say ‘Oh, wait, now I know…!’
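As far as one can tell from the report, the essential trick is just that the speaking robot recognises the voice it hears as its own and revises its answer accordingly. Here is a minimal sketch of that loop; the names, the `silenced` flag and the voice-matching rule are my own inventions for illustration, not anything from Bringsjord’s actual code.

```python
class Robot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced  # stands in for the 'dumbing pill' (really a muted voice switch)

    def try_to_answer(self):
        # A silenced robot produces no sound at all; an unsilenced one can only say 'I don't know'.
        return None if self.silenced else f"{self.name}: I don't know."

    def hear(self, utterance):
        # Recognising the voice as its own is the evidence: it spoke, so it wasn't dumbed.
        if utterance.startswith(self.name):
            print(f"{self.name}: Oh, wait, now I know - I wasn't given the pill.")

robots = [Robot("R1", silenced=True), Robot("R2", silenced=True), Robot("R3", silenced=False)]
for speaker in robots:
    utterance = speaker.try_to_answer()
    if utterance:
        print(utterance)
        for listener in robots:
            listener.hear(utterance)
```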

This test is obviously different from the original in many ways; it doesn’t involve the same logic. Fairness, an essential factor in the original version, doesn’t matter here, and in fact the test is egregiously unfair; only one bot can possibly win. The bot version seems to rest mainly on the robot being able to distinguish its own voice from those of the others (of course the others couldn’t answer anyway; if they’d been really smart they would all have tried to answer ‘I wasn’t dumbed’, knowing that if they had been dumbed the incorrect conclusion would never be uttered). It does perhaps have a broadly similar sorta, kinda relation to awareness of points of view.

I don’t propose to try to unpick the reasoning here any further: I doubt whether the experiment tells us much, but as presented in the New Scientist piece the logic is such a dog’s breakfast and the details are so scanty that it’s impossible to get a proper idea of what is going on. I should say that I have no doubt Bringsjord’s actual presentation will be impeccably clear and well-justified in both its claims and its reasoning; foggy reports of clear research are more common than vice versa.

There’s a general problem here about the slipperiness of defining human qualities. Ever since Plato attempted to define a man as ‘a featherless biped’ and was gleefully refuted by Diogenes with a plucked chicken, every definition of the special quality that defines the human mind seems to be torpedoed by counter-examples. Part of the problem is a curious bind whereby the task of definition requires you to give a specific test task; but it is the very non-specific open-ended generality of human thought you’re trying to capture. This, I expect, is why so many specific tasks that once seemed definitively reserved for humans have eventually been performed by computers, which perhaps can do anything which is specified narrowly enough.

We don’t know exactly what Bringsjord’s bots did, and it matters. They could have been programmed explicitly just to do exactly what they did do, which is boring; they could have been given some general-purpose module that does not terminate with the first answer and shows up well in these circumstances, which might well be of interest; or they could have been endowed with massive understanding of the real-world significance of such matters as pills, switches, dumbness, wise men, and so on, which would be a miracle and raise the question of why Bringsjord was pissing about with such trivial experiments when he had such godlike machines to offer.

As I say, though, it’s a general problem. In my view, the absence of any details about how the Room works is one of the fatal flaws in John Searle’s Chinese Room thought experiment; arguably the same issue arises for the Turing Test. Would we award full personhood to a robot that could keep up a good conversation? I’m not sure I would unless I had a clear idea of how it worked.

I think there are two reasonable conclusions we can draw, both depressing. One is that we can’t devise a good test for human qualities because we simply don’t know what those qualities are, and we’ll have to solve that imponderable riddle before we can get anywhere. The other possibility is that the specialness of human thought is permanently indefinable. Something about that specialness involves genuine originality, breaking the system, transcending the existing rules; so just as the robots will eventually conquer any specific test we set up, the human mind will always leap out of whatever parameters we set up for it.

But who knows, maybe the Ro-Man conference will surprise us with new grounds for optimism.