Meh-bots

Do robots care? Aeon has an edited version of the inaugural Margaret Boden Lecture, delivered by Boden herself. You can see the full lecture above. Among other things, she tells us that the robots are not going to take over because they don’t care. No computer has actual motives the way human beings do; computers are indifferent to what happens (if we can even speak of indifference in a case where no desire or aversion is possible).

No doubt Boden is right; it’s surely true at least that no current computer has anything that’s really the same as human motivation. For me, though, she doesn’t provide a convincing account of why human motives are special and why computers can’t have them, and perhaps doesn’t sufficiently engage with the possibility that robots might take over the world (or at least do various bad, out-of-control things) without having human motives, or caring what happens in the fullest sense. We know already that learning systems set goals by humans are prone to finding cheats or expedients never envisaged by the people who set up the task. While it seems a bit of a stretch to suppose that a supercomputer might enslave all humanity in pursuit of its goal of filling the world with paperclips (about which, however, it doesn’t really care), it seems quite possible that real systems might do some dangerous things. Might a self-driving car (have things gone a bit quiet on that front, by the way?) decide that its built-in goal of not colliding with other vehicles can be pursued most effectively by forcing everyone else off the road?

What is the ultimate source of human motivation? There are two plausible candidates that Boden doesn’t mention. One is qualia; I think John Searle might say, for example, that it’s things like the quale of hunger, how hungriness really feels, that are the roots of human desire. That nicely explains why computers can’t have such motives, but for me the old dilemma looms. If qualia are part of the causal account, then they must be naturalisable and in principle available to machines. If they aren’t part of the causal story, how do they influence human behaviour?

Less philosophically, many people would trace human motives to the evolutionary imperatives of survival and reproduction. There must be some truth in that, but isn’t there also something special about human motivation, something detached from the struggle to live?

Boden seems to rest largely on social factors, which computers, as non-social beings, cannot share in. No doubt social factors are highly important in shaping and transmitting motivation, but what about Baby Crusoe, who somehow grew up with no social contact? His mental state may be odd, but would we say he has no more motives than a computer? Then again, why can’t computers be social, either by interacting with each other, or by joining in human society? It seems they might talk to human beings, and if we disallow that as not really social, we are in clear danger of begging the question.

For me the special, detached quality of human motivation arises from our capacity to imagine and foresee. We can randomly or speculatively envisage future states, decide we like or detest them, and plot a course accordingly, coming up with motives that don’t grow out of current circumstances. That capacity depends on the intentionality or aboutness of consciousness, which computers entirely lack – at least for now.

But that isn’t quite what Boden is talking about, I think; she means something in our emotional nature. That – human emotions – is a deep and difficult matter on which much might be said; but at the moment I can’t really be bothered…


That’s you all over…

An interesting study at Vanderbilt University (something not quite right about the brain picture on that page) suggests that consciousness is not narrowly localised within small regions of the cortex, but occurs when lots of connections to all regions are active. This is potentially of considerable significance, but some caution is appropriate.

The experiment asked subjects to report whether they could see a disc that flashed up only briefly, and how certain they were about it. Then it compared scans from occasions when awareness of the disc was clearly present or absent. The initial results provided the same kind of pattern we’ve become used to, in which small regions became active when awareness was present. Hypothetically these might be regions particularly devoted to disc detection; other studies in the past have found patterns and regions that appeared to be specific for individual objects, or even the faces of particular people.

Then, however, the team went on to assess connectedness, and found that awareness was associated with many connections to all parts of the cortex. This might be taken to mean that while particular small bits of brain may have to do with particular things in the world, awareness itself is something the whole cortex does. This would be a very interesting result, congenial to some, and it would certainly affect the way we think about consciousness and its relation to the brain.

However, we shouldn’t get too carried away too quickly. To begin with, the study was about awareness of a flashing disc; a legitimate example of a conscious state, but not a particularly complex one and not necessarily typical of distinctively human types of higher-level conscious activity. Second, I’m not remotely competent to make any technical judgement about the methods used to assess what connections were in place, but I’d guess there’s a chance other teams in the field might have some criticisms.

Third, there seems to be scope for other interpretations of the results. At best we know that moments of disc awareness were correlated with moments of high connectedness. That might mean the connectedness caused or constituted the awareness, but it might also mean that it was just something that happened at the same time. Perhaps those narrow regions are still doing the real work: after all, when there’s a key political debate the rest of the country connects up with it, but the debate still happens in a single chamber and would happen just the same if the wider connectivity failed. It might be that awareness gives a wide selection of other regions a chance to chip in, or to be activated in turn, but that that is not an essential feature of the experience of the disc.

For some people, the idea of consciousness being radically decentralised will be unpalatable. To them, it’s a functional matter which more or less has to happen in a defined area. OK, that area could be stretched out, but the idea that merely linking up disparate parts of the cortex could in itself bring about a conscious state will seem too unlikely to be taken seriously. For others, who think the brain itself is too narrow an area to fully contain consciousness, the results will hardly seem to go far enough.

For myself, I feel some sympathy with the view expressed by Margaret Boden in this interview, where she speaks disparagingly of current neuroscience being mere ‘natural history’ – we just don’t have enough of a theory yet to know what we’re looking at. We’re still in the stage where we’re merely collecting facts, findings that will one day fit neatly into a proper theoretical framework, but at the moment don’t really prove or disprove any general hypotheses. To put it another way, we’re still collecting pieces of the jigsaw puzzle but we don’t have any idea what the picture is. When we spot that, then perhaps the pieces will all… connect.