Meh-bots

Do robots care? Aeon has an edited version of the inaugural Margaret Boden Lecture, delivered by Boden herself. You can see the full lecture above. Among other things, she tells us that the robots are not going to take over because they don’t care. No computer has actual motives, the way human beings do; computers are indifferent to what happens (if we can even speak of indifference where no desire or aversion is possible).

No doubt Boden is right; it’s surely true at least that no current computer has anything that’s really the same as human motivation. For me, though, she doesn’t provide a convincing account of why human motives are special and why computers can’t have them, and perhaps doesn’t sufficiently engage with the possibility that robots might take over the world (or at least do various bad out-of-control things) without having human motives, or caring what happens in the fullest sense. We know already that learning systems given goals by humans are prone to finding cheats or expedients never envisaged by the people who set up the task. While it seems a bit of a stretch to suppose that a supercomputer might enslave all humanity in pursuit of its goal of filling the world with paperclips (about which, however, it doesn’t really care), it seems quite possible that real systems might do some dangerous things. Might a self-driving car (have things gone a bit quiet on that front, by the way?) decide that its built-in goal of not colliding with other vehicles can be pursued most effectively by forcing everyone else off the road?
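To make the ‘cheats and expedients’ point concrete, here is a minimal sketch in Python – a toy policy search of my own invention, not any real system, with made-up policies and numbers – in which an optimiser given the literal goal ‘minimise collisions’ settles on a degenerate policy no designer intended:

```python
# Toy illustration of specification gaming (hypothetical policies and numbers).
# The optimiser is told only to minimise collisions; it has no idea that
# getting passengers somewhere was the real point.

def expected_collisions(policy: str) -> float:
    """Crudely simulated outcome of each candidate driving policy."""
    outcomes = {
        "drive_carefully": 0.1,        # rare, unavoidable collisions
        "drive_fast": 2.0,             # frequent collisions
        "never_leave_garage": 0.0,     # a perfect score, perfectly useless
        "force_others_off_road": 0.0,  # also 'perfect', actively dangerous
    }
    return outcomes[policy]

policies = ["drive_carefully", "drive_fast",
            "never_leave_garage", "force_others_off_road"]

# The 'learning system': pick whichever policy optimises the stated goal.
best = min(policies, key=expected_collisions)
print(best)  # never_leave_garage -- the goal is satisfied, the intent is not
```

Real cases are subtler, of course, but the shape is the same: the system optimises exactly what it was given, not what was meant – and it doesn’t care either way.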

What is the ultimate source of human motivation? There are two plausible candidates that Boden doesn’t mention. One is qualia; I think John Searle might say, for example, that it’s things like the quale of hunger, how hungriness really feels, that are the roots of human desire. That nicely explains why computers can’t have motives, but for me the old dilemma looms. If qualia are part of the causal account, then they must be naturalisable and in principle available to machines. If they aren’t part of the causal story, how do they influence human behaviour?

Less philosophically, many people would trace human motives to the evolutionary imperatives of survival and reproduction. There must be some truth in that, but isn’t there also something special about human motivation, something detached from the struggle to live?

Boden seems to rest largely on social factors, which computers, as non-social beings, cannot share in. No doubt social factors are highly important in shaping and transmitting motivation, but what about Baby Crusoe, who somehow grew up with no social contact? His mental state may be odd, but would we say he has no more motives than a computer? Then again, why can’t computers be social, either by interacting with each other, or by joining in human society? It seems they might talk to human beings, and if we disallow that as not really social, we are in clear danger of begging the question.

For me the special, detached quality of human motivation arises from our capacity to imagine and foresee. We can randomly or speculatively envisage future states, decide we like or detest them, and plot a course accordingly, coming up with motives that don’t grow out of current circumstances. That capacity depends on the intentionality or aboutness of consciousness, which computers entirely lack – at least for now.

But that isn’t quite what Boden is talking about, I think; she means something in our emotional nature. That – human emotions – is a deep and difficult matter on which much might be said; but at the moment I can’t really be bothered…


Augment me

All I want for Christmas is a new brain? There seems to have been quite a lot of discussion recently about the prospect of brain augmentation: adding some extra computing power to the cognitive capacities we already have. Is this a good idea? I’m rather sceptical myself, but then I’m a bit of a Luddite in this area; I still don’t much like the idea of controlling a computer with voice commands.

Hasn’t evolution already optimised the brain in certain important respects? I think there may be some truth in that, but it doesn’t look as if evolution has done a perfect job. There are certainly one or two things about the human nervous system that look as if they could easily be improved. Think of the way our neural wiring crosses over from right to left for no particular reason. You could argue that although that serves no purpose it doesn’t do any real harm either; but what about the way our retinas are wired up from the front instead of the back, creating an entirely unnecessary blind spot where the bundle of nerves enters the eye – a blind spot which our brain then stops us seeing, so we don’t even know it’s there?

Nobody is proposing to fix those issues, of course, but aren’t there some obvious respects in which our brains could be improved by adding in some extra computational ability? Could we be more intelligent, perhaps? I think the definition of intelligence is controversial, but I’d say that if we could enhance our ability to recognise complex patterns quickly (which might be a big part of it) that would definitely be a bonus. Whether a chip could deliver that seems debatable at present.

Couldn’t our memories be improved? Human memory appears to have remarkable capacity, but retaining and recalling just those bits of information we need has always been an issue. Perhaps relatedly, we have that annoying inability to hold more than a handful of items in our minds at once, a limitation that makes it all but impossible for us to evaluate complex disjunctions and implications in our heads, so that we can’t mentally follow a lot of branching possibilities very far. It certainly seems that computer records are in some respects sharper, more accurate, and easier to access than the normal human system (whatever the normal human system actually is). It would be great to remember any text at will, for example, or exactly what happened on any given date within our lives. Being able to recall faces and names with complete accuracy would be very helpful to some of us.
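By way of contrast, here is how trivially a machine exhausts the branching possibilities we struggle to hold in our heads – a few lines of Python evaluating an invented compound of disjunctions and implications (the formula is just an example of mine) over every possible assignment:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# An invented compound of disjunctions and implications, of the kind that
# quickly overwhelms human working memory:
#   ((A or B) -> C) and ((C or D) -> not A)
def formula(a: bool, b: bool, c: bool, d: bool) -> bool:
    return implies(a or b, c) and implies(c or d, not a)

# Follow every branch exhaustively: 2**4 = 16 cases, checked instantly.
satisfying = [vals for vals in product([False, True], repeat=4)
              if formula(*vals)]
print(f"{len(satisfying)} of 16 assignments satisfy the formula")
```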

On top of that, couldn’t we improve our capacity for logic, so that we stop being stumped by those problems humans seem so bad at, like the Wason selection task? Or if nothing else, couldn’t we just have the ability to work out any arithmetic problem instantly and flawlessly, the way any computer can?
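The Wason test makes the gap vivid. In the classic version you see four cards – A, K, 4, 7 – each with a letter on one side and a number on the other, and must decide which cards to turn over to test the rule ‘if a card has a vowel on one side, it has an even number on the other’. Most of us pick A and 4; the right answer is A and 7, and a machine gets it by mere enumeration. A sketch (my own encoding of the puzzle):

```python
# Which cards must be flipped? Only those whose hidden side could violate
# the rule "vowel on one side implies even number on the other".

VOWELS = set("AEIOU")

def must_flip(visible: str) -> bool:
    if visible.isalpha():
        # A vowel might conceal an odd number; a consonant can't break the rule.
        return visible in VOWELS
    # An even number can't break the rule; an odd number might conceal a vowel.
    return int(visible) % 2 == 1

cards = ["A", "K", "4", "7"]
print([card for card in cards if must_flip(card)])  # ['A', '7']
```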

The key point here, I think, is integration. On the one hand we have a set of cognitive abilities that the human brain delivers. On the other, we have a different set delivered by computers. Can they be seamlessly integrated? The ideal augmentation would mean that, for example, if I need to multiply two seven-digit numbers I ‘just see’ the answer, the way I can just see that 3+1 is 4. If, on the contrary, I need to do something like ask in my head ‘what is 6397107 multiplied by 8341977?’ and then receive the answer spoken in an internal voice or displayed in an imagined visual image, there isn’t much advantage over using a calculator. In a similar way, we want our augmented memory or other capacity to just inform our thoughts directly, not be a capacity we can call up like an external facility.

So is seamless integration possible? I don’t think it’s possible to say for certain, but we seem to have achieved almost nothing to date. Attempts to plug into the brain so far have relied on setting up artificial linkages. Perhaps we detect a set of neurons that reliably fire when we think about tennis; then we can ask the subject to think about tennis when they want to signal ‘yes’, and detect the resulting activity. It sort of works, and might be of value for ‘locked in’ patients who can’t communicate any other way, but it’s very slow and clumsy otherwise – I don’t think we know for sure whether it even works long-term or whether, for example, the tennis linkage gradually degrades.
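In code, that ‘think of tennis for yes’ protocol is little more than thresholding a noisy signal. Here is a deliberately naive sketch with simulated data – invented firing rates and an invented threshold, no real electrophysiology – just to show why the channel is so slow and fragile:

```python
import random

# Simulated 'tennis imagery' detector. A real BCI classifies rich patterns of
# neural activity; this toy version just thresholds the mean firing rate of a
# few notional neurons. All numbers are invented for illustration.

def firing_rates(thinking_of_tennis: bool, n_neurons: int = 8) -> list[float]:
    base = 30.0 if thinking_of_tennis else 10.0   # spikes/sec, hypothetical
    return [random.gauss(base, 3.0) for _ in range(n_neurons)]

def decode(rates: list[float], threshold: float = 20.0) -> str:
    return "yes" if sum(rates) / len(rates) > threshold else "no"

# The patient answers one yes/no question by imagining tennis (or not).
print(decode(firing_rates(thinking_of_tennis=True)))   # usually 'yes'
print(decode(firing_rates(thinking_of_tennis=False)))  # usually 'no'
```

One bit per question; and if the tennis pattern drifts over time, the fixed threshold quietly stops working – exactly the sort of degradation worried about above.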

What we really want to do is plug directly into consciousness, but we have little idea of how. The brain does not modularise its conscious activity to suit us, and it may be that the only places we can effectively communicate with it are the places where it draws data together for its existing input and output devices. We might be able to feed images into early visual processing or take output from nerves that control muscles, for example. But if we’re reduced to that, how much better is that level of integration going to be than simply using our hands and eyes anyway? We can do a lot with those natural built-in interfaces; simple reading and writing may well be the greatest artificial augmentation the brain can ever get. So although there may be some cool devices coming along at some stage, I don’t think we can look for godlike augmented minds any time soon.

Incidentally, this problem of integration may be one good reason not to worry about robots taking over. If robots ever get human-style motivation, consciousness, and agency, the chances are that they will get them in broadly the way we get them. This suggests they will face the same integration problem that we do; seven-digit multiplication may be intrinsically no easier for them than it is for us. Yes, they will be able to access computers and use computation to help them, but you know, so can we. In fact we might be handier with that old keyboard than they are with their patched-together positronic brain-computer linkage.


Our unconscious overlords…

We are in danger of being eliminated by aliens who aren’t even conscious, says Susan Schneider. Luckily, I think I see some flaws in the argument.

Humans are probably not the greatest intelligences in the Universe, she suggests; other civilisations have probably been going for billions of years longer. Perhaps, but maybe they have all attained enlightenment and moved on from this plane, leaving us young dummies as the cleverest, or indeed the only, people around?

Schneider thinks the older cultures are likely to be post-biological, having moved on into machine forms of intelligence. This transition may only take a few hundred years, she suggests, to ‘judge from the human experience’ (Have we transitioned? Did I miss it?). She says transistors are much faster than neurons and computer power is almost indefinitely expandable, so AI will end up much cleverer than us.

Then there may be a problem over controlling these superlatively bright computers, as foreseen by Stephen Hawking, Elon Musk, and Bill Gates. Bill Gates? The man who, by exploiting the monopoly handed to him by IBM, was able to impose on us all the crippled memory management of DOS and the endless vulnerabilities of Windows? Well, OK; I’m not sure he has much idea about the technology, but he’s got form on trying to retain control of things.

Schneider more or less takes it for granted that computation is cogitation, and that faster computation means smarter thinking. It’s true that computers have become very good at games we didn’t think they could play at all, and she reminds us of some examples. But to take over from human beings, computers need more than just computation. To mention two things, they need agency and intentionality, and to date they haven’t shown any capacity at all for either. I don’t rule out the possibility of both being generated artificially in future, but the ever-growing ability of computers to do more sums more quickly is strictly irrelevant. Those future artificial people of whom we know nothing may be able to exploit the power of computation – but so can we. If computers are good at winning battles, our computers can fight their computers.

Schneider also takes it for granted that her computational aliens will be hostile and likely to come over and fuck us up good if they ever learn we exist. They might, for example, infect our systems with computer viruses (probably not, I think: without Bill Gates providing their operating systems, computer viruses presumably remained a purely theoretical matter for them). Sending signals out into the galaxy, she reckons, is a really bad idea; our radio signals are already out there, but luckily they’re faint and easily missed (even by unimaginably super-intelligent aliens, it seems). Premature to worry, surely, because even our earliest radio signals can be no more than about a hundred light years away so far – not much of a distance in galactic terms. But why would super-intelligent entities behave like witless bullies anyway? Somewhere between benign and indifferent seems a more likely attitude.

To me this whole scenario seems to embody a selective prognosis anyway. The aliens have overcome the limitation of the speed of light and feed off black holes (no clue, sorry), yet they still run on the computation we currently think is really smart. A hundred years ago no-one would have supposed computation was going to be the dominant technology of our era, let alone the next million years; maybe by 2116 we’ll look back on it the way we fondly remember steam locomotion.

Schneider’s most arresting thought is that her dangerous computational aliens might lack qualia, and so in that sense not be conscious. It seems to me more natural to suppose that acquiring human-style thought would necessarily involve acquiring human-style qualia. Schneider seems to share the Searlian view that qualia have something to do with unknown biological qualities of neural tissue which silicon can never share. Even if qualia could be engineered into silicon, why would the aliens bother, she asks – it’s just an extra overhead that might add unwanted ethical issues. Most surprisingly, she supposes that we might be able to test the proposition: if for medical reasons we replaced parts of a functioning human brain with chips, we might then find that qualia were lost.

But how would we know? Ex hypothesi, qualia have no causal powers and so could not cause any change in our behaviour. Even if the qualia vanished, the fact could not be reported. None of the things we say about qualia were caused by qualia; that’s one of the bizarre things about them.

Anyway, I say if we’re going to indulge in this kind of wild speculation, let’s really go big; I say the super-intelligent aliens will be powered by hyper-computation, a technology that makes our concept of computation look like counting on your fingers; and they’ll have not only qualia but hyper-qualia, experiential phenomenologica whose awesomeness we cannot even speak of. They will be inexpressibly kindly and wise, and will be borne to Earth to visit us on special wave-forms, beyond our understanding but hugely hyperbolic…