All I want for Christmas is a new brain? There seems to have been quite a lot of discussion recently about the prospect of brain augmentation; adding some extra computing power to the cognitive capacities we have already. Is this a good idea? I’m rather sceptical myself, but then I’m a bit of a Luddite in this area; I still don’t much like the idea of controlling a computer with voice commands.

Hasn’t evolution already optimised the brain in certain important respects? I think there may be some truth in that, but it doesn’t look as if evolution has done a perfect job. There are certainly one or two things about the human nervous system that look as if they could easily be improved. Think of the way our neural wiring crosses over from right to left for no particular reason. You could argue that although that serves no purpose, it doesn’t do any real harm either; but what about the way our retinas are wired up from the front instead of the back, creating an entirely unnecessary blind spot where the bundle of nerves enters the eye – a blind spot which our brain then stops us seeing, so we don’t even know it’s there?

Nobody is proposing to fix those issues, of course, but aren’t there some obvious respects in which our brains could be improved by adding in some extra computational ability? Could we be more intelligent, perhaps? I think the definition of intelligence is controversial, but I’d say that if we could enhance our ability to recognise complex patterns quickly (which might be a big part of it) that would definitely be a bonus. Whether a chip could deliver that seems debatable at present.

Couldn’t our memories be improved? Human memory appears to have remarkable capacity, but retaining and recalling just those bits of information we need has always been an issue. Perhaps relatedly, we have that annoying inability to hold more than a handful of items in our minds at once, a limitation that makes it impossible for us to evaluate complex disjunctions and implications, so that we can’t mentally follow a lot of branching possibilities very far. It certainly seems that computer records are in some respects sharper, more accurate, and easier to access than the normal human system (whatever the normal human system actually is). It would be great to remember any text at will, for example, or exactly what happened on any given date within our lives. Being able to recall faces and names with complete accuracy would be very helpful to some of us.
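To make the contrast concrete, here is a small sketch (my own illustration, not anything from the discussion above) of the kind of branching evaluation that defeats working memory but is trivial for a machine: exhaustively checking every truth assignment of a propositional formula. Even three variables generate eight branches, which is already at the edge of what most of us can track mentally.

```python
from itertools import product

def satisfying_assignments(formula, variables):
    """Return every truth assignment that makes `formula` true,
    by brute-force enumeration of all 2**n branches."""
    results = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if formula(env):
            results.append(env)
    return results

# Example formula: (A or B) implies (not C) -- three variables, eight branches.
formula = lambda e: (not (e["A"] or e["B"])) or (not e["C"])
sols = satisfying_assignments(formula, ["A", "B", "C"])
print(len(sols))  # 5 of the 8 assignments satisfy the formula
```

The computer simply walks all the branches; a human trying the same thing in their head runs out of working-memory slots almost immediately.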

On top of that, couldn’t we improve our capacity for logic so that we stop being stumped by those problems humans seem so bad at, like the Wason test? Or if nothing else, couldn’t we just have the ability to work out any arithmetic problem instantly and flawlessly, the way any computer can do?
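As an illustration of how mechanical the logic actually is (a sketch of my own; the card values are the standard textbook version of the task), here is the Wason selection task reduced to a one-line rule check. The rule is "if a card shows a vowel, its hidden side shows an even number"; the only cards worth turning are those whose hidden side could falsify that rule, which a computer identifies without any of the bias that trips humans up.

```python
VOWELS = set("AEIOU")

def must_turn(visible):
    """A card must be turned iff its hidden side could falsify the rule
    'vowel on one side implies even number on the other'."""
    if visible.isalpha():
        return visible.upper() in VOWELS   # a vowel might hide an odd number
    return int(visible) % 2 == 1           # an odd number might hide a vowel

cards = ["E", "K", "4", "7"]
picks = [c for c in cards if must_turn(c)]
print(picks)  # ['E', '7'] -- most people wrongly pick the '4' instead of the '7'
```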

The key point here, I think, is integration. On the one hand we have a set of cognitive abilities that the human brain delivers. On the other, we have a different set delivered by computers. Can they be seamlessly integrated? The ideal augmentation would mean that, for example, if I need to multiply two seven-digit numbers I ‘just see’ the answer, the way I can just see that 3+1 is 4. If, on the contrary, I need to do something like ask in my head ‘what is 6397107 multiplied by 8341977?’ and then receive the answer spoken in an internal voice or displayed in an imagined visual image, there isn’t much advantage over using a calculator. In a similar way, we want our augmented memory or other capacity to just inform our thoughts directly, not be a capacity we can call up like an external facility.

So is seamless integration possible? I don’t think it’s possible to say for certain, but we seem to have achieved almost nothing to date. Attempts to plug into the brain so far have relied on setting up artificial linkages. Perhaps we detect a set of neurons that reliably fire when we think about tennis; then we can ask the subject to think about tennis when they want to signal ‘yes’, and detect the resulting activity. It sort of works, and might be of value for ‘locked in’ patients who can’t communicate any other way, but it’s very slow and clumsy otherwise – I don’t think we know for sure whether it even works long-term or whether, for example, the tennis linkage gradually degrades.

What we really want to do is plug directly into consciousness, but we have little idea of how. The brain does not modularise its conscious activity to suit us, and it may be that the only places we can effectively communicate with it are the places where it draws data together for its existing input and output devices. We might be able to feed images into early visual processing or take output from nerves that control muscles, for example. But if we’re reduced to that, how much better is that level of integration going to be than simply using our hands and eyes anyway? We can do a lot with those natural built-in interfaces; simple reading and writing may well be the greatest artificial augmentation the brain can ever get. So although there may be some cool devices coming along at some stage, I don’t think we can look for godlike augmented minds any time soon.

Incidentally, this problem of integration may be one good reason not to worry about robots taking over. If robots ever get human-style motivation, consciousness, and agency, the chances are that they will get them in broadly the way we get them. This suggests they will face the same integration problem that we do; seven-digit multiplication may be intrinsically no easier for them than it is for us. Yes, they will be able to access computers and use computation to help them, but you know, so can we. In fact we might be handier with that old keyboard than they are with their patched-together positronic brain-computer linkage.




  1. Paul Torek says:

    I think you undersell current brain/machine interfaces. Sure, they’re primitive, but the brain is remarkably adaptable. In this example, “an array of electrodes is placed on the surface of the visual cortex, the part of the brain that processes visual information. Delivering electrical pulses here should tell the brain to perceive patterns of light.”

    I’m not worried about robots getting human-style motivation. I’m worried about them getting INhuman-style motivation. Just because “motivation”, “agency”, and so on are conceived with humans as the central example doesn’t mean other beings can’t exhibit them, perhaps even more so than we do. Could a robot be capable of mapping a wide range of if-then causal conditionals and then selecting the one that maximizes the profit of XYZ Corporation? And could the most profitable option be to rapidly manufacture more robots and increase their decision-making roles within the corporation and, ultimately, the global economy? That seems all too possible. And when this turns deeply dystopian, could the robot notice, but fail to care, that “maximize profit” wasn’t REALLY what we wanted when we programmed it, and that we just weren’t thinking far enough ahead?

  2. Peter says:

    I agree about the possibility of feeding images into the visual cortex, though at present the results are surely too rough to be of value to anyone who isn’t blind. But even supposing we could do it perfectly, how much better is that than feeding the same image to the visual cortex by looking at a screen?

    It’s a fascinating idea that there might be different kinds of agency, presumably each with distinct forms of responsibility too – but intuitively it seems impossible to me. I think your robot is just computing and shows no real agency – not that that means it couldn’t be dangerous in the sort of way you suggest!

  3. Scott Bakker says:

    I genuinely fear this stuff, so I’m glad to see you raising the issue, Peter. A couple of years back Midwest Studies in Philosophy asked me to write a short story on this very topic for a special issue on science fiction and philosophy. The story attempts to show how augmentation not only possesses problematic knock-on social effects, but how it actually undermines our existing folk cognitive resources.

    As with AI, I think these debates need to take a hard look at *cognitive ecology,* the way things like social cognition, just for instance, take ancestral environments for granted. The degree that they depend on these environments is the degree to which we should expect things like augmentation will prove disastrous in the long run.

  4. Peter Martin says:

    Speaking and listening, or reading and typing, already tap directly into our brain at the level of consciousness. There is no need to connect physically into the wetware.

  5. David Duffy says:

    I’m far less pessimistic. We already have large differences between humans in terms of cognitive abilities and experiences – for example a recent Conversation article reminded me of variation in terms of intensity and usability of mental imagery. We don’t worry that those with a naturally good memory have an impaired humanity compared to vague people like myself.

  6. Scott Bakker says:

    David: “We already have large differences between humans in terms of cognitive abilities and experiences – for example a recent Conversation article reminded me of variation in terms of intensity and usability of mental imagery. We don’t worry that those with a naturally good memory have an impaired humanity compared to vague people like myself.”

    Since these differences belong to our ancestral cognitive ecology, I’m not sure how they provide any basis for optimism. The kinds of differences on the table here have no natural historical precedent. Check out the “Crash Space” piece, and tell me who you think is in the right, the augmented party-goers or the all-natural protester. Who has ‘agency’ in this scenario?

    The point of the story is to show how augmentation scrambles the reader’s own sociocognitive capacities simply by changing what all human sociocognition takes for granted: the possession of shared neurophysiologies.

  7. David Duffy says:

    Hi Scott. Yes, I have previously read and enjoyed this story. Could it have been set in the 1960s, with the characters modifying their mental states using the technologies of that era? Almost. My point is that we already have a really broad range of mental states across different cultures – consider how cerebral functional organisation shifts in written versus oral societies, or in musicians, or religious ecstatics. The psychedelic movement was making it easy for anyone to get into states of consciousness that otherwise took years of work.

  8. Michael Murden says:

    David – the thing is, even with an imagination as fertile as Scott’s, we can’t know how different from what we’re used to technology might allow us to become. How might our Australopithecus ancestors from two million years ago have felt if we plopped them down in modern London? Nothing that fellow could have done back then could have put him into the state of consciousness of a modern human. If technology could make people who are as different from current humans as current humans are from Australopithecus, and do it in the next twenty or thirty years, then current humans are in a whole heap of trouble.
