More cortex? No thanks…

Get ready for your neocortex extension, says Ray Kurzweil; that big neocortex is what made the mammals so much greater than the reptiles and soon you’ll be able to summon up additional cortical capacity from the cloud to help you think up clever things to say. Think of the leap forward that will enable!

There is a lot to be said about the idea of artificial cortex extension, but Kurzweil doesn’t really address the most interesting questions: whether it is actually possible, how it would work, how it would affect us, and what it would be like. I suspect, in fact, that lurking at the back of Kurzweil’s mind is the example of late twentieth-century personal computers. Memory, and the way it was configured, very often made a huge difference in those days, and the paradigm of seeing a performance transformation when you slotted in a new stick of RAM lives on in the recollection of those of us old enough to have been frustrated by not having enough of it to play the latest game.

We’re not really talking about memory here, but it’s worth noting that the main problem with the human variety is not really capacity. We seem to retain staggering amounts of information; the real problem is that it is unreliable and hard to access. If we could reliably bring up any of the contents of our memory at will, a lot of problems would go away. The real merit of digital storage is not so much its large capacity as the fact that it works in a different way: it doesn’t get confabulated, and calling it up is (or should be) straightforward.
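To make the difference concrete, here is a toy sketch (my own illustration, not anything Kurzweil proposes): digital storage is exact key-value lookup that either succeeds or fails cleanly, whereas something closer to human recall returns the nearest association it can find, confidently, even when it is wrong.

```python
# Toy contrast: exact digital lookup vs. association-style recall.
# Purely illustrative; the "fuzzy" side is a caricature of human memory.
import difflib

facts = {
    "capital of France": "Paris",
    "capital of Finland": "Helsinki",
}

def digital_recall(key: str) -> str:
    # Exact retrieval: the stored answer or an explicit failure.
    return facts.get(key, "no such record")

def humanish_recall(cue: str) -> str:
    # Associative retrieval: return whatever stored cue looks closest,
    # with no warning that the match may be wrong (confabulation).
    nearest = difflib.get_close_matches(cue, facts, n=1, cutoff=0.0)
    return facts[nearest[0]]

print(digital_recall("capital of Frankia"))   # -> no such record
print(humanish_recall("capital of Frankia"))  # -> Paris (a confident guess)
```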

I think that basic operating difference applies generally; our mind holds content in a misty, complex way, and that’s one reason why we benefit from the simpler and more rigorous operation of computers. Given the difference, how would computer cortex interface with actual brains? One possibility, of course, is that even if it is plumbed in, so to speak, the computer cortex stays separate, and responds to queries from us in more or less the way that Google answers our questions now. If that’s the way it works, then the advantages of having the digital stuff wired to your brain are relatively minor; in many ways we might as well go on using the interfaces we are already equipped with (hands, eyes, voices) together with keyboards and screens. Those existing output devices are already connected to mental systems which convert the vague and diffuse content of our inner minds into the sort of sharp-edged propositions we need to deal with computers, or simply with external reality. Indeed, consciousness itself might very well be essentially part of that specifying and disambiguating process. If we still want an internal but separate query facility, we’re going to have to build a new interface within the brain: attempts at electric telepathy to date generally seem to have relied on asking the subject to think a particular thought which can be picked up and then used as the basis of signalling, a pretty slow and clumsy business.
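Just to give a feel for how slow, here is a back-of-the-envelope calculation; the figures are my own rough assumptions, not drawn from any particular experiment:

```python
# Rough throughput of "think a designated thought to signal" telepathy.
# Every figure here is an illustrative assumption, not a measured value.
SECONDS_PER_BIT = 5        # one reliably detected evoked thought per ~5 s
BITS_PER_CHARACTER = 5     # enough to index a 26-ish letter alphabet

def seconds_to_send(message: str) -> float:
    return len(message) * BITS_PER_CHARACTER * SECONDS_PER_BIT

message = "hello world"
print(f"{seconds_to_send(message) / 60:.1f} minutes")  # -> 4.6 minutes
```

At rates like that, asking your cortical extension anything non-trivial would be slower than typing the question into a search box.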

I’m sure that isn’t what Kurzweil had in mind at all: he surely expects the cortical extension to integrate fully with the biological bits, so that we don’t need to formulate queries or anything like that. But how? Unlike RAM, cortex does not rely merely on capacity for its effectiveness, but also on the way it is organised. How neurons are wired together is an essential functional feature – the brain may well have the most exquisitely detailed organisation of any item in the cosmos. Plugging in a lot of extra simulated neurons might lead to simulated epilepsy, to a horrid mental cacophony, or to a general loss of focus. Just to take one point, the brain is conspicuously divided into two hemispheres; we’re still not really sure why, but it would be bold to assume that there is no particular reason. Which hemisphere do we add our extended cortex to? Does adding it to one unbalance us in some way? Do we add it equally to both, or create a third or even a fourth hemisphere, with new versions of the corpus callosum, the bundle of fibres that ties the original halves together?

There’s a particular worry about all that because notoriously the bit of our brain that does the talking is in one hemisphere. What if the cortical extension took over that function and basically became the new executive boss, suppressing the original lobes? We might watch in impotent horror as a zombie creature took over and began performing an imitation of us; worse than Being John Malkovich. Or perhaps we wouldn’t mind or notice; and perhaps that would be even worse. Could we switch the extension back off without mental damage? Could we sleep?

I say a zombie creature, but wouldn’t it ex hypothesi be just like us? I doubt whether anything built out of existing digital technology could have the same qualities as human neurons. Digital capacity is generic: switch one chip for another and it makes no difference at all; but nerves and the brain are very particular and full of minutely detailed differences. I suspect that this complex particularity is an important part of what we are; if so, the extended part of our cortex might lack selfhood or qualia. How would that feel? Would phenomenal experience click off altogether as soon as the extension was switched on, or would we suffer weird deficits whereby certain things or certain contexts seemed normal while others displayed a zombie-style lack of phenomenal aspect? If we could experience qualia in the purely chippy part of our neocortex, then at last we could solve the old problem of whether your red is the same as mine, simply by moving the relevant extension over to you; another consideration that leads me, by reductio, to think that digital cortex couldn’t really do qualic experience.

Suppose it’s all fine, suppose it all works well: what would extra cortex do for us? Kurzweil, I think, assumes that the more the better, but there is such a thing as enough, and it may be that the gains level out after a while (just as adding a bit more memory doesn’t now really transform the way your word processor works, if it worked at all to begin with). In fairness it does look as though evolution has gone to great lengths to give us as much neocortex as is possible within the existing human design, which suggests a bit more wouldn’t hurt. It’s not easy to say in a few words what the cortex does, but I should say the extra-large human helping gives us a special skill at recognising and dealing with new levels of abstraction: higher-level concepts less immediately attached to the real world around us. There aren’t that many fields of human endeavour where a greatly enhanced capacity in that respect would be particularly useful. It might well make us all better computer programmers; it would surely enhance our ability to tackle serious mathematical theory; but transforming the destiny of the species, bringing on the reign of the transhumans, seems too much to expect.

So it’s a polite ‘no’ from me, but if it ever becomes feasible I’ll be keen to know how the volunteers – the brave volunteers – get on.

Preparing the triumph of brute force?

The Guardian had a piece recently which was partly a profile of Ray Kurzweil, and partly about the way Google seems to have gone on a buying spree, snapping up experts on machine learning and robotics – with Kurzweil himself made Director of Engineering.

The problem with Ray Kurzweil is that he is two people. There is Ray Kurzweil the competent and genuinely gifted innovator, a man we hear little from; and then there’s Ray Kurzweil the motor-mouth, prophet of the Singularity, aspirant immortal, and gushing fountain of optimistic predictions. The Guardian piece praises his record of prediction, rather oddly quoting in support his prediction that by the year 2000 paraplegics would be walking with robotic leg prostheses – something that in 2014 has still not happened. That perhaps does provide a clue to the Kurzweil method: if you issue thousands of moderately plausible predictions, some will pay off. A doubtless-apocryphal story has it that at AI conferences people play the Game of Kurzweil. Players take turns to offer a Kurzweilian prediction (by 2020 there will be a restaurant where sensors sniff your breath and the ideal meal is got ready without you needing to order; by 2050 doctors will routinely use special machines to selectively disable traumatic memories in victims of post-traumatic stress disorder; by 2039 everyone will have an Interlocutor, a software agent that answers the phone for us, manages our investments, and arranges dates for us… we could do this all day, and Kurzweil probably does). The winner is the first person to sneak in a prediction of something that has in fact happened already.

But beneath the froth is a sharp and original mind which it would be all too easy to underestimate. Why did Google want him? The Guardian frames the shopping spree as being about bringing together the best experts and the colossal data resources to which Google has access. A plausible guess would be that Google wants to improve its core product dramatically. At the moment Google answers questions by trying to provide a page from the web where some human being has already given the answer; perhaps the new goal is technology that understands the question so well that it can put together its own answer, gathering and shaping selected resources in very much the way a human researcher working on a bespoke project might do.

But perhaps it goes a little further: perhaps they hope to produce something that will interact with humans in a human-like way. A piece of software like that might well be taken to have passed the Turing test, which in turn might be taken to show that it was, to all intents and purposes, a conscious entity. Of course, if it wasn’t conscious, that might be a disastrous outcome: the nightmare scenario feared by some, in which our mistake leads us to absurdly award the software human rights, and/or to feel happier about denying them to human beings.

It’s not very likely that the hypothetical software (and we must remember that this is the merest speculation) would have even the most minimal forms of consciousness. We might take the analogy of Google Translate: a hugely successful piece of kit, but one that produces its translations with no actual understanding of the texts or even the languages involved. Although highly sophisticated, it is in essence a ‘brute force’ solution; what makes it work is the massive power behind it and the massive corpus of texts it has access to. It seems quite possible that with enough resources we might now be able to produce a credible brute-force winner of the Turing test: no attempt to fathom the meanings or to introduce counterparts of human thought, just a massive repertoire of canned responses, so vast that it gives the impression of fully human-style interaction. Could it be that Google is assembling a team to carry out such a project?
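As a caricature of what such a contender might look like under the hood (my own sketch; a real attempt would need billions of stored exchanges where this has three), consider pure nearest-match retrieval with no trace of understanding:

```python
# Toy "brute force" conversationalist: canned responses, no comprehension.
# The three-entry corpus is a stand-in for the massive repertoire the
# real thing would need; the matching is deliberately mindless.

canned = {
    "how are you": "Can't complain. Yourself?",
    "what do you think of ray kurzweil": "Bold predictions; patchy record.",
    "do you really understand me": "As well as I understand anything.",
}

def overlap(a: str, b: str) -> int:
    # Crude similarity: count of shared words. No syntax, no semantics.
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(utterance: str) -> str:
    # Pick the stored prompt sharing the most words; return its response.
    best = max(canned, key=lambda prompt: overlap(prompt, utterance))
    return canned[best]

print(reply("So, do you understand me?"))  # -> As well as I understand anything.
```

Scale the corpus up far enough and, the thought goes, the seams stop showing; yet nothing about meaning has changed.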

Well, it could be. However, it could also be that cracking true thought is actually on the menu. Vaughan Bell suggests that the folks recruited by Google are honest machine-learning types with no ambitions in the direction of strong AI. Yet, he points out, there are also names associated with the trendy topic of deep learning. The neural networks (but, y’know, deeper) which deep learning uses just might be candidates for modelling human neuron-style cognition. Unfortunately it seems quite possible that if consciousness were created by deep learning methods, nobody would be completely sure how it worked or whether it was real consciousness or not. That would be a lamentable outcome: it’s bad enough to have robots that naive users think are people; having robots and genuinely not knowing whether they’re people or not would be deeply problematic.
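For anyone who hasn’t met the term, ‘deep’ really does just mean more stacked layers of the same neuron-ish units. A bare-bones sketch (a generic illustration with random weights, emphatically not anything Google or its recruits are known to be building):

```python
# Minimal deep (multi-layer) network forward pass in plain numpy.
# Weights are random, so it computes nothing meaningful until trained;
# the point is only the shape of the thing: layers of simple units.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 16, 4]   # more hidden layers = "deeper"

weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x: np.ndarray) -> np.ndarray:
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)   # ReLU non-linearity at each layer
    return x @ weights[-1]           # linear output layer

print(forward(rng.standard_normal(8)))  # four numbers, signifying nothing
```

Whether stacking enough of these layers ever amounts to cognition, let alone consciousness, is exactly the question nobody can settle from the outside.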

Probably nothing like that will happen: maybe nothing will happen. The Guardian piece suggests Kurzweil is a bit of an outsider: I don’t know about that. Making extravagantly optimistic predictions while only actually delivering much more modest incremental gains? He sounds like the personification of the AI business over the years.