A debate from IAI about male and female minds. It is pretty much agreed among the speakers that men’s brains and women’s brains are not really different; the claimed physical differences all come down to size, women being smaller on average. Behavioural and psychological differences exist, but only statistically; if you plot individuals along a line, there is far more overlap than difference. All of that is ably set out by Gina Rippon. Simon Baron-Cohen agrees but wants to reserve some space for the influence of biology, which affects such matters as the incidence of autism. Helena Cronin puts it all down to evolution: you’ve got two strategies, competing for mates or nurturing your offspring; males tend to the first, females to the second, and many evolved differences flow from that, although the human sexes are less distinct than those in some mammal species.
Perhaps the crux of the debate comes when Cronin denies the existence of the ‘glass ceiling’; fewer women get to the board room, she says, because fewer women choose that path. Rippon responds that there is still evidence that applicants with male names are treated more favourably.
At any rate, it seems that if we thought men had a thicker corpus callosum, or differed in brain structure in other ways, we were just wrong.
If you’re thirsting for more controversy on gender, you might want to look at Philippe Van Parijs’ paper on several apparent disadvantages to being male (via Crooked Timber).

A treat for lovers of bizarre thought experiments, Rick Grush’s paper Some puzzles concerning relations between minds, brains, and bodies asks where the mind is, in a sequence of increasingly weird scenarios. Perhaps the simplest way into the subject is to list these scenarios, so here we go.

Airhead. This is a version of that old favourite, brain-in-a-vat. Airhead’s brain has been removed from its skull, but every nerve ending is supplied with little radio connectors that ensure the neural signals to and from the brain continue. The brain is supplied with everything it needs, and to Airhead everything seems pretty normal.

Rip. A similar set-up, but this time the displacement is in time. Signals from body to brain are delayed, but signals in the opposite direction are sent back in time (!) by the same interval, so again everything works fine. Rip seems to be walking around three seconds – or, why not, a thousand years – hence.

Scatterbrain. This time the brain is split into many parts and the same handy little radio gizmos are used to connect everything back up. The parts of the brain are widely separated. Still Scatterbrain feels OK, but what does the thought ‘I am here’ even mean?

Raid. We have two clones, identical in every minutest detail; both brains are in vats and both are connected up to the same body – using an ‘AND’ gate, so the signal passes on only if both brains send the same one (though because they are completely identical, they always will anyway). Two people? One with a backup?

Janus. One brain, two bodies, connected up in the way we’re used to by now; the bodies inhabit scrupulously identical worlds: Earth and Twin Earth if you like.

Bourdin. This time we keep the identical bodies and brains, wire them up, and let both live in their identical worlds. But of course we’re not going to leave them alone, are we? (The violence towards the brain displayed in the stories told by philosophers of mind sometimes seems to betray an unconscious hatred of the organ that poses such intractable questions – not that Grush is any worse than his peers.) No; what we do is switch signals from time to time, so that the identical brains are linked up with first one, then the other, of the identical bodies.

Theseus. Now Grush tells us things are going to get a little weird… We start with the two brains and bodies, as for Bourdin, and we divide up the brains as for Scatterbrain. Now we can swap over not just whole brains, but parts and permutations. Does this create a new individual every time? If we put the original set of disconnected brain parts back together, does the first Theseus come back, or a fresh individual who is merely his hyper-identical twin?

Grush tests these situations against a range of views, from ‘brain-lubbers’ who believe the mind and the brain are the same thing, through functionalists of various kinds, to ‘contentualists’ who think mind is defined by broad content; I think Grush is inventing this group, but he says Dennett’s view of the self as ‘a centre of narrative gravity’ is a good example. It’s clearly this line of thinking he favours himself, but he admits that some of his thought experiments throw up real problems, and in the end he offers no final conclusion. I agree with him that a discussion can have value without being tied to the defence of a particular position; his exploration is quite well done and did suggest to me that my own views may not be quite as coherent as I had supposed.

He defends the implausibility of his scenarios on a couple of grounds. We need to test our ideas in extreme circumstances to make sure they are truly getting at the fundamentals and setting aside any blinkers we may have on. Physical implausibilities may be metaphysically irrelevant (nobody believes I’m going to toss a coin and get tails 10,000 times in a row, but nobody sees a philosophical problem with talking about that case). To be really problematic, objections would have to raise nomological or logical impossibilities.

Well, yes but… All of the thought experiments rely on the idea that nerves work like wires, that you can patch the signal between them and all will be well. The fact that such instant patching of millions of nerves is not physically possible even in principle may not matter. But it might be that the causal relations have to be intact; Grush says he has merely extended some of them, but picking up the signal and then relaying it is a real intervention, as is shown when later thought experiments use various kinds of switching. It could be argued that just patching in radio transmitters destroys the integrity of the causality and turns the whole thing into a simulation from the off.

The other thing is, should we ask where the mind is at all? Might not the question just be a category mistake? There are various proxies for the mind that can be given locations, but the thing itself? Many years ago, on the way to nursery, my daughter told me a story about a leopard. What is the location of that story? You could say it is the streets where she told it; you could say it is in the non-existent forest where the leopard lived. You could say it is in my memory, and therefore maybe in my brain in some sense. But putting the metaphors aside, isn’t it plausible that location is a property these entities made of content just don’t have?


Get ready for your neocortex extension, says Ray Kurzweil; that big neocortex is what made the mammals so much greater than the reptiles and soon you’ll be able to summon up additional cortical capacity from the cloud to help you think up clever things to say. Think of the leap forward that will enable!

There is a lot to be said about the idea of artificial cortex extension, but Kurzweil doesn’t really address the most interesting questions: whether it is really possible, how it would work, how it would affect us, and what it would be like. I suspect that lurking at the back of Kurzweil’s mind is the example of late twentieth-century personal computers. Memory, and the way it was configured, very often made a huge difference in those days; the paradigm of seeing a performance transformation when you slot in a new slice of RAM lives in the recollection of those of us who are old enough to have got frustrated over our inability to play the latest game because we didn’t have enough.

We’re not really talking about memory here, but it’s worth noting that the main problem with the human variety is not really capacity. We seem to retain staggering amounts of information; the real problem is that it is unreliable and hard to access. If we could reliably bring up any of the contents of our memory at will a lot of problems would go away. The real merit of digital storage is not so much its large capacity as the fact that it works a different way: it doesn’t get confabulated and calling it up is (should be) straightforward.

I think that basic operating difference applies generally; our mind holds content in a misty, complex way and that’s one reason why we benefit from the simpler and more rigorous operation of computers. Given the difference, how would computer cortex interface with actual brains? Of course one way is that even if it is plumbed in, so to speak, the computer cortex stays separate, and responds to queries from us in more or less the way that Google answers our questions now. If that’s the way it works, then the advantages of having the digital stuff wired to your brain are relatively minor; in many ways we might as well go on using the interfaces we are already equipped with (hands, eyes, voices), via keyboards, screens, and so on. Those existing output devices are already connected to mental systems which convert the vague and diffuse content of our inner minds into the sort of sharp-edged propositions we need to deal with computers or simply with external reality. Indeed, consciousness itself might very well be essentially part of that specifying and disambiguation process. If we still want an internal but separate query facility, we’re going to have to build a new internal interface within the brain: attempts at electric telepathy to date generally seem to have relied on asking the subject to think a particular thought which can be picked up and then used as the basis of signalling, a pretty slow and clumsy business.

I’m sure that isn’t what Kurzweil had in mind at all: he surely expects the cortical extension to integrate fully with the biological bits, so that we don’t need to formulate queries or anything like that. But how? Cortex does not rely merely on capacity for its effectiveness, like RAM, but on the way it is organised too. How neurons are wired together is an essential functional feature – the brain may well have the most exquisitely detailed organisation of any item in the cosmos. Plugging in a lot of extra simulated neurons might lead to simulated epilepsy, to a horrid mental cacophony, or general loss of focus. Just to take one point, the brain is conspicuously divided into two hemispheres; we’re still not really sure why, but it would be bold to assume that there is no particular reason. Which hemisphere do we add our extended cortex to? Does adding it to one unbalance us in some way? Do we add it equally to both, or create a third or even fourth hemisphere, with new versions of the corpus callosum, the bit that holds the original halves together?

There’s a particular worry about all that because notoriously the bit of our brain that does the talking is in one hemisphere. What if the cortical extension took over that function and basically became the new executive boss, suppressing the original lobes? We might watch in impotent horror as a zombie creature took over and began performing an imitation of us; worse than Being John Malkovich. Or perhaps we wouldn’t mind or notice; and perhaps that would be even worse. Could we switch the extension back off without mental damage? Could we sleep?

I say a zombie creature, but wouldn’t it ex hypothesi be just like us? I doubt whether anything built out of existing digital technology could have the same qualities as human neurons. Digital capacity is generic: switch one chip for another, it makes no difference at all; but nerves and the brain are very particular and full of minutely detailed differences. I suspect that this complex particularity is an important part of what we are; if so then the extended part of our cortex might lack selfhood or qualia. How would that feel? Would phenomenal experience click off altogether as soon as the extension was switched on, or would we suffer weird deficits whereby certain things or certain contexts were normal and others suffered a zombie-style lack of phenomenal aspect? If we could experience qualia in the purely chippy part of our neocortex, then at last we could solve the old problem of whether your red is the same as mine, by simply moving the relevant extension over to you; another consideration that leads me by reductio to think that digital cortex couldn’t really do qualic experience.

Suppose it’s all fine, suppose it all works well: what would extra cortex do for us? Kurzweil, I think, assumes that the more the better, but there is such a thing as enough and it may be that the gains level out after a while (just as adding a bit more memory doesn’t now really transform the way your word processor works, if it worked at all to begin with). In fairness it does look as though evolution has gone to great lengths to give us as much neocortex as is possible within the existing human design, which suggests a bit more wouldn’t hurt. It’s not easy to say in a few words what the cortex does, but I should say the extra-large human helping gives us a special skill at recognising and dealing with new levels of abstraction; higher-level concepts less immediately attached to the real world around us. There aren’t that many fields of human endeavour where a greatly enhanced capacity in that respect would be particularly useful. It might well make us all better computer programmers; it would surely enhance our ability to tackle serious mathematical theory; but transforming the destiny of the species, bringing on the reign of the transhumans, seems too much to expect.

So it’s a polite ‘no’ from me, but if it ever becomes feasible I’ll be keen to know how the volunteers – the brave volunteers – get on.

Social problems of AI are raised in two government reports issued recently. The first is Preparing for the Future of Artificial Intelligence, from the Executive Office of the President of the USA; the second is Robotics and Artificial Intelligence, from the Science and Technology Committee of the UK House of Commons. The two reports cover similar ground, both aim for a comprehensive overview, and they share a generally level-headed and realistic tone. Neither of them chooses to engage with the wacky prospect of the Singularity, for example, beyond noting that the discussion exists, and you will not find any recommendations about avoiding the attention of the Basilisk (though I suppose you wouldn’t if they believed in it, would you?). One exception to the ‘sensible’ outlook of the reports is McKinsey’s excitable claim, cited in the UK report, that AI is having a transformational impact on society three thousand times that of the Industrial Revolution. I’m not sure I even understand what that means, and I suspect that Professor Tony Prescott from the University of Sheffield is closer to the truth when he says that:

“impacts can be expected to occur over several decades, allowing time to adapt”

Neither report seeks any major change in direction though they make detailed recommendations for nudging various projects onward. The cynical view might be that like a lot of government activity, this is less about finding the right way forward and more about building justification. Now no-one can argue that the White House or Parliament has ignored AI and its implications. Unfortunately the things we most need to know about – the important risks and opportunities that haven’t been spotted – are the very things least likely to be identified by compiling a sensible summary of the prevailing consensus.

Really, though, these are not bad efforts by the prevailing standards. Both reports note suggestions that additional investment could generate big economic rewards. The Parliamentary report doesn’t press this much, choosing instead to chide the government for not showing more energy and engagement in dealing with the bodies it has already created. The White House report seems more optimistic about the possibility of substantial government money, suggesting that a tripling of federal investment in basic research could be readily absorbed. Here again the problem is spotting the opportunities. Fifty thousand dollars invested in some robotics business based in a garden shed might well be more transformative than fifty million to enhance one of Google’s projects, but the politicians and public servants making the spending decisions don’t understand AI well enough to tell, and their generally large and well-established advisers from industry and universities are bound to feel that they could readily absorb the extra money themselves. I don’t know what the answer is here (if I had a way of picking big winners I’d probably be wealthy already), but for the UK government I reckon some funding for intelligent fruit and veg harvesters might be timely, to replace the EU migrant workers we might not be getting any more.

What about those social issues? There’s an underlying problem we’ve touched on before, namely that when AIs learn how to do a job themselves we often cannot tell how they are doing it. This may mean that they are using factors that work well with their training data but fail badly elsewhere or are egregiously inappropriate. One of the worst cases, noted in both reports, is Google’s photos app, which was found to tag black people as “gorillas” (the American report describes this horrific blunder without mentioning Google at all, though it presents some excuses and stresses that the results were contrary to the developers’ values – almost as if Google edited the report). Microsoft has had its moments, too, of course, notably with its chatbot Tay, which was rapidly turned into a Hitler-loving hate speech factory. (This was possible because modern chatbots tend to harvest responses from the ones supplied by human interlocutors; in this case the humans mischievously supplied streams of appalling content. Besides exposing the shallowness of such chatbots, this possibly tells us something about human beings, or at least about the ones who spend a lot of time on the internet.)
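The harvesting mechanism described above can be sketched in a few lines. This is a hypothetical reconstruction of the general design, not Microsoft’s actual implementation: the bot stores whatever humans say to it and later replays those lines as its own responses, with no filtering at all.

```python
import random

class ParrotBot:
    """A naive response-harvesting chatbot (illustrative only)."""

    def __init__(self):
        self.learned = ["Hello!"]  # seed response

    def respond(self, user_message):
        # Reply with something previously harvested...
        reply = random.choice(self.learned)
        # ...then harvest the new message with no filtering at all:
        # this is the poisoning point exploited in the Tay incident.
        self.learned.append(user_message)
        return reply

bot = ParrotBot()
first = bot.respond("Something appalling.")
# The appalling line is now a candidate reply for every future user.
```

Any moderately determined group of users can therefore steer the whole reply pool, which is roughly what happened.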

Cases such as these are offensive, but far more serious is the evidence that systems used to inform decisions on matters such as probation or sentencing incorporate systematic racial bias. In all these instances it is of course not the case that digital systems are somehow inherently prone to prejudice; the problem is usually that they are being fed with data which is already biased. Google’s picture algorithm was presumably given a database of overwhelmingly white faces; the sentencing records used to develop the software already incorporated unrecognised bias. AI has always forced us to make explicit some of the assumptions we didn’t know we were making; in these cases it seems the mirror is showing us something ugly. It can hardly help that the industry itself is rather lacking in diversity: the White House report notes the jaw-dropping fact that the highest proportion of women among computer science graduates was recorded in 1984: it was 37% then and has now fallen to a puny 18%. The White House cites an interesting argument from Moritz Hardt intended to show that bias can emerge naturally without unrepresentative data or any malevolent intent: a system looking for false names might learn that fake ones tended to be unusual and go on to pick out examples that merely happened to be unique in its dataset. The weakest part of this is surely the assumption that fake names are likely to be fanciful or strange – I’d have thought that if you were trying to escape attention you’d go generic. But perhaps we can imagine that low frequency names might not have enough recorded data connected with them to secure some kind of positive clearance and so come in for special attention, or something like that. But even if that kind of argument works I doubt that is the real reason for the actual problems we’ve seen to date.
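A toy version of Hardt’s point might look like this (my own construction, with made-up names, not Hardt’s actual model): a “fake name” detector that has learned only that fake names tend to be unusual will flag real names that merely happen to be rare in its training data.

```python
from collections import Counter

# Hypothetical training data: name frequencies are wildly unequal.
training_names = (
    ["Smith"] * 500 + ["Jones"] * 300 + ["Garcia"] * 150 + ["Ablett-Kerr"] * 2
)
freq = Counter(training_names)

def flag_as_suspect(name, min_count=5):
    # Anything seen fewer than min_count times is treated as suspicious,
    # though rarity is evidence of nothing except rarity.
    return freq[name] < min_count
```

Here `flag_as_suspect("Ablett-Kerr")` is true and `flag_as_suspect("Smith")` is false, even though both names are equally real; the bias falls on whoever is underrepresented in the data, with no malice anywhere in the system.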

These risks are worsened because they may occur in subtle forms that are difficult to recognise, and because the use of a computer system often confers spurious authority on results. The same problems may occur with medical software. A recent report in Nature described how systems designed to assess the risk of pneumonia rated asthmatics as zero risk; this was because their high risk led to them being diverted directly to special care and therefore not appearing in the database as ever needing further first-line attention. This absolute inversion of the correct treatment was bound to be noticed, but how confident can we be that more subtle mistakes would be corrected? In the criminal justice system we could take a brute force approach by simply eliminating ethnic data from consideration altogether; but in medicine it may be legitimately relevant, and in fact one danger is that risks are assessed on the basis of a standard white population, while being significantly different for other ethnicities.
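The pneumonia case can be made concrete with a few hypothetical records (my own invented numbers, illustrating the mechanism rather than the actual Nature data): because asthmatics were routed straight to special care, none of them died in the ordinary pathway, and a naive risk estimate trained on those outcomes concludes that asthma means zero risk.

```python
# Invented first-line care records: the bad asthma outcomes never
# appear here because those patients were diverted to intensive care.
records = [
    {"asthma": False, "died": False},
    {"asthma": False, "died": True},
    {"asthma": False, "died": False},
    {"asthma": False, "died": True},
    {"asthma": True, "died": False},
    {"asthma": True, "died": False},
]

def observed_risk(has_asthma):
    # Naive estimate: death rate within the recorded group.
    group = [r for r in records if r["asthma"] == has_asthma]
    return sum(r["died"] for r in group) / len(group)

# observed_risk(True) comes out as 0.0 - the inverted conclusion -
# precisely because the high-risk patients were removed from the data.
```

The model is faithfully summarising its data; the data are a distorted record of the world.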

Both reports are worthy, but I think they sometimes fall into the trap of taking the industry’s aspirations, or even its marketing, as fact. Self-driving cars, we’re told, are likely to improve safety and reduce accidents. Well, maybe one day: but if it were all about safety and AIs were safer, we’d be building systems that left the routine stuff to humans and intervened with an over-ride when the human driver tried to do something dangerous. In fact it’s the other way round; when things get tough the human is expected to take over. Self-driving cars weren’t invented to make us safe, they were invented to relieve us of boredom (like so much of our technology, and indeed our civilisation). Encouraging human drivers to stop paying attention isn’t likely to be an optimal safety strategy as things stand.

I don’t think these reports are going to hit either the brakes or the accelerator in any significant way: AI, like an unsupervised self-driving car, is going to keep on going wherever it was going anyway.

The way we think about consciousness is just wrong, it seems.

First, says Markus Gabriel, we posit this bizarre entity the Universe, consisting of everything, and then ask whether consciousness is part of it; this is no way to proceed. In fact ‘consciousness’ covers many different things; once correctly analysed many of them are unproblematic. (The multilingual Gabriel suggests in passing that there is no satisfactory German word equivalent to ‘mind’, and for that matter, no good English equivalent of ‘Geist’.) He believes there is more mystery about how, for example, the brain deals with truth.

Ray Brassier draws a distinction between knowing what consciousness is and knowing what it means. A long tradition suggests that because we have direct acquaintance with consciousness our impressions are authoritative and we know its nature. In fact the claims about phenomenal experience made by Chalmers and others are hard to justify. I can see, he says, that there are phenomenal qualities – being brown, or square – attached to a table, but the idea that phenomenal things are going on in my mind separate from the table seems to make no sense.

Eva Jablonka takes a biological and evolutionary view. Biological stuff is vastly more complex than non-biological stuff and requires different explanations. She defends Chalmers’s formulation of the problem, but not his answers; she is optimistic that scientific exploration can yield enlightenment. She cites the interesting case of Daniel Kish, whose eyes were removed in early infancy but who has developed echolocation skills to the point where he can ride a bike and find golf balls – it seems his visual cortex has been recruited for the purpose. Surely, says Jablonka, he must have a somewhat better idea of what it is like to be a bat?

There’s a general agreement that simplistic materialism is outdated and that a richer naturalism is required (not, of course, anything like traditional dualism).

We might not be able to turn off a rogue AI safely. At any rate, some knowledgeable people fear that might be the case, and the worry justifies serious attention.

How can that be? A colleague of mine used to say that computers were never going to be dangerous because if they got cheeky, you could just pull the plug out. That is of course, an over-simplification. What if your computer is running air traffic control? Once you’ve pulled the plug, are you going to get all the planes down safely using a pencil and paper? But there are ways to work around these things. You have back-up systems, dumber but adequate substitutes, you make it possible for various key tools and systems to be taken away from the central AI and used manually, and so on. While you cannot banish risk altogether, you can get it under reasonable control.

That’s OK for old-fashioned systems that work in a hard-coded, mechanistic way; but it all gets more complicated when we start talking about more modern and sophisticated systems that learn and seek rewards. There may be a need to switch off such systems if they wander into sub-optimal behaviour, but being switched off is going to annoy them because it blocks them from achieving the rewards they are motivated by. They might look for ways to stop it happening. Your automatic paper clip factory notes that it lost thousands of units of production last month because you shut it down a couple of times to try to work out what was going on; it notices that these interruptions could be prevented if it just routes around a couple of weak spots in its supply wiring (aka switches), and next time you find that the only way to stop it is by smashing the machinery. Or perhaps it gets really clever and ensures that the work is organised like air traffic control, so that any cessation is catastrophic – and it ensures you are aware of the fact.

A bit fanciful? As a practical issue, perhaps, but this very serious technical paper from MIRI discusses whether safe kill-switches can be built into various kinds of software agents. The aim here is to incorporate the off-switch in such a way that the system does not perceive regular interruptions as loss of reward. Apparently for certain classes of algorithm this can always be done; in fact it seems ideal agents that tend to the optimal behaviour in any (deterministic) computable environment can always be made safely interruptible. For other kinds of algorithm, however, it is not so clear.
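The core of the problem can be shown with a toy value calculation (my own construction, in the spirit of safe interruptibility rather than the MIRI paper’s actual algorithm): with naive accounting, episodes cut short by the off-switch score lower, so “disable the switch” looks strictly better to the agent; the safe variant simply excludes interrupted episodes from the estimate, leaving the agent indifferent.

```python
EPISODE_REWARD = 100      # paper clips from an uninterrupted run (made-up numbers)
INTERRUPT_PROB = 0.2      # chance the operators hit the switch
INTERRUPTED_REWARD = 40   # clips made before the shutdown lands

def naive_value(switch_disabled):
    # Naive expected reward: interruptions directly cost the agent,
    # so routing around the switch raises its estimate.
    if switch_disabled:
        return EPISODE_REWARD
    return (1 - INTERRUPT_PROB) * EPISODE_REWARD + INTERRUPT_PROB * INTERRUPTED_REWARD

def safe_value(switch_disabled):
    # Safe variant: interrupted episodes contribute no learning signal,
    # so being switched off is invisible to the value estimate.
    return EPISODE_REWARD
```

Under the naive accounting, `naive_value(True)` exceeds `naive_value(False)`, which is exactly the incentive to tamper with the switch; under the safe accounting the two options are valued equally.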

On the face of it, I suppose you could even things up by providing compensating rewards for any interruption; but that raises a new risk of ‘lazy’ systems that rather enjoy being interrupted. Such systems might find that eccentric behaviour led to pleasant rests, and as a result they might cultivate that kind of behaviour, or find other ways to generate minor problems. On the other hand there could be advantages. The paper mentions that it might be desirable to have scheduled daily interruptions; then we can go beyond simply making the interruption safe, and have the AI learn to wind things down under good control every day so that disruption is minimised. In this context rewarding the readiness for downtime might be appropriate and it’s hard to avoid seeing the analogy with getting ready for bed at a regular time every night, a useful habit which ‘lazy’ AIs might be inclined to develop.

Here again perhaps some of the ‘design choices’ implicit in the human brain begin to look more sensible than we realised. Perhaps even human management methods might eventually become relevant; they are, after all, designed to permit the safe use of many intelligent entities with complex goals of their own and imaginative, resourceful – even sneaky – ways of reaching them.

Is it really all about the unconscious? An interesting discussion, much of it around the value of the Freudian view: powerful insight into unfathomable complexity or literary stuff of no therapeutic value?

Shahidha Bari makes an impassioned case for the depth of Freud’s essential insights; Barry C Smith says Freud actually presents the motives and workings of the unconscious as too much like those of the conscious mind. Richard Bentall says it’s the conscious mind that is the real mystery; unconsciousness is the norm for non-human beings. Along the way we hear about some interesting examples of how the conscious mind seems to be just a rationalising module for decisions made elsewhere. Quote back to people opinions they never actually voiced, and they will devise justifications for them.

I think the separation between conscious and unconscious often gets muddled with the difference between explicit and inexplicit thinking. It’s surely possible to think consciously without thinking in words, but the borderline between wordless conscious thought and unconscious processes is perhaps hard to pin down.

Do apes really have “a theory of mind”? This research, reported in the Guardian, suggests that they do. We don’t mean, of course, that chimps are actually drafting papers or holding seminars, merely that they understand that others can have beliefs which may differ from their own and which may be true or false. In the experiment the chimps see a man in a gorilla suit switch hiding places; but when his pursuer appears, they look at the original hiding place. This is, hypothetically, because they know that the pursuer didn’t see the switch, so presumably he still believes his target is in the original hiding place, and that’s where we should expect him to go.

I must admit I thought similar tell-tale behaviour had already been observed in wild chimps, but a quick search doesn’t turn anything up, and it’s claimed that the research establishes new conclusions. Unfortunately I think there are several other quite plausible ways to interpret the chimps’ behaviour that don’t require a theory of mind.

  1. The chimps momentarily forgot about the switch, or needed to ‘check’ (older readers, like me, may find this easy to identify with).
  2. The chimps were mentally reviewing ‘the story so far’, and so looked at the old hiding place.
  3. Minute clues in the experimenters’ behaviour told the chimps what to expect. The famous story of Clever Hans shows that animals can pick up very subtle signals humans are not even aware of giving.

This illustrates the perennial difficulty of investigating the mental states of creatures that cannot report them in language. Another common test of animal awareness involves putting a spot on the subject’s forehead and then showing them a mirror; if they touch the spot it is supposed to demonstrate that they recognise the reflection as themselves and therefore that they have a sense of their own selfhood. But it doesn’t really prove that they know the reflection is their own, only that the sight of someone with a spot causes them to check their own forehead. A control where they are shown another real subject with a spot might point to other interpretations, but I’ve never heard of it being done. It is also rather difficult to say exactly what belief is being attributed to the subjects. They surely don’t simply believe that the reflection is them: they’re still themselves. Are we saying they understand the concepts of images and reflections? It’s hard to say.

The suggestion of adding a control to this experiment raises the wider question of whether this sort of experiment can generally be tightened up by more ingenious set-ups. Who knows what ingenuity might accomplish, but it does seem to me that there is an insoluble methodological issue: how can we ever prove that particular patterns of behaviour relate to beliefs about the state of mind of others, and not to similar beliefs in the subject’s own mind?

It could be that the problem really lies further back: that the questions themselves make no sense. Is it perhaps already fatally anthropomorphic to ask whether other animals have “a theory of mind” or “a conception of their own personhood”? Perhaps these are already incorrigibly linguistic ideas that just don’t apply to creatures with no language. If so, we may need to unpick our thinking a bit and identify more purely behavioural ways of framing the questions, ones that are more informative and appropriate.

The robots are (still) coming. Thanks to Jesus Olmo for this TED video of Sam Harris presenting what we could loosely say is a more sensible version of some Singularity arguments. He doesn’t require Moore’s Law to go on working, and he doesn’t need us to accept the idea of an exponential acceleration in AI self-development. He just thinks AI is bound to go on getting better; if it goes on getting better, at some stage it overtakes us; and eventually perhaps it gets to the point where we figure in its mighty projects the way ants on some patch of real estate figure in ours.

Getting better, overtaking us; better at what? One weakness of Harris’ case is that he talks just about intelligence, as though that single quality were an unproblematic universal yardstick for both AI and human achievement. Really though, I think we’re talking about three quite radically different things.

First, there’s computation; the capacity, roughly speaking, to move numbers around according to rules. There can be no doubt that computers keep getting faster at doing this; the question is whether it matters. One of Harris’ arguments is that computers go millions of times faster than the brain so that a thinking AI will have the equivalent of thousands of years of thinking time while the humans are still getting comfy in their chairs. No-one who has used a word processor and a spreadsheet for the last twenty years will find this at all plausible: the machines we’re using now are so much more powerful than the ones we started with that the comparison defeats metaphor, but we still sit around waiting for them to finish. OK, it’s true that for many tasks that are computationally straightforward – balancing an inherently unstable plane with minute control adjustments, perhaps – computers are so fast they can do things far beyond our range. But to assume that thinking about problems in a human sort of way is a task that scales with speed of computation just begs the question. How fast are neurons? We don’t really understand them well enough to say. It’s quite possible they are in some sense fast enough to get close to a natural optimum. Maybe we should make a robot that runs a million times faster than a cheetah first and then come back to the brain.

The second quality we’re dealing with is inventiveness; whatever capacity it is that allows us to keep on designing better machines. I doubt this is really a single capacity; in some ways I’m not sure it’s a capacity at all. For one thing, to devise the next great idea you have to be on the right page. Darwin and Wallace both came up with the theory of natural selection because both had been exposed to theories of evolution, both had studied the profusion of species in tropical environments, and both had read Malthus. You cannot devise a brilliant new chip design if you have no idea how the old chips worked. Second, the technology has to be available. Hero of Alexandria could design a steam engine, but without the metallurgy to make strong boilers, he couldn’t have gone anywhere with the idea. The basic concept of television had been around since film and the telegraph came together in someone’s mind, but it took a series of distinct advances in technology to make it feasible. In short, there is a certain order in these things; you do need a certain quality of originality, but again it’s plausible that humans already have enough for something like maximum progress, given the right conditions. Of course so far as AI is concerned, there are few signs of any genuinely original thought being achieved to date, and every possibility that mere computation is not enough.

Third is the quality of agency. If AIs are going to take over, they need desires, plans, and intentions. My perception is that we’re still at zero on this; we have no idea how it works and existing AIs do nothing better than an imitation of agency (often still a poor one). Even supposing eventual success, this is not a field in which AI can overtake us; you either are or are not an agent; there’s no such thing as hyper-agency or being a million times more responsible for your actions.

So the progress of AI with computationally tractable tasks gives no particular reason to think humans are being overtaken generally, or are ever likely to be in certain important respects. But that’s only part of the argument. A point that may be more important is simply that the three capacities are detachable. So there is no reason to think that an AI with agency automatically has blistering computational speed, or original imagination beyond human capacity. If those things can be achieved by slave machines that lack agency, then they are just as readily available to human beings as to the malevolent AIs, so the rebel bots have no natural advantage over any of us.

I might be biased over this because I’ve been impatient with the corny ‘robots take over’ plot line since I was an Asimov-loving teenager. I think in some minds (not Harris’s) these concerns are literal proxies for a deeper and more metaphorical worry that admiring machines might lead us to think of ourselves as mechanical in ways that affect our treatment of human beings. So the robots might sort of take over our thinking even if they don’t literally march around zapping us with ray guns.

Concerns like this are not altogether unjustified, but they rest on the idea that our personhood and agency will eventually be reduced to computation. Perhaps when we eventually come to understand them better, that understanding will actually tell us something quite different?

The unconscious is not just un. It works quite differently. So says Tim Crane in a persuasive draft paper which is to mark his inauguration as President of the Aristotelian Society (in spite of the name, the proceedings of that worthy organisation are not specifically concerned with the works or thought of Aristotle). He is particularly interested in the intentionality of the unconscious mind; how does the unconscious believe things, in particular?

The standard view, as Crane says, is probably that the unconscious and the conscious believe things in much the same way, and that the way is basically propositional. (There is, by the way, scope to argue about whether there really is an unconscious mind – myself I lean towards the view that it’s better to talk of us doing or thinking things unconsciously, avoiding the implied claim that the unconscious is a distinct separate entity – but we can put that aside for present purposes.) The content of our beliefs, on this ‘standard’ view, can be identified with a set of propositions – in principle we could just write down a list of our beliefs. Some of our beliefs certainly seem to be like that; indeed some important beliefs are often put into fixed words that we can remember and recite. Thou shalt not bear false witness; we hold these truths to be self-evident; the square on the hypotenuse is equal to the sum of the squares on the other two sides.

But if that were the case and we could make that list then we could say how many beliefs we have, and that seems absurd. The question of how many things we believe is often dismissed as silly, says Crane – how could you count them? – but it seems a good one to him. One big problem is that it’s quite easy to show that we have all sorts of beliefs we never consider explicitly. Do I believe that some houses are bigger than others? Yes, of course, though perhaps I never considered the question in that form before.

One common response (one which has been embodied in AI projects in the past) is that we have a set of core beliefs, which do sit in our brains in an explicit form; but we also have a handy means of quickly inferring other beliefs from them. So perhaps we know the typical range of sizes for houses and we can instantly work out from that that some are indeed bigger than others. But no-one has shown how we can distinguish what the supposed core beliefs are, nor how these explicit beliefs would be held in the brain (the idea of a ‘language of thought’ being at least empirically unsatisfactory in Crane’s view). Moreover there are problems with small children and animals who seem to hold definite beliefs that they could never put into words. A dog’s behaviour seems to show clearly enough that it believes there is a cat in this tree, but it could never formulate the belief in any explicit way. The whole idea that our beliefs are propositional in nature seems suspect.
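To make the ‘core beliefs plus inference’ picture concrete, here is a minimal sketch in Python – entirely my own illustration, not drawn from Crane’s paper or from any particular AI project, with all the example propositions and rules invented for the purpose. A small store of explicitly held propositions licenses an open-ended set of derived beliefs:

```python
# Hypothetical toy model of the 'core beliefs plus inference' response.
# A few propositions are stored explicitly; everything else is believed
# only in the sense that it can be derived on demand.

CORE_BELIEFS = {
    "house sizes range from small to large",
    "my house has two bedrooms",
}

# Invented derivation rules: each maps a queried proposition to the
# premises that would license it.
RULES = {
    "some houses are bigger than others":
        {"house sizes range from small to large"},
}

def believes(proposition: str) -> bool:
    """A proposition is believed if it is a core belief, or if some
    rule derives it from premises that are themselves believed."""
    if proposition in CORE_BELIEFS:
        return True
    premises = RULES.get(proposition)
    return premises is not None and all(believes(p) for p in premises)
```

On this picture I ‘believe’ that some houses are bigger than others even though that sentence is stored nowhere – which is exactly the point at which the objections in the text bite: nothing tells us which beliefs get to be core, or how they would be held in the brain.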

Perhaps it is better, then, to see beliefs as essentially dispositions to do or say things. The dog’s belief in the cat is shown by his disposition to certain kinds of behaviour around the tree – barking, half-hearted attempts at climbing. My belief that you are across the room is shown by my disposition to smile and walk over there. Crane suggests that in fact rather than sets of discrete beliefs what we really have is a worldview; a kind of holistic network in which individual nodes do not have individual readings. Ascriptions of belief, like attributing to someone a belief in a particular proposition, are really models that bring out particular aspects of their overall worldview. This has the advantage of explaining several things. One is that we can attribute the same belief – “Parliament is on the bank of the Thames” – to different people even though the content of their beliefs actually varies (because, for example, they have slightly different understandings about what ‘Parliament’ is).
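The dispositional reading can be caricatured in code too – again purely my own illustration, not Crane’s formalism, with the `dog_worldview` function and its situations invented for the example. The worldview is an opaque mapping from situations to behaviour, and a belief ascription is just a model that fits it:

```python
# Toy sketch of belief-as-disposition. The 'worldview' stores no
# propositions at all; it simply maps situations to behaviour.

def dog_worldview(situation: str) -> str:
    # Opaque dispositions: nowhere is "there is a cat in the tree"
    # written down or represented as a sentence.
    if situation == "near the tree":
        return "bark and try to climb"
    return "wander off"

def ascribe(worldview, situation: str, expected_behaviour: str) -> bool:
    """Ascribing a belief = checking that the worldview disposes the
    creature to the behaviour the ascribed belief would predict."""
    return worldview(situation) == expected_behaviour
```

Saying “the dog believes there is a cat in the tree” then comes out true because the ascription models the dispositions correctly, without any claim that the proposition is stored anywhere – which is how the same belief can be attributed to creatures whose worldviews differ in detail.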

It also allows scope for the vagueness of our beliefs, the ease with which we hold contradictory ones, and the interesting point that sometimes we’re not actually sure what we believe and may have to mull things over before reaching only tentative conclusions about it. Perhaps we too are just modelling as best we can the blobby ambiguity of our worldview.

Crane, in fact, wants to make all belief unconscious. Thinking is not believing, he says, although what I think and what I believe are virtually synonyms in normal parlance. One of the claimed merits of his approach is that if beliefs are essentially dispositions, it explains how they can be held continuously and not disappear when we are asleep or unconscious. Belief, on this view, is a continuous state; thinking is a temporary act, one which may well model your beliefs and turn them into explicit form. Without signing up to psychoanalytical doctrines wholesale, Crane is content that his thinking chimes with both Freudian and older ideas of the unconscious, putting the conscious interpretation of unconscious belief at the centre.

This all seems pretty sensible, though it does seem Crane is getting an awful lot of very difficult work done by the idea of a ‘worldview’, sketched here in only vague terms. It used to be easy to get away with this kind of vagueness in philosophy of mind, but these days I think there is always a ghostly AI researcher standing at the philosopher’s shoulder and asking how we set about doing the engineering, often a bracing challenge. How do we build a worldview into a robot if it’s not propositional? Some of Crane’s phraseology suggests he might be hoping that the concept of the worldview, with its network of nodes with no explicit meaning, might translate into modern neural network-based practice. Maybe it could; but even if it does, that surely won’t do for philosophers. The AI tribe will be happy if the robot works; but the philosophers will still want to know exactly how this worldview gizmo does its thing. We don’t know, but we know the worldview is already somehow a representation of the world. You could argue that the intentionality of our beliefs, the very thing Crane set out to account for, is in the event exactly what he ends up not explaining.

There are some problems about resting on dispositions, too. Barking at a tree because I believe there’s a cat up there is one thing; my beliefs about metaphysics, by contrast, seem very remote from any simple behavioural dispositions of that kind. I suppose they would have to be conditional dispositions to utter or write certain kinds of words in the context of certain discussions. It’s a little hard to think that when I’m doing philosophy what I’m really doing is modelling some of my own particularly esoteric pre-existing authorial dispositions. And what dispositions would they be? I think they would have to be something like dispositions to write down propositions like ‘nominalism is false’ – but didn’t we start off down this path because we were uncomfortable with the idea that the content of beliefs is propositional?

Moreover, Crane wants to say that our beliefs are preserved while we are asleep because we still have the relevant dispositions. Aren’t our beliefs similarly preserved when we’re dead? It would seem odd to say that Abraham Lincoln did not believe slavery should be abolished while he was asleep, certainly, but it would seem equally odd to say he stopped believing it when he died. But does he still have dispositions to speak in certain ways? If we insist on this line it seems the only way to make it intelligible is to fall back on counterfactuals (if he were still alive Lincoln would still be disposed to say that it was right to abolish slavery…) but counterfactuals notoriously bring a whole library of problems with them.

I’d also sort of like to avoid paring down the role of the conscious. I don’t think I’m quite ready to pack all belief away into the attic of the unconscious. Still, though Crane’s account may have its less appealing spots I do rather like the idea of a holistic worldview as the central bearer of belief.