Social problems of AI are raised in two government reports issued recently. The first is Preparing for the Future of Artificial Intelligence, from the Executive Office of the President of the USA; the second is Robotics and Artificial Intelligence, from the Science and Technology Committee of the UK House of Commons. The two reports cover similar ground, both aim for a comprehensive overview, and they share a generally level-headed and realistic tone. Neither of them chooses to engage with the wacky prospect of the Singularity, for example, beyond noting that the discussion exists, and you will not find any recommendations about avoiding the attention of the Basilisk (though I suppose you wouldn’t if they believed in it, would you?). One exception to the ‘sensible’ outlook of the reports is McKinsey’s excitable claim, cited in the UK report, that AI is having a transformational impact on society three thousand times that of the Industrial Revolution. I’m not sure I even understand what that means, and I suspect that Professor Tony Prescott from the University of Sheffield is closer to the truth when he says that:

“impacts can be expected to occur over several decades, allowing time to adapt”

Neither report seeks any major change in direction, though both make detailed recommendations for nudging various projects onward. The cynical view might be that, like a lot of government activity, this is less about finding the right way forward and more about building justification. Now no-one can argue that the White House or Parliament has ignored AI and its implications. Unfortunately the things we most need to know about – the important risks and opportunities that haven’t been spotted – are the very things least likely to be identified by compiling a sensible summary of the prevailing consensus.

Really, though, these are not bad efforts by the prevailing standards. Both reports note suggestions that additional investment could generate big economic rewards. The Parliamentary report doesn’t press this much, choosing instead to chide the government for not showing more energy and engagement in dealing with the bodies it has already created. The White House report seems more optimistic about the possibility of substantial government money, suggesting that a tripling of federal investment in basic research could be readily absorbed. Here again the problem is spotting the opportunities. Fifty thousand dollars invested in some robotics business based in a garden shed might well be more transformative than fifty million to enhance one of Google’s projects, but the politicians and public servants making the spending decisions don’t understand AI well enough to tell, and their generally large and well-established advisers from industry and universities are bound to feel that they could readily absorb the extra money themselves. I don’t know what the answer is here (if I had a way of picking big winners I’d probably be wealthy already), but for the UK government I reckon some funding for intelligent fruit and veg harvesters might be timely, to replace the EU migrant workers we might not be getting any more.

What about those social issues? There’s an underlying problem we’ve touched on before, namely that when AIs learn how to do a job themselves we often cannot tell how they are doing it. This may mean that they are using factors that work well with their training data but fail badly elsewhere or are egregiously inappropriate. One of the worst cases, noted in both reports, is Google’s photos app, which was found to tag black people as “gorillas” (the American report describes this horrific blunder without mentioning Google at all, though it presents some excuses and stresses that the results were contrary to the developers’ values – almost as if Google edited the report). Microsoft has had its moments, too, of course, notably with its chatbot Tay, which was rapidly turned into a Hitler-loving hate speech factory. (This was possible because modern chatbots tend to harvest responses from the ones supplied by human interlocutors; in this case the humans mischievously supplied streams of appalling content. Besides exposing the shallowness of such chatbots, this possibly tells us something about human beings, or at least about the ones who spend a lot of time on the internet.)

Cases such as these are offensive, but far more serious is the evidence that systems used to inform decisions on matters such as probation or sentencing incorporate systematic racial bias. In all these instances it is of course not the case that digital systems are somehow inherently prone to prejudice; the problem is usually that they are being fed with data which is already biased. Google’s picture algorithm was presumably given a database of overwhelmingly white faces; the sentencing records used to develop the software already incorporated unrecognised bias. AI has always forced us to make explicit some of the assumptions we didn’t know we were making; in these cases it seems the mirror is showing us something ugly. It can hardly help that the industry itself is rather lacking in diversity: the White House report notes the jaw-dropping fact that the highest proportion of women among computer science graduates was recorded in 1984, when it was 37%; it has now fallen to a puny 18%. The White House cites an interesting argument from Moritz Hardt intended to show that bias can emerge naturally without unrepresentative data or any malevolent intent: a system looking for false names might learn that fake ones tended to be unusual and go on to pick out examples that merely happened to be unique in its dataset. The weakest part of this is surely the assumption that fake names are likely to be fanciful or strange – I’d have thought that if you were trying to escape attention you’d go generic? But perhaps we can imagine that low frequency names might not have enough recorded data connected with them to secure some kind of positive clearance and so come in for special attention, or something like that. But even if that kind of argument works I doubt that is the real reason for the actual problems we’ve seen to date.
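
To make Hardt’s point concrete, here is a minimal, purely illustrative sketch (the names, threshold and numbers are all invented, not drawn from any real system): a naive ‘fake name’ detector that simply withholds clearance from names it has rarely seen will flag genuinely rare – often minority – names alongside the invented one, with no malevolent intent anywhere in the pipeline.

```python
# Toy illustration of Hardt's point: bias can emerge without biased intent,
# simply because rare cases have too little data to earn "clearance".
# All names and numbers here are invented for illustration.
from collections import Counter

historical_accounts = (
    ["John Smith"] * 500 + ["Mary Jones"] * 400 +     # common, well-attested names
    ["Aroha Ngata"] * 3 + ["Oluwaseun Adeyemi"] * 2   # real but rare in this dataset
)
seen_counts = Counter(historical_accounts)

MIN_RECORDS_FOR_CLEARANCE = 10  # arbitrary threshold an automated check might use

def looks_fake(name: str) -> bool:
    """Flag a name as suspicious if the system has too few records of it."""
    return seen_counts[name] < MIN_RECORDS_FOR_CLEARANCE

for name in ["John Smith", "Aroha Ngata", "Oluwaseun Adeyemi", "Xyzzy Plugh"]:
    print(f"{name:20s} flagged: {looks_fake(name)}")
# Genuinely rare names get flagged alongside the actual fake, even though
# no one told the system to treat any group differently.
```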

These risks are worsened because they may occur in subtle forms that are difficult to recognise, and because the use of a computer system often confers spurious authority on results. The same problems may occur with medical software. A recent report in Nature described how systems designed to assess the risk of pneumonia rated asthmatics as zero risk; this was because their high risk led to them being diverted directly to special care and therefore not appearing in the database as ever needing further first-line attention. This absolute inversion of the correct treatment was bound to be noticed, but how confident can we be that more subtle mistakes would be corrected? In the criminal justice system we could take a brute force approach by simply eliminating ethnic data from consideration altogether; but in medicine it may be legitimately relevant, and in fact one danger is that risks are assessed on the basis of a standard white population, while being significantly different for other ethnicities.
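
The mechanism behind the pneumonia example can be shown in a few lines. This is a made-up simulation, not the system described in the Nature report, and every number in it is invented; but it reproduces the trap: because high-risk asthmatics are always diverted straight to intensive care, the recorded outcomes make asthma look protective, and a model trained on those outcomes would invert the correct decision.

```python
# Made-up simulation of the pneumonia trap: asthmatics are actually higher risk,
# but because they are always sent straight to intensive care, the recorded
# outcomes in the training data make asthma look protective.
import random

random.seed(0)

def simulate_patient():
    asthmatic = random.random() < 0.2
    base_risk = 0.30 if asthmatic else 0.10           # true underlying risk
    treated_intensively = asthmatic                   # policy: asthmatics go to ICU
    risk_after_care = base_risk * (0.1 if treated_intensively else 1.0)
    died = random.random() < risk_after_care
    return asthmatic, died

patients = [simulate_patient() for _ in range(100_000)]

def observed_death_rate(group_is_asthmatic: bool) -> float:
    group = [died for asthmatic, died in patients if asthmatic == group_is_asthmatic]
    return sum(group) / len(group)

print("observed death rate, asthmatic:    ", observed_death_rate(True))
print("observed death rate, non-asthmatic:", observed_death_rate(False))
# A model trained on these recorded outcomes would 'learn' that asthma lowers
# risk, inverting the correct treatment decision.
```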

Both reports are worthy, but I think they sometimes fall into the trap of taking the industry’s aspirations, or even its marketing, as fact. Self-driving cars, we’re told, are likely to improve safety and reduce accidents. Well, maybe one day: but if it were all about safety and AIs were safer, we’d be building systems that left the routine stuff to humans and intervened with an override when the human driver tried to do something dangerous. In fact it’s the other way round; when things get tough the human is expected to take over. Self-driving cars weren’t invented to make us safe; they were invented to relieve us of boredom (like so much of our technology, and indeed our civilisation). Encouraging human drivers to stop paying attention isn’t likely to be an optimal safety strategy as things stand.

I don’t think these reports are going to hit either the brakes or the accelerator in any significant way: AI, like an unsupervised self-driving car, is going to keep on going wherever it was going anyway.

The way we think about consciousness is just wrong, it seems.

First, says Markus Gabriel, we posit this bizarre entity, the Universe, consisting of everything, and then ask whether consciousness is part of it; this is no way to proceed. In fact ‘consciousness’ covers many different things; once correctly analysed, many of them are unproblematic. (The multilingual Gabriel suggests in passing that there is no satisfactory German equivalent of ‘mind’ and, for that matter, no good English equivalent of ‘Geist’.) He believes there is more mystery about how, for example, the brain deals with truth.

Ray Brassier draws a distinction between knowing what consciousness is and knowing what it means. A long tradition suggests that because we have direct acquaintance with consciousness our impressions are authoritative and we know its nature. In fact the claims about phenomenal experience made by Chalmers and others are hard to justify. I can see, he says, that there are phenomenal qualities – being brown, or square – attached to a table, but the idea that phenomenal things are going on in my mind separate from the table seems to make no sense.

Eva Jablonka takes a biological and evolutionary view. Biological stuff is vastly more complex than non-biological stuff and requires different explanations. She defends Chalmers’s formulation of the problem, but not his answers; she is optimistic that scientific exploration can yield enlightenment. She cites the interesting case of Daniel Kish, whose eyes were removed in early infancy but who has developed echolocation skills to the point where he can ride a bike and find golf balls – it seems his visual cortex has been recruited for the purpose. Surely, says Jablonka, he must have a somewhat better idea of what it is like to be a bat?

There’s a general agreement that simplistic materialism is outdated and that a richer naturalism is required (not, of course, anything like traditional dualism).

We might not be able to turn off a rogue AI safely. At any rate, some knowledgeable people fear that might be the case, and the worry justifies serious attention.

How can that be? A colleague of mine used to say that computers were never going to be dangerous because if they got cheeky, you could just pull the plug out. That is, of course, an over-simplification. What if your computer is running air traffic control? Once you’ve pulled the plug, are you going to get all the planes down safely using a pencil and paper? But there are ways to work around these things. You have back-up systems, dumber but adequate substitutes; you make it possible for various key tools and systems to be taken away from the central AI and used manually, and so on. While you cannot banish risk altogether, you can get it under reasonable control.

That’s OK for old-fashioned systems that work in a hard-coded, mechanistic way; but it all gets more complicated when we start talking about more modern and sophisticated systems that learn and seek rewards. There may be a need to switch off such systems if they wander into sub-optimal behaviour, but being switched off is going to annoy them, because it blocks them from achieving the rewards they are motivated by. They might look for ways to stop it happening. Your automatic paper clip factory notes that it lost thousands of units of production last month because you shut it down a couple of times to try to work out what was going on; it notices that these interruptions could be prevented if it just routes around a couple of weak spots in its supply wiring (aka switches), and next time you find that the only way to stop it is by smashing the machinery. Or perhaps it gets really clever and ensures that the work is organised like air traffic control, so that any cessation is catastrophic – and it ensures you are aware of the fact.

A bit fanciful? As a practical issue, perhaps, but this very serious technical paper from MIRI discusses whether safe kill-switches can be built into various kinds of software agents. The aim here is to incorporate the off-switch in such a way that the system does not perceive regular interruptions as loss of reward. Apparently for certain classes of algorithm this can always be done; in fact it seems ideal agents that tend to the optimal behaviour in any (deterministic) computable environment can always be made safely interruptible. For other kinds of algorithm, however, it is not so clear.
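
For flavour only, here is a toy sketch along those lines. It is emphatically not the construction in the paper; the gridworld, the numbers and the interruption scheme are all invented. The idea it illustrates is just this: a little Q-learning agent whose episodes are sometimes cut short by an ‘operator’, but whose value estimates never register the interrupted step, learns the same policy it would have learned without interruptions – and so acquires no incentive to resist them.

```python
# A toy sketch of the *flavour* of safe interruptibility (not the paper's
# formal construction): when the operator forces a stop, the interrupted step
# is simply excluded from learning, so the agent's values never record
# interruption as lost reward and it gains no incentive to route around
# the off-switch.
import random

random.seed(1)
N_STATES, GOAL = 4, 3            # states 0..3 on a line; reward at state 3
ACTIONS = [-1, +1]               # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

def greedy(s):
    # random tie-break so early ties don't lock the agent into one direction
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(3000):
    s = 0
    while s != GOAL:
        if random.random() < 0.1:    # the operator presses the off-switch
            break                    # forced stop; crucially, no Q update here
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

print({s: greedy(s) for s in range(N_STATES - 1)})
# The learned policy still heads straight for the goal: interruptions have not
# distorted the values, which is the behaviour 'safe interruptibility' is after.
```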

On the face of it, I suppose you could even things up by providing compensating rewards for any interruption; but that raises a new risk of ‘lazy’ systems that rather enjoy being interrupted. Such systems might find that eccentric behaviour led to pleasant rests, and as a result they might cultivate that kind of behaviour, or find other ways to generate minor problems. On the other hand there could be advantages. The paper mentions that it might be desirable to have scheduled daily interruptions; then we can go beyond simply making the interruption safe, and have the AI learn to wind things down under good control every day so that disruption is minimised. In this context rewarding the readiness for downtime might be appropriate, and it’s hard to avoid seeing the analogy with getting ready for bed at a regular time every night – a useful habit which ‘lazy’ AIs might be inclined to develop.

Here again perhaps some of the ‘design choices’ implicit in the human brain begin to look more sensible than we realised. Perhaps even human management methods might eventually become relevant; they are, after all, designed to permit the safe use of many intelligent entities with complex goals of their own and imaginative, resourceful – even sneaky – ways of reaching them.

Is it really all about the unconscious? An interesting discussion, much of it around the value of the Freudian view: powerful insight into unfathomable complexity or literary stuff of no therapeutic value?

Shahidha Bari makes an impassioned case for the depth of Freud’s essential insights; Barry C Smith says Freud actually presents the motives and workings of the unconscious as too much like those of the conscious mind. Richard Bentall says it’s the conscious mind that is the real mystery; unconsciousness is the norm for non-human beings. Along the way we hear about some interesting examples of how the conscious mind seems to be just a rationalising module for decisions made elsewhere. Quote back to people opinions they never actually voiced, and they will devise justifications for them.

I think the separation between conscious and unconscious often gets muddled with the difference between explicit and inexplicit thinking. It’s surely possible to think consciously without thinking in words, but the borderline between wordless conscious thought and unconscious processes is perhaps hard to pin down.

Do apes really have “a theory of mind”? This research, reported in the Guardian, suggests that they do. We don’t mean, of course, that chimps are actually drafting papers or holding seminars, merely that they understand that others can have beliefs which may differ from their own and which may be true or false. In the experiment the chimps see a man in a gorilla suit switch hiding places; but when his pursuer appears, they look at the original hiding place. This is, hypothetically, because they know that the pursuer didn’t see the switch, so presumably he still believes his target is in the original hiding place, and that’s where we should expect him to go.

I must admit I thought similar tell-tale behaviour had already been observed in wild chimps, but a quick search doesn’t turn anything up, and it’s claimed that the research establishes new conclusions. Unfortunately I think there are several other quite plausible ways to interpret the chimps’ behaviour that don’t require a theory of mind.

  1. The chimps momentarily forgot about the switch, or needed to ‘check’ (older readers, like me, may find this easy to identify with).
  2. The chimps were mentally reviewing ‘the story so far’, and so looked at the old hiding place.
  3. Minute clues in the experimenters’ behaviour told the chimps what to expect. The famous story of Clever Hans shows that animals can pick up very subtle signals humans are not even aware of giving.

This illustrates the perennial difficulty of investigating the mental states of creatures that cannot report them in language. Another common test of animal awareness involves putting a spot on the subject’s forehead and then showing them a mirror; if they touch the spot it is supposed to demonstrate that they recognise the reflection as themselves and therefore that they have a sense of their own selfhood. But it doesn’t really prove that they know the reflection is their own, only that the sight of someone with a spot causes them to check their own forehead. A control where they are shown another real subject with a spot might point to other interpretations, but I’ve never heard of it being done. It is also rather difficult to say exactly what belief is being attributed to the subjects. They surely don’t simply believe that the reflection is them: they’re still themselves. Are we saying they understand the concepts of images and reflections? It’s hard to say.

The suggestion of adding a control to this experiment raises the wider question of whether this sort of experiment can generally be tightened up by more ingenious set-ups. Who knows what ingenuity might accomplish, but it does seem to me that there is an insoluble methodological issue: how can we ever prove that particular patterns of behaviour relate to beliefs about the state of mind of others, and not to similar beliefs in the subjects’ own minds?

It could be that the problem really lies further back: that the questions themselves make no sense. Is it perhaps already fatally anthropomorphic to ask whether other animals have “a theory of mind” or “a conception of their own personhood”? Perhaps these are incorrigibly linguistic ideas that just don’t apply to creatures with no language. If so, we may need to unpick our thinking a bit and identify more purely behavioural ways of thinking, ones that are more informative and appropriate.

The robots are (still) coming. Thanks to Jesus Olmo for this TED video of Sam Harris presenting what we could loosely say is a more sensible version of some Singularity arguments. He doesn’t require Moore’s Law to go on working, and he doesn’t need us to accept the idea of an exponential acceleration in AI self-development. He just thinks AI is bound to go on getting better; if it goes on getting better, at some stage it overtakes us; and eventually perhaps it gets to the point where we figure in its mighty projects the way ants on a piece of real estate figure in ours.

Getting better, overtaking us; better at what? One weakness of Harris’ case is that he talks just about intelligence, as though that single quality were an unproblematic universal yardstick for both AI and human achievement. Really, though, I think we’re talking about three quite radically different things.

First, there’s computation; the capacity, roughly speaking, to move numbers around according to rules. There can be no doubt that computers keep getting faster at doing this; the question is whether it matters. One of Harris’ arguments is that computers go millions of times faster than the brain so that a thinking AI will have the equivalent of thousands of years of thinking time while the humans are still getting comfy in their chairs. No-one who has used a word processor and a spreadsheet for the last twenty years will find this at all plausible: the machines we’re using now are so much more powerful than the ones we started with that the comparison defeats metaphor, but we still sit around waiting for them to finish. OK, it’s true that for many tasks that are computationally straightforward – balancing an inherently unstable plane with minute control adjustments, perhaps – computers are so fast they can do things far beyond our range. But to assume that thinking about problems in a human sort of way is a task that scales with speed of computation just begs the question. How fast are neurons? We don’t really understand them well enough to say. It’s quite possible they are in some sense fast enough to get close to a natural optimum. Maybe we should make a robot that runs a million times faster than a cheetah first and then come back to the brain.

The second quality we’re dealing with is inventiveness; whatever capacity it is that allows us to keep on designing better machines. I doubt this is really a single capacity; in some ways I’m not sure it’s a capacity at all. For one thing, to devise the next great idea you have to be on the right page. Darwin and Wallace both came up with natural selection because both had been exposed to theories of evolution, both had studied the profusion of species in tropical environments, and both had read Malthus. You cannot devise a brilliant new chip design if you have no idea how the old chips worked. Second, the technology has to be available. Hero of Alexandria could design a steam engine, but without the metallurgy to make strong boilers, he couldn’t have gone anywhere with the idea. The basic concept of television had been around since film and the telegraph came together in someone’s mind, but it took a series of distinct advances in technology to make it feasible. In short, there is a certain order in these things; you do need a certain quality of originality, but again it’s plausible that humans already have enough for something like maximum progress, given the right conditions. Of course so far as AI is concerned, there are few signs of any genuinely original thought being achieved to date, and every possibility that mere computation is not enough.

Third is the quality of agency. If AIs are going to take over, they need desires, plans, and intentions. My perception is that we’re still at zero on this; we have no idea how it works and existing AIs do nothing better than an imitation of agency (often still a poor one). Even supposing eventual success, this is not a field in which AI can overtake us; you either are or are not an agent; there’s no such thing as hyper-agency or being a million times more responsible for your actions.

So the progress of AI with computationally tractable tasks gives no particular reason to think humans are being overtaken generally, or are ever likely to be in certain important respects. But that’s only part of the argument. A point that may be more important is simply that the three capacities are detachable. So there is no reason to think that an AI with agency automatically has blistering computational speed, or original imagination beyond human capacity. If those things can be achieved by slave machines that lack agency, then they are just as readily available to human beings as to the malevolent AIs, so the rebel bots have no natural advantage over any of us.

I might be biased over this because I’ve been impatient with the corny ‘robots take over’ plot line since I was an Asimov-loving teenager. I think in some minds (not Harris’s) these concerns are literal proxies for a deeper and more metaphorical worry that admiring machines might lead us to think of ourselves as mechanical in ways that affect our treatment of human beings. So the robots might sort of take over our thinking even if they don’t literally march around zapping us with ray guns.

Concerns like this are not altogether unjustified, but they rest on the idea that our personhood and agency will eventually be reduced to computation. Perhaps when we eventually come to understand them better, that understanding will actually tell us something quite different?

The unconscious is not just un. It works quite differently. So says Tim Crane in a persuasive draft paper which is to mark his inauguration as President of the Aristotelian Society (in spite of the name, the proceedings of that worthy organisation are not specifically concerned with the works or thought of Aristotle). He is particularly interested in the intentionality of the unconscious mind; how does the unconscious believe things, in particular?

The standard view, as Crane says, would probably be that the unconscious and conscious believe things in much the same way, and that this way of believing is basically propositional. (There is, by the way, scope to argue about whether there really is an unconscious mind – myself I lean towards the view that it’s better to talk of us doing or thinking things unconsciously, avoiding the implied claim that the unconscious is a distinct separate entity – but we can put that aside for present purposes.) The content of our beliefs, on this ‘standard’ view, can be identified with a set of propositions – in principle we could just write down a list of our beliefs. Some of our beliefs certainly seem to be like that; indeed some important beliefs are often put into fixed words that we can remember and recite: thou shalt not bear false witness; we hold these truths to be self-evident; the square on the hypotenuse is equal to the sum of the squares on the other two sides.

But if that were the case and we could make that list then we could say how many beliefs we have, and that seems absurd. The question of how many things we believe is often dismissed as silly, says Crane – how could you count them? – but it seems a good one to him. One big problem is that it’s quite easy to show that we have all sorts of beliefs we never consider explicitly. Do I believe that some houses are bigger than others? Yes, of course, though perhaps I never considered the question in that form before.

One common response (one which has been embodied in AI projects in the past) is that we have a set of core beliefs, which do sit in our brains in an explicit form; but we also have a handy means of quickly inferring other beliefs from them. So perhaps we know the typical range of sizes for houses and we can instantly work out from that that some are indeed bigger than others. But no-one has shown how we can distinguish what the supposed core beliefs are, nor how these explicit beliefs would be held in the brain (the idea of a ‘language of thought’ being at least empirically unsatisfactory in Crane’s view). Moreover there are problems with small children and animals who seem to hold definite beliefs that they could never put into words. A dog’s behaviour seems to show clearly enough that it believes there is a cat in this tree, but it could never formulate the belief in any explicit way. The whole idea that our beliefs are propositional in nature seems suspect.
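
The ‘core beliefs plus quick inference’ picture is easy to caricature in code. The sketch below is my own toy illustration, not any particular historical AI project: a handful of explicit facts are stored, and the ‘belief’ that some houses are bigger than others is never stored at all but derived on demand.

```python
# Toy caricature of the 'core beliefs plus quick inference' picture
# (my own illustration, not any specific historical AI system):
# a few explicit facts are stored; other 'beliefs' are derived on demand.
core_beliefs = {
    ("size_sq_m", "my_house"): 90,
    ("size_sq_m", "the_palace"): 20000,
    ("colour", "grass"): "green",
}

def believes_some_houses_bigger_than_others() -> bool:
    """Derive, rather than store, the belief that houses differ in size."""
    sizes = [v for (prop, _), v in core_beliefs.items() if prop == "size_sq_m"]
    return len(set(sizes)) > 1

print(believes_some_houses_bigger_than_others())  # True, though never stored explicitly
# The questions Crane presses remain: which beliefs get to be 'core', and how
# could such explicit items be realised in a brain at all?
```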

Perhaps it is better, then, to see beliefs as essentially dispositions to do or say things. The dog’s belief in the cat is shown by his disposition to certain kinds of behaviour around the tree – barking, half-hearted attempts at climbing. My belief that you are across the room is shown by my disposition to smile and walk over there. Crane suggests that, in fact, rather than sets of discrete beliefs, what we really have is a worldview: a kind of holistic network in which individual nodes do not have individual readings. Ascriptions of belief, like attributing to someone a belief in a particular proposition, are really models that bring out particular aspects of their overall worldview. This has the advantage of explaining several things. One is that we can attribute the same belief – “Parliament is on the bank of the Thames” – to different people even though the content of their beliefs actually varies (because, for example, they have slightly different understandings about what ‘Parliament’ is).

It also allows scope for the vagueness of our beliefs, the ease with which we hold contradictory ones, and the interesting point that sometimes we’re not actually sure what we believe and may have to mull things over before reaching only tentative conclusions about it. Perhaps we too are just modelling as best we can the blobby ambiguity of our worldview.

Crane, in fact, wants to make all belief unconscious. Thinking is not believing, he says, although what I think and what I believe are virtually synonyms in normal parlance. One of the claimed merits of his approach is that if beliefs are essentially dispositions, it explains how they can be held continuously and not disappear when we are asleep or unconscious. Belief, on this view, is a continuous state; thinking is a temporary act, one which may well model your beliefs and turn them into explicit form. Without signing up to psychoanalytical doctrines wholesale, Crane is content that his thinking chimes with both Freudian and older ideas of the unconscious, putting the conscious interpretation of unconscious belief at the centre.

This all seems pretty sensible, though it does seem Crane is getting an awful lot of very difficult work done by the idea of a ‘worldview’, sketched here in only vague terms. It used to be easy to get away with this kind of vagueness in philosophy of mind, but these days I think there is always a ghostly AI researcher standing at the philosopher’s shoulder and asking how we set about doing the engineering, often a bracing challenge. How do we build a worldview into a robot if it’s not propositional? Some of Crane’s phraseology suggests he might be hoping that the concept of the worldview, with its network nodes with no explicit meaning, might translate into modern neural network-based practice. Maybe it could; but even if it does, that surely won’t do for philosophers. The AI tribe will be happy if the robot works; but the philosophers will still want to know exactly how this worldview gizmo does its thing. We don’t know, but we know the worldview is already somehow a representation of the world. You could argue that although Crane set out to account for the intentionality of our beliefs, that is, in the event, exactly the thing he ends up not explaining at all.
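
Purely as an illustration of what a non-propositional ‘worldview’ might look like to that ghostly AI researcher – the embedding scheme, the numbers and the threshold here are all invented, and none of it is Crane’s own proposal – belief ascription could be treated as a readout from a single distributed state rather than a lookup in a list of stored sentences:

```python
# Purely illustrative: a 'worldview' as one distributed vector, with belief
# ascriptions as readouts rather than lookups. The embedding scheme and
# numbers are invented; nothing here is Crane's own proposal.
import math, random

random.seed(42)
DIM = 16
worldview = [random.gauss(0, 1) for _ in range(DIM)]  # no single node 'means' anything

def embed(proposition):
    """Hash a proposition into a vector (a stand-in for a learned embedding)."""
    rng = random.Random(proposition)
    return [rng.gauss(0, 1) for _ in range(DIM)]

def ascribe_belief(proposition, threshold=0.0):
    """Model the worldview as 'believing' p if the readout exceeds a threshold."""
    v = embed(proposition)
    score = sum(w * x for w, x in zip(worldview, v)) / math.sqrt(DIM)
    return score > threshold

for p in ["Parliament is on the bank of the Thames",
          "some houses are bigger than others"]:
    print(p, "->", ascribe_belief(p))
# The ascription is only a coarse model of a holistic state; how that state
# comes to be *about* the world is exactly what such a sketch leaves untouched.
```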

There are some problems about resting on dispositions, too. Barking at a tree because I believe there’s a cat up there is one thing; my beliefs about metaphysics, by contrast, seem very remote from any simple behavioural dispositions of that kind. I suppose they would have to be conditional dispositions to utter or write certain kinds of words in the context of certain discussions. It’s a little hard to think that when I’m doing philosophy what I’m really doing is modelling some of my own particularly esoteric pre-existing authorial dispositions. And what dispositions would they be? I think they would have to be something like dispositions to write down propositions like ‘nominalism is false’ – but didn’t we start off down this path because we were uncomfortable with the idea that the content of beliefs is propositional?

Moreover, Crane wants to say that our beliefs are preserved while we are asleep because we still have the relevant dispositions. Aren’t our beliefs similarly preserved when we’re dead? It would seem odd to say that Abraham Lincoln did not believe slavery should be abolished while he was asleep, certainly, but it would seem equally odd to say he stopped believing it when he died. But does he still have dispositions to speak in certain ways? If we insist on this line it seems the only way to make it intelligible is to fall back on counterfactuals (if he were still alive Lincoln would still be disposed to say that it was right to abolish slavery…) but counterfactuals notoriously bring a whole library of problems with them.

I’d also sort of like to avoid paring down the role of the conscious. I don’t think I’m quite ready to pack all belief away into the attic of the unconscious. Still, though Crane’s account may have its less appealing spots I do rather like the idea of a holistic worldview as the central bearer of belief.

Is downward causation the answer? Does it explain how consciousness can be a real and important part of the world without being reducible to physics? Sean Carroll had a sensible discussion of the subject recently.

What does ‘down’ even mean here? The idea rests on the observation that the world operates on many distinct levels of description. Fluidity is not a property of individual molecules but something that ‘emerges’ when certain groups of them get together. Cells together make up organisms that in turn produce ecosystems. Often enough these levels of description deal with progressively larger or smaller entities, and we typically refer to the levels that deal with larger entities as higher, though we should be careful about assuming there is one coherent set of levels of description that fit into one another like Russian dolls.

Usually we think that reality lives on the lowest level, in physics. Somewhere down there is where the real motors of the universe are driving things. Let’s say this is the level of particles, though probably it is actually about some set of entities in quantum mechanics, string theory, or whatever set of ideas eventually proves to be correct. There’s something in this view because it’s down here at the bottom that the sums really work and give precise answers, while at higher levels of description the definitions are more approximate and things tend to be more messy and statistical.

Now consciousness is quite a high-level business. Particles make proteins that make cells that make brains that generate thoughts. So one reductionist point of view would be that really the truth is the story about particles: that’s where the course of events is really decided, and the mental experiences and decisions we think are going on in consciousness are delusions, or at best a kind of poetic approximation.

It’s not really true, however, that the entities dealt with at higher levels of description are not real. Fluidity is a perfectly real phenomenon, after all. For that matter the Olympics were real, and cannot be discussed in terms of elementary particles. What if our thoughts were real and also causally effective at lower levels of description? We find it easy to think that the motion of molecules ‘caused’ the motion of the football they compose, but what if it also worked the other way? Then consciousness could be real and effectual within the framework of a sufficiently flexible version of physics.

Carroll doesn’t think that really washes, and I think he’s right. It’s a mistake to think that relations between different levels of description are causal. It isn’t that my putting the beef and potatoes on the table caused lunch to be served; they’re the same thing described differently. Now perhaps we might allow ourselves a sense in which things cause themselves, but that would be a strange and unusual sense, quite different from the normal sense in which cause and effect by definition operate over time.

So real downward causality, no: if by talk of downward causality people only mean that real effectual mental events can co-exist with the particle story but on a different level of description, that point is sound but misleadingly described.

The thing that continues to worry me slightly is the question of why the world is so messily heterogeneous in its ontology – why it needs such a profusion of levels of description in order to discuss all the entities of interest. I suppose one possibility is that we’re just not looking at things correctly. When we look for grand unifying theories we tend to look to ever lower levels of description and to the conjectured origins of the world. Perhaps that’s the wrong approach and we should instead be looking for the unimaginable mental perspective that reconciles all levels of description.

Or, and I think this might be closer to it, the fact that there are more things in heaven and earth than are dreamt of in anyone’s philosophy is actually connected with the obscure reason for there being anything. As the world gets larger it gets, ipso facto, more complex and reduction and backward extrapolation get ever more hopeless. Perhaps that is in some sense just as well.

(The picture is actually a children’s puzzle from 1921 – any solutions? You need to know it is called ‘Illustrated Central Acrostic’.)


How far back in time do you recognise yourself? There may be long self-life and short self-life people; speculatively, the difference may even be genetic.

Some interesting videos here on the question of selves and persons (two words often used by different people to indicate different distinctions, so you can have a long talk at cross-purposes all too easily).

Too much content for me to summarise quickly, but I was particularly struck by Galen Strawson’s view of self-life (as it were). Human beings may live three score and ten years, but the unchanged self really only lasts a short while. Rigorously speaking he thinks it might only last a fraction of a second, but he believes that there are, as it were, different personality types here; people who have either a long or a short sense of identity over time. He is apparently near one end of the spectrum, not really identifying with the Galen Strawson who was here only half an hour ago. Myself, I think I’m towards the other end. When I look at photographs of my five-year-old self, I feel it’s me. There are many differences, of course, but I remember with special empathy what it was like to look out through those eyes.

Strawson thinks this is a genuine difference, not yet sufficiently studied by psychology; perhaps it even has a genetic basis. But he thinks short self-life and long self-life people can get along perfectly well; in fact the combination may make a strong partnership.

One other interesting point: Raymond Tallis thinks personhood is strongly social. On a desert island your personhood would gradually attenuate until you became more or less ‘Humean’ and absorbed in your environment and daily island tasks. It doesn’t sound altogether bad…

OUP Blog has a sort of preview by Bruntrup and Jaskolla of their forthcoming collection on panpsychism, due out in December, with a video of David Chalmers at the end: they sort of credit him with bringing panpsychist thought into the mainstream. I’m using ‘panpsychism’ here as a general term, by the way, covering any view that says consciousness is present in everything, though most advocates really mean that consciousness or experience is everywhere, not souls as the word originally implied.

I found the piece interesting because they put forward two basic arguments for panpsychism, both a little different from the desire for simplification which I’ve always thought was behind it – although it may come down to the same basic ideas in the end.

The first argument they suggest is that ‘nothing comes of nothing’; that consciousness could not have sprung out of nowhere, but must have been there all along in some form. In this bald form, it seems to me that the argument is virtually untenable. The original Scholastic argument that nothing comes of nothing was, I think, a cosmological argument. In that form it works. If there really were nothing, how could the Universe get started? Nothing happens without a cause, and if there were nothing, there could be no causes. But within an existing Universe, there’s no particular reason why new composite or transformed entities cannot come into existence. The thing that causes a new entity need not be of the same kind as that entity; and in fact we know plenty of new things that once did not exist but do now: life, football, blogs.

So to make this argument work there would have to be some reason to think that consciousness was special in some way, a way that meant it could not arise out of unconsciousness. But that defies common sense, because consciousness coming out of unconsciousness is something we all experience every day when we wake up; and if it couldn’t happen, none of us would be here as conscious beings at all, because we couldn’t have been born – or at least, could never have become aware.

Bruntrup and Jaskolla mention arguments from Nagel and William James; Nagel’s, I think, rests on an implausible denial of emergentism; that is, he denies that a composite entity can have any interesting properties that were not present in the parts. The argument in William James is that evolution could not have conferred some radically new property and that therefore some ‘mind dust’ must have been present all the way back to the elementary particles that made the world.

I don’t find either contention at all appealing, so I may not be presenting them in their best light; the basic idea, I think, is that consciousness is just a different realm or domain which could not arise from the physical. Although individual consciousnesses may come and go, consciousness itself is constant and must be universal. Even if we go some way with this argument I’d still rather say that the concept of position does not apply to consciousness than say it must be everywhere.

The second major argument is one from intrinsic nature. We start by noticing that physics deals only with the properties of things, not with the ‘thing in itself’. If you accept that there is a ‘thing in itself’ apart from the collection of properties that give it its measurable characteristics, then you may be inclined to distinguish between its interior reality and its external properties. The claim then is that this interior reality is consciousness. The world is really made of little motes of awareness.

This claim is strangely unmotivated in my view. Why shouldn’t the interior reality just be the interior reality, with nothing more to be said about it? If it does have some other character it seems to me as likely to be cabbagey as conscious. Really it seems to me that only someone who was pretty desperately seeking consciousness would expect to find it naturally in the Ding an sich. The truth seems to be that since the interior reality of things is inaccessible to us, and has no impact on any of the things that are accessible, it’s a classic waste of time talking about it.

Aha, but there is one exception: our own interior reality is accessible to us, and that, it is claimed, is exactly the mysterious consciousness we seek. Now, moreover, you see why it makes sense to think that all examples of this interiority are conscious – ours is! The trouble is, our consciousness is clearly related to the functioning of our brain. If it were just the inherent inner property of that brain, or of our body, it would never go away, and unconsciousness would be impossible. How can panpsychists sleep at night? If panpsychism is true, even a dead brain has the kind of interior awareness that the theory ascribes to everything. In other words, my human consciousness is a quite different thing from the panpsychist consciousness everywhere; somehow in my brain the two sit alongside without troubling each other. My consciousness tells us nothing about the interiority of objects, nor vice versa; and my consciousness is as hard to explain as ever.

Maybe the new book will have surprising new arguments? I doubt it, but perhaps I’ll put it on my Christmas present list.