Is downward causation the answer? Does it explain how consciousness can be a real and important part of the world without being reducible to physics? Sean Carroll had a sensible discussion of the subject recently.

What does ‘down’ even mean here? The idea rests on the observation that the world operates on many distinct levels of description. Fluidity is not a property of individual molecules but something that ‘emerges’ when certain groups of them get together. Cells together make up organisms that in turn produce ecosystems. Often enough these levels of description deal with progressively larger or smaller entities, and we typically refer to the levels that deal with larger entities as higher, though we should be careful about assuming there is one coherent set of levels of description that fit into one another like Russian dolls.

Usually we think that reality lives on the lowest level, in physics. Somewhere down there is where the real motors of the universe are driving things. Let’s say this is the level of particles, though probably it is actually about some set of entities in quantum mechanics, string theory, or whatever set of ideas eventually proves to be correct. There’s something in this view because it’s down here at the bottom that the sums really work and give precise answers, while at higher levels of description the definitions are more approximate and things tend to be more messy and statistical.

Now consciousness is quite a high-level business. Particles make proteins that make cells that make brains that generate thoughts. So one reductionist point of view would be that really the truth is the story about particles: that’s where the course of events is really decided, and the mental experiences and decisions we think are going on in consciousness are delusions, or at best a kind of poetic approximation.

It’s not really true, however, that the entities dealt with at higher levels of description are not real. Fluidity is a perfectly real phenomenon, after all. For that matter the Olympics were real, and cannot be discussed in terms of elementary particles. What if our thoughts were real and also causally effective at lower levels of description? We find it easy to think that the motion of molecules ‘caused’ the motion of the football they compose, but what if it also worked the other way? Then consciousness could be real and effectual within the framework of a sufficiently flexible version of physics.

Carroll doesn’t think that really washes, and I think he’s right. It’s a mistake to think that relations between different levels of description are causal. It isn’t that my putting the beef and potatoes on the table caused lunch to be served; they’re the same thing described differently. Now perhaps we might allow ourselves a sense in which things cause themselves, but that would be a strange and unusual sense, quite different from the normal sense in which cause and effect by definition operate over time.

So real downward causality, no: if by talk of downward causality people only mean that real effectual mental events can co-exist with the particle story but on a different level of description, that point is sound but misleadingly described.

The thing that continues to worry me slightly is the question of why the world is so messily heterogeneous in its ontology – why it needs such a profusion of levels of description in order to discuss all the entities of interest. I suppose one possibility is that we’re just not looking at things correctly. When we look for grand unifying theories we tend to look to ever lower levels of description and to the conjectured origins of the world. Perhaps that’s the wrong approach and we should instead be looking for the unimaginable mental perspective that reconciles all levels of description.

Or, and I think this might be closer to it, the fact that there are more things in heaven and earth than are dreamt of in anyone’s philosophy is actually connected with the obscure reason for there being anything. As the world gets larger it gets, ipso facto, more complex and reduction and backward extrapolation get ever more hopeless. Perhaps that is in some sense just as well.

(The picture is actually a children’s puzzle from 1921 – any solutions? You need to know it is called ‘Illustrated Central Acrostic’)  


How far back in time do you recognise yourself? There may be long self-life and short self-life people; speculatively, the difference may even be genetic.

Some interesting videos here on the question of selves and persons (two words often used by different people to indicate different distinctions, so you can have a long talk at cross-purposes all too easily).

Too much content for me to summarise quickly, but I was particularly struck by Galen Strawson’s view of self-life (as it were). Human beings may live three score and ten years, but the unchanged self really only lasts a short while. Rigorously speaking he thinks it might only last a fraction of a second, but he believes that there are, as it were, different personality types here; people who have either a long or a short sense of identity over time. He is apparently near one end of the spectrum, not really identifying with the Galen Strawson who was here only half an hour ago. Myself, I think I’m towards the other end. When I look at photographs of my five-year-old self, I feel it’s me. There are many differences, of course, but I remember with special empathy what it was like to look out through those eyes.

Strawson thinks this is a genuine difference, not yet sufficiently studied by psychology; perhaps it even has a genetic basis. But he thinks short self-life and long self-life people can get along perfectly well; in fact the combination may make a strong partnership.

One other interesting point: Raymond Tallis thinks personhood is strongly social. On a desert island your personhood would gradually attenuate until you became more or less ‘Humean’ and absorbed in your environment and daily island tasks. It doesn’t sound altogether bad…

OUP Blog has a sort of preview by Bruntrup and Jaskolla of their forthcoming collection on panpsychism, due out in December, with a video of David Chalmers at the end: they sort of credit him with bringing panpsychist thought into the mainstream. I’m using ‘panpsychism’ here as a general term, by the way, covering any view that says consciousness is present in everything, though most advocates really mean that consciousness or experience is everywhere, not souls as the word originally implied.

I found the piece interesting because they put forward two basic arguments for panpsychism, both a little different from the desire for simplification which I’ve always thought was behind it – although it may come down to the same basic ideas in the end.

The first argument they suggest is that ‘nothing comes of nothing’; that consciousness could not have sprung out of nowhere, but must have been there all along in some form. In this bald form, it seems to me that the argument is virtually untenable. The original Scholastic argument that nothing comes of nothing was, I think, a cosmological argument. In that form it works. If there really were nothing, how could the Universe get started? Nothing happens without a cause, and if there were nothing, there could be no causes.  But within an existing Universe, there’s no particular reason why new composite or transformed entities cannot come into existence.  The thing that causes a new entity need not be of the same kind as that entity; and in fact we know plenty of new things that once did not exist but do now; life, football, blogs.

So to make this argument work there would have to be some reason to think that consciousness was special in some way, a way that meant it could not arise out of unconsciousness. But that defies common sense, because consciousness coming out of unconsciousness is something we all experience every day when we wake up; and if it couldn’t happen, none of us would be here as conscious beings at all, because we couldn’t have been born, or at least could never have become aware.

Bruntrup and Jaskolla mention arguments from Nagel and William James; Nagel’s, I think, rests on an implausible denial of emergentism; that is, he denies that a composite entity can have any interesting properties that were not present in the parts. The argument in William James is that evolution could not have conferred some radically new property and that therefore some ‘mind dust’ must have been present all the way back to the elementary particles that made the world.

I don’t find either contention at all appealing, so I may not be presenting them in their best light; the basic idea, I think, is that consciousness is just a different realm or domain which could not arise from the physical. Although individual consciousnesses may come and go, consciousness itself is constant and must be universal. Even if we go some way with this argument I’d still rather say that the concept of position does not apply to consciousness than say it must be everywhere.

The second major argument is one from intrinsic nature. We start by noticing that physics deals only with the properties of things, not with the ‘thing in itself’. If you accept that there is a ‘thing in itself’ apart from the collection of properties that give it its measurable characteristics, then you may be inclined to distinguish between its interior reality and its external properties. The claim then is that this interior reality is consciousness. The world is really made of little motes of awareness.

This claim is strangely unmotivated in my view. Why shouldn’t the interior reality just be the interior reality, with nothing more to be said about it? If it does have some other character it seems to me as likely to be cabbagey as conscious. Really it seems to me that only someone who was pretty desperately seeking consciousness would expect to find it naturally in the ding an sich.  The truth seems to be that since the interior reality of things is inaccessible to us, and has no impact on any of the things that are accessible, it’s a classic waste of time talking about it.

Aha, but there is one exception; our own interior reality is accessible to us, and that, it is claimed, is exactly the mysterious consciousness we seek. Now, moreover, you see why it makes sense to think that all examples of this interiority are conscious – ours is! The trouble is, our consciousness is clearly related to the functioning of our brain. If it were just the inherent inner property of that brain, or of our body, it would never go away, and unconsciousness would be impossible. How can panpsychists sleep at night? If panpsychism is true, even a dead brain has the kind of interior awareness that the theory ascribes to everything. In other words, my human consciousness is a quite different thing from the panpsychist consciousness everywhere; somehow in my brain the two sit alongside without troubling each other. My consciousness tells us nothing about the interiority of objects, nor vice versa: and my consciousness is as hard to explain as ever.

Maybe the new book will have surprising new arguments? I doubt it, but perhaps I’ll put it on my Christmas present list.

Some deep and heavy philosophy in another IAI video: After the End of Truth. This one got a substantial and in places rather odd discussion on Reddit recently.


Hilary Lawson starts with the premise that Wittgenstein and others have sufficiently demonstrated that there is no way of establishing objective truth; but we can’t, he says, rest there. He thinks that we can get a practical way forward if we note that the world is open but we can close it in various ways and some of them work better for us than others.

Perhaps an analogy might be (as it happens) the ideas of truth and falsity themselves in formal logic. Classical logic assigns only two values to propositions; true or not true. People often feel this is unintuitive. We can certainly devise formal logics with more than two values – we could add one for ‘undetermined’, say. This is not a matter of what’s right or wrong; we can carve up our system of logic any way we like. The thing is, two-valued logic just gives us a lot more results than its rivals. One important reason is that if we can exclude a premise, in a two-valued system its negation must be true; that move doesn’t work if there are three or more values. So it’s not that two-valued logic is true and the others are false, it’s just that doing it the two-valued way gets us more. Perhaps something similar might be true of the different ways we might carve up the world.
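The point about excluded premises can be made concrete in a few lines of Python. This is just an illustrative sketch, not a real logic engine; the names are mine, and ‘undetermined’ stands in for whatever third value the logic adds:

```python
# A minimal sketch of why excluding a premise licenses its negation in
# two-valued logic but not in a three-valued one.

TWO_VALUED = {True, False}
THREE_VALUED = {True, False, "undetermined"}

def remaining(values, excluded):
    """Candidate truth values left after one has been ruled out."""
    return values - {excluded}

# Two values: ruling out True leaves exactly one option, so p is settled.
print(remaining(TWO_VALUED, True))      # {False}

# Three values: ruling out True still leaves two options; nothing follows.
print(remaining(THREE_VALUED, True))
```

Ruling a value out does all the work only when there are exactly two candidates to start with, which is one way of seeing where the extra results come from.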

John Searle, by video (and briefly doing that thing old people seem mysteriously prone to; sinking to the bottom of the frame as though peering over a wall) goes for common sense, as ever, albeit cloaked in technical terms. He distinguishes between epistemic and ontological senses of objectivity. Our views are unavoidably ontologically subjective to some degree (ie, different people have different views:  ‘perspectivalism’ is true); but that does not at all entail that epistemic objectivity is unattainable; indeed, if we didn’t assume some objective truths we couldn’t get started on discussion. That’s a robust refutation of the view that perspectivalism implies no objective truth, though I’m not sure that’s quite the case Lawson was making.  Perhaps we could argue that after all, there are such things as working assumptions; to say ‘let’s treat this as true and see where we get’ does not necessarily require belief in objectively determinable truth.

Hannah Dawson seems to argue emphatically on both sides; no two members of a class gave the same account of an assembly (though I bet they could all agree that no pink elephant walked in half-way through). It seems the idea of objective truth sits uneasily in history;  but no-one can deny the objective factuality of the Holocaust; sometimes, after all, reality does push back. This may be an expression of the curious point that it often seems easier to say that nothing is objectively true than it is to say that nothing is objectively false, illogical as that is.

Dawson’s basic argument looks to me a bit like an example of ‘panic scepticism’; no perfect objective account of an historical event is possible, therefore nothing at all is objectively true. I think we get this kind of thing in philosophy of mind too; people seem to argue that our senses mislead us sometimes, therefore we have no knowledge of external reality (there are better arguments for similar conclusions, of course). Maybe after all we can find ways to make do with imperfect knowledge.

Interesting piece here reviewing the way some modern machine learning systems are unfathomable. This is because they learn how to do what they do, rather than being set up with a program, so there is no reassuring algorithm – no set of instructions that tells us how they work. In fact the way they make their decisions may be impossible to grasp properly even if we know all the details, because it just exceeds in brute complexity what we can ever get our minds around.

This is not really new. Neural nets that learn for themselves have always been a bit inscrutable. One problem with this is brittleness: when the system fails it may not fail in ways that are safe and manageable, but disastrously. This old problem is becoming more important mainly because new approaches to deep machine learning are doing so well; all of a sudden we seem to be getting a rush of new systems that really work effectively at quite complex real world tasks. The problems are no longer academic.

Brittle behaviour may come about when the system learns its task from a limited data set. It does not understand the data and is simply good at picking out correlations, so sometimes it may pick out features of the original data set that work well within that set, and perhaps even work well on quite a lot of new real world data, but don’t really capture what’s important. The program is meant to check whether a station platform is dangerously full of people, for example; in the set of pictures provided it finds that all it needs to do is examine the white platform area and check how dark it is. The more people there are, the darker it looks. This turns out to work quite well in real life, too. Then summer comes and people start wearing light coloured clothes…
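A toy version of the platform story makes the failure mode concrete. Everything here is invented – the numbers, the threshold, the rule itself – the point is only that a proxy learned from one distribution can fit the training data perfectly and then fail when the world shifts:

```python
# Invented toy example: a learner that uses average platform darkness as a
# proxy for crowding. It fits the winter data, then fails in summer.

def crowded_by_darkness(mean_brightness, threshold=0.5):
    """Spurious rule learned from winter data: darker platform => more people."""
    return mean_brightness < threshold

# Winter training data: (mean platform brightness, actually crowded?)
winter = [(0.9, False), (0.8, False), (0.4, True), (0.3, True)]
print(all(crowded_by_darkness(b) == y for b, y in winter))   # True

# Summer: light-coloured clothes make a crowded platform bright again.
summer = [(0.85, True), (0.9, True)]
print(all(crowded_by_darkness(b) == y for b, y in summer))   # False
```

Nothing in the training set could distinguish ‘darkness’ from ‘crowding’, so the cheap proxy and the real target were indistinguishable until the distribution changed.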

There are ways to cope with this. We could build in various safeguards. We could make sure we use big and realistic datasets for training or perhaps allow learning to continue in real world contexts. Or we could just decide never to use a system that doesn’t have an algorithm we can examine; but there would be a price to pay in terms of efficiency for that; it might even be that we would have to give up on certain things that can only be effectively automated with relatively sophisticated deep learning methods. We’re told that the EU contemplates a law embodying a right to explanations of how software works. To philosophers I think this must sound like a marvellous new gravy train, as there will obviously be a need to adjudicate what counts as an adequate explanation, a notoriously problematic issue. (I am available as a witness in any litigation for reasonable hourly fees.)

The article points out that the incomprehensibility of neural network-based systems is in some ways really quite like the incomprehensibility of the good old human brain. Why wouldn’t it be? After all, neural nets were based on the brain. Now it’s true that even in the beginning they were very rough approximations of real neurology and in practical modern systems the neural qualities of neural nets are little more than a polite fiction. Still, perhaps there are properties shared by all learning systems?

One reason deep learning may run into problems is the difficulty AI always has in dealing with relevance.  The ability to spot relevance no doubt helps the human brain check whether it is learning about the right kind of thing, but it has always been difficult to work out quite how our brains do it, and this might mean an essential element is missing from AI approaches.

It is tempting, though, to think that this is in part another manifestation of the fact that AI systems get trained on limited data sets. Maybe the radical answer is to stop feeding them tailored data sets and let our robots live in the real world; in other words, if we want reliable deep learning perhaps our robots have to roam free and replicate the wider human experience of the world at large? To date the project of creating human-style cognition has been in some sense motivated by mere curiosity (and yes, by the feeling that it would be pretty cool to have a robot pal); are we seeing here the outline of an argument that human-style AGI might actually be the answer to genuine engineering problems?

What about those explanations? Instead of retaining philosophers and lawyers to argue the case, could we think about building in a new module to our systems, one that keeps overall track of the AI and can report the broad currents of activity within it? It wouldn’t be perfect but it might give us broad clues as to why the system was making the decisions it was, and even allow us to delicately feed in some guidance. Doesn’t such a module start to sound like, well, consciousness? Could it be that we are beginning to see the outline of the rationales behind some of God’s design choices?

Can we solve the Hard Problem with scanners? This article by Berit Brogaard and Dimitria E. Gatzia argues that recent advances in neuroimaging techniques, combined with the architectonic approach advocated by Fingelkurts and Fingelkurts, open the way to real advances.

But surely it’s impossible for physical techniques to shed any light on the Hard Problem? The whole point is that it is over and above any account which could be given by physics. In the Zombie Twin thought experiment I have a physically identical twin who has no subjective experience. His brain handles information just the way mine does, but when he registers the colour red, it’s just data; he doesn’t experience real redness. If you think that is conceivable, then you believe in qualia, the subjective extra part of experience. But how could qualia be explained by neuroimaging, when my zombie twin’s scans are exactly the same as mine, yet he has no qualia at all?

This, I think, is where the architectonics come in. The foundational axiom of the approach, as I understand it, is that the functional structure of phenomenal experience corresponds to dynamic structure within brain activity; the operational architectonics provide the bridge. (I call it an axiom, but I think the Fingelkurts twins would say that empirical research already provides support for a nested hierarchical structure which bridges the explanatory gap. They seem to take the view that operational architectonics uses a structured electrical field, which on the one hand links their view with the theories of Johnjoe McFadden and Sue Pockett, while on the other it makes me wonder whether advances in neuroimaging are relevant if the exciting stuff is happening outside the neurons.) It follows that investigating dynamic activity structures in the brain can tell us about the structure of phenomenal, subjective experience. That seems reasonable. After all, we might argue, qualia may be mysterious, but we know they are related to physical events; the experience of redness goes with the existence of red things in the physical world (with due allowance for complications). Why can’t we assume that subjective experience also goes with certain structured kinds of brain activity?

Two points must be made immediately. The first is that the hunt for Neural Correlates of Consciousness (NCCs) is hardly new. The advocates of architectonics, however, say that approaches along these lines fail because correlation is simply too weak a connection. Noticing that experience x and activation in region y correlate doesn’t really take us anywhere. They aim for something much harder-edged and more specific, with structured features of brain activity matched directly back to structures in an analysis of phenomenal experience (some of the papers use the framework of Revonsuo, though architectonics in general is not committed to any specific approach).

The second point is that this is not a sceptical or reductive project. I think many sceptics about qualia would be more than happy with the idea of exploring subjective experience in relation to brain structure; but someone like Dan Dennett would look to the brain structures to fully explain all the features of experience; to explain them away, in fact, so that it was clear that brain activity was in the end all we were dealing with and we could stop talking about ‘nonsensical’ qualia altogether.

By contrast the architectonic approach allows philosophers to retain the ultimate mystery; it just seeks to push the boundaries of science a bit further out into the territory of subjective experience. Perhaps Paul Churchland’s interesting paper about chimerical colours which we discussed a while ago provides a comparable case if not strictly an example.

Churchland points out that we can find the colours we experience mapped out in the neuronal structures of the brain; but interestingly the colour space defined in the brain is slightly more comprehensive than the one we actually encounter in real life. Our brains have reserved spaces for colours that do not exist, as it were. However, using a technique he describes we can experience these ‘chimerical’ colours, such as ‘dark yellow’ in the form of an afterglow. So here you experience for the first time a dark yellow quale, as predicted and delivered by neurology. Churchland would argue this shows rather convincingly that position in your brain’s colour space is essentially all there is to the subjective experience of colour. I think a follower of architectonics would commend the research for elucidating structural features of experience but hold that there was still a residual mystery about what dark yellow qualia really are in themselves, one that can only be addressed by philosophy.

It all seems like a clever and promising take on the subject to me; I do have two reservations. The first is a pessimistic doubt about whether it will ever really be possible to deliver much. The sort of finding reported by Churchland is the exception rather than the rule. Vision and hearing offer some unusual scope because they both depend on wave media which impose certain interesting structural qualities; the orderly spectrum and musical scale. Imaginatively I find it hard to think of other aspects of phenomenal experience that seem to be good candidates for structural analysis. I could be radically wrong about this and I hope I am.

The other thing is, I still find it a bit hard to get past my zombie twin; if phenomenal experience matches up with the structure of brain activity perfectly, how come he is without qualia? The sceptics and the qualophiles both have pretty clear answers; either there just are no qualia anyway or they are outside the scope of physics. Now if we take the architectonic view, we could argue that just as the presence of red objects is not sufficient for there to be red qualia, so perhaps the existence of the right brain patterns isn’t sufficient either; though the red objects and the relevant brain activity do a lot to explain the experience. But if the right brain activity isn’t sufficient, what’s the missing ingredient? It feels (I put it no higher) as if there ought to be an explanation; but perhaps that’s just where we leave the job for the philosophers?

‘…stupid as a doorknob…’ Just part of Luboš Motl’s vigorous attack on Scott Aaronson’s critique of IIT, the Integrated Information Theory of Giulio Tononi.

To begin at the beginning. IIT says that consciousness arises from integrated information, and proposes a mathematical approach to quantifying the level of integrated information in a system, a quantity it names Phi (actually there are several variant ways to define Phi that differ in various details, which is perhaps unfortunate). Aaronson and Motl both describe this idea as a worthy effort but both have various reservations about it – though Aaronson thinks the problems are fatal while Motl thinks IIT offers a promising direction for further work.
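The real Phi calculus is considerably more elaborate than anything that fits in a blog post, but a crude stand-in conveys the flavour of ‘quantifying integration’: score a two-node system by how much information crosses the cut between the nodes, so a coupled system scores high and a disconnected one scores zero. All the names below are invented and this is emphatically not Tononi’s actual measure; it is a sketch of the general shape of the idea:

```python
# Crude Phi-flavoured toy: how much does each node's state tell you about
# the other node's next state? Not Tononi's measure; an illustration only.
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """Mutual information (bits) between x and y over equally weighted pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def cross_influence(step):
    """Toy integration score for a two-node system: information each node's
    current state carries about the other node's next state, minimised over
    the two directions of the (only) bipartition."""
    states = list(product([0, 1], repeat=2))
    a_to_b = mutual_information([(a, step(a, b)[1]) for a, b in states])
    b_to_a = mutual_information([(b, step(a, b)[0]) for a, b in states])
    return min(a_to_b, b_to_a)

def swap(a, b):   # each node copies the other: integrated
    return (b, a)

def hold(a, b):   # each node ignores the other: disconnected
    return (a, b)

print(cross_influence(swap))  # 1.0
print(cross_influence(hold))  # 0.0
```

The coupled system scores a full bit across the cut while the disconnected one scores nothing, which is the intuition Phi is meant to generalise to systems of arbitrary size.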

Both pieces contain a lot of interesting side discussion, including Aaronson’s speculation that approximating Phi for a real brain might be an NP-hard problem. This is the digression that prompted the doorknob comment: so what if it were NP-hard, demands Motl; you think nature is barred from containing NP-hard problems?
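Whatever the formal complexity class, the brute combinatorics are daunting on their own. This back-of-envelope count (mine, not Aaronson’s) just tallies the bipartitions an exhaustive evaluation would have to consider; some variants of Phi minimise over even larger families of partitions, so this is a lower bound on the bookkeeping:

```python
# Number of ways to split an n-element system into two non-empty parts:
# each element goes in part A or part B (2**n ways), halve for symmetry,
# drop the empty split: 2**(n-1) - 1 bipartitions.

def bipartitions(n):
    """Count of bipartitions of n elements into two non-empty parts."""
    return 2 ** (n - 1) - 1

for n in (10, 20, 50, 302):   # 302 is roughly the C. elegans neuron count
    print(n, bipartitions(n))
```

Even the 302-neuron worm gives a number of cuts with about ninety digits, which is why practical work on Phi leans on approximations.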

The real crux as I understand it is Aaronson’s argument that we can give examples of systems with high scores for Phi that we know intuitively could not be conscious. Eric Schwitzgebel has given a somewhat similar argument but cast in more approachable terms; Aaronson uses a Vandermonde matrix for his example of a high-phi but intuitively non-conscious entity, whereas Schwitzgebel uses the United States.

Motl takes exception to Aaronson’s use of intuition here. How does he know that his matrix lacks consciousness? If Aaronson’s intuition is the test, what’s the point of having a theory? The whole point of a theory is to improve on and correct our intuitive judgements, isn’t it? If we’re going to fall back on our intuitions argument is pointless.

I think appeals to intuition are rare in physics, where it is probably natural to regard them as illegitimate, but they’re not that unusual in philosophy, especially in ethics. You could argue that G.E. Moore’s approach was essentially to give up on ethical theory and rely on intuition instead. Often intuition limits what we regard as acceptable theorising, but our theories can also ‘tutor’ and change our intuitions. My impression is that real world beliefs about death, for example, have changed substantially in recent decades under the influence of utilitarian reasoning; we’re now much less likely to think that death is simply forbidden and more likely to accept calculations about the value of lives. We still, however, rule out as unintuitive (‘just obviously wrong’) such utilitarian conclusions as the propriety of sometimes punishing the innocent.

There’s an interesting question as to whether there actually is, in itself, such a thing as intuition. Myself I’d suggest the word covers any appealing pre-rational thought; we use it in several ways. One is indeed to test our conclusions where no other means is available; it’s interesting that Motl actually remarks that the absence of a reliable objective test of consciousness is one of IIT’s problems; he obviously does not accept that intuition could be a fall-back, so he is presumably left with the gap (which must surely affect all theories of consciousness). Philosophers also use an appeal to intuition to help cut to the chase, by implicitly invoking shared axioms and assumptions; and often enough ‘thought experiments’ which are not really experiments at all but in the Dennettian phrase ‘intuition pumps’ are used for persuasive effect; they’re not proofs but they may help to get people to agree.

Now as a matter of fact I think in Aaronson’s case we can actually supply a partial argument to replace pure intuition. In this discussion we are mainly talking about subjective consciousness, the ‘something it is like’ to experience things. But I think many people would argue that Hard Problem consciousness requires the Easy Problem kind to be in place first as a basis. Subjective experience, we might argue, requires the less mysterious apparatus of normal sensory or cognitive experience; and Aaronson (or Schwitzgebel) could argue that their example structures definitely don’t have the sort of structure needed for that, a conclusion we can reach through functional argument without the need for intuition.

Not everybody would agree, though; some, especially those who lean towards panpsychism and related theories of ‘consciousness everywhere’, might see nothing wrong with the idea of subjective consciousness without the ‘mechanical’ kind. The standard philosophical zombie has Easy Problem consciousness without qualia; these people would accept an inverted zombie who has qualia with no brain function. It seems a bit odd to me to pair such a view with IIT (if you don’t think functional properties are required I’d have thought you would think that integrating information was also dispensable) but there’s nothing strictly illogical about it. Perhaps the dispute over intuition really masks a different disagreement, over the plausibility of such inverted zombies, obviously impossible in Aaronson’s eyes, but potentially viable in Motl’s?

Motl goes on to offer what I think is a rather good objection to IIT as it stands; ie that it seems to award consciousness to ‘frozen’ or static structures if they have a high enough Phi score. He thinks it’s necessary to reformulate the idea to capture the point that consciousness is a process. I agree – but how does Motl know consciousness requires a process? Could it be that it’s just…  intuitively obvious?

What is experience? An interesting discussion from the Institute of Art and Ideas, featuring David Chalmers, Susana Martinez-Conde and Peter Hacker.

Chalmers seems to content himself with restating the Hard Problem; that is, that there seems to be something in experience which is mysteriously over and above the account given by physics. He seems rather nervous, but I think it’s just the slight awkwardness typical of a philosopher being asked slightly left-field questions.

Martinez-Conde tells us we never really experience reality, only a neural simulation of it. I think it’s a mistake to assume that because experience seems to be mediated by our sensory systems, and sometimes misleads us, it never shows us external reality. That’s akin to thinking that because some books are fiction no book really addresses reality.

Hacker smoothly dismisses the whole business as a matter of linguistic and conceptual confusion. Physics explains its own domain, but we shouldn’t expect it to deal with experience, any more than we expect it to explain love, or the football league. He is allowed to make a clean getaway with this neat proposition, although we know, for example, that physical electrodes in the brain can generate and control experiences; and we know that various illusions and features of experience have very good physiological explanations. Hacker makes it seem that there is a whole range of domains, each with its own sealed-off world of explanation; but surely love, football and the others are just sub-domains of the mental realm? Though we don’t yet know how this works, there is plenty of evidence that the mental domain is at least causally dependent on physics, if not reducible to it. That’s what the discussion is all about. We can imagine Hacker a few centuries ago assuring us loftily that the idea of applying ordinary physics to celestial mechanics was a naive category error. If only Galileo had read up on his Oxford philosophy he would have realised that the attempt to explain the motion of the planets in terms of physical forces was doomed to end in unresolvable linguistic bewitchment!

I plan to feature more of these discussion videos as a bit of a supplement to the usual menu here, by the way.

We’ll never understand consciousness, says Edward Witten. Ashutosh Jogalekar’s post here features a video of the eminent physicist talking about fundamentals; the bit about consciousness starts around 1:10 if you’re not interested in string theory and cosmology. John Horgan has also weighed in with some comments; Witten’s view is congenial to him because of his belief that science may be approaching an end state in which many big issues are basically settled while others remain permanently mysterious. Witten himself thinks we might possibly get a “final theory” of physics (maybe even a form of string theory), but guesses that it would be of a tricky kind, so that understanding and exploring the theory would itself be an endless project, rather the way number theory, which looks like a simple subject at first glance, proves to be capable of endless further research.

Witten, in response to a slightly weird question from the interviewer, declines to define consciousness, saying he prefers to leave it undefined like one of the undefined terms set out at the beginning of a maths book. He feels confident that the workings of the mind will be greatly clarified by ongoing research so that we will come to understand much better how the mechanisms operate. But why these processes are accompanied by something like consciousness seems likely to remain a mystery; no extension of physics that he can imagine seems likely to do the job, including the kind of new quantum mechanics that Roger Penrose believes is needed.

Witten is merely recording his intuitions, so we shouldn’t try to represent him as committed to any strong theoretical position; but his words clearly suggest that he is an optimist on the so-called Easy Problem and a pessimist on the Hard one. The problem he thinks may be unsolvable is the one about why there is “something it is like” to have experiences; what it is that seeing a red rose has over and above the acquisition of mere data.

If so, I think his incredulity joins a long tradition of those who feel intuitively that that kind of consciousness just is radically different from anything explained or explainable by physics. Horgan mentions the Mysterians, notably Colin McGinn, who holds that our brain just isn’t adapted to understanding how subjective experience and the physical world can be reconciled; but we could also invoke Brentano’s contention that mental intentionality is just utterly unlike any physical phenomenon; and even trace the same intuition back to Leibniz’s famous analogy of the mill: no matter what wheels and levers you put in your machine, there’s never going to be anything that could explain a perception (particularly telling given Leibniz’s enthusiasm for calculating machines and his belief that one day thinkers could use them to resolve complex disputes). Indeed, couldn’t we argue that contemporary consciousness sceptics like Dennett and the Churchlands also see an unbridgeable gap between physics and subjective, qualia-having consciousness? The difference is simply that in their eyes this makes that kind of consciousness nonsense, not a mystery.

We have to be a bit wary of trusting our intuitions. The idea that subjective consciousness arises when we’ve got enough neurons firing may sound like the idea that wine comes about when we’ve added enough water to the jar; but the idea that enough ones and zeroes in data registers could ever give rise to a decent game of chess looks pretty strange too.

As those who’ve read earlier posts may know, I think the missing ingredient is simply reality. The extra thing about consciousness that the theory of physics fails to include is just the reality of the experience, the one thing a theory can never include. Of course, the nature of reality is itself a considerable mystery, it just isn’t the one people have thought they were talking about. If I’m right, then Witten’s doubts are well-founded but less worrying than they may seem. If some future genius succeeds in generating an artificial brain with human-style mental functions, then by looking at its structure we’ll only ever see solutions to the Easy Problem, just as we may do in part when looking at normal biological brains. Once we switch on the artificial brain and it starts doing real things, then experience will happen.

Free solo style climbers need their heads examined. That seems to be the premise of the investigation reported here. Alex Honnold does amazingly scary things in his solo climbs, all without ropes or any kind of effective protection. Just watching, or just looking at pictures, is enough to make most of us shudder; a neurobiologist came to the conclusion that Honnold’s amygdala wasn’t working.

Why would he think that? The amygdala, or amygdalae, are two small organs within the brain that are generally considered to have a role in producing fear and aversion. A friend of mine once suggested they could be renamed after the moons of Mars as Phobos and Deimos – ‘Fear’ and ‘Loathing’ in Greek. In fact that wouldn’t be at all accurate, not least because the left amygdala seems to produce positive emotional reactions as well as negative ones. The broad initial analysis of Honnold’s behaviour seems to have been that his rational cortex was getting him into perilous situations because his amygdala was failing to wave the red flag. In some ways that seems odd: I think my rational, future-planning cortex would keep me the hell away from anything like the cliff faces Honnold climbs, while it might be the emotional thrill-enjoying parts of my brain that impelled me towards them.

A scan revealed that Honnold’s amygdalae were both present and correct, without any signs of damage; however, they didn’t seem to respond to various scary or unpleasant pictures in the way a normal person’s would. This knocks out one strong version of the theory. If there had been visible lesions in Honnold’s amygdalae, there would have been strong reason to suspect that his behaviour stemmed from that damage; but we knew already that he isn’t as scared as the rest of us, so finding that his amygdalae react less than most merely gives us another version of the finding that mental differences are associated with brain differences and vice versa. We sort of knew that; if that’s all we’ve found out we’re sailing dangerously close to the sea of neurobollocks and scannamania.

It is possible to do without amygdalae altogether. SM, a patient reported on by Damasio and others, lost both amygdalae as a result of Urbach-Wiethe disease. She did not take up free solo style climbing or other dangerous sports, but she shows a distinct lack of fearful and aversive reactions to strange people and other triggers of fear and distrust. She has suffered a number of violent encounters, perhaps partly because that lack of fear allowed her, for example, to walk through dubious parks at night; but it may also arguably have got her out of some dangerous situations, through her panic-free, Spock-like calm and non-hostile responses. It seems she lives in an area where violent crime is common in any case, and she has succeeded in bringing up three children independently.

It could be that amygdalae function somewhat differently in men and women, which might explain why Honnold’s supposed problem results in dangerous activity while SM’s mainly leads her to hug and trust strangers. There are known differences in the pattern of development: female amygdalae develop fully earlier, while male ones go on growing longer and end up bigger. Those differences might simply reflect general differences in growth pattern, though there is also some evidence of different patterns of activation; it’s possible, for example, that the activation of female amygdalae tends to promote thought while the male equivalent promotes action. All the usual caveats apply and great caution is in order. Let’s also remember that SM and Honnold are both one-off cases; that SM suffered damage to other parts of her brain – and that Honnold says he does feel fear, and that his amygdalae appear to be perfectly normal.

Are they, though? The research showed an almost total lack of response to pictures of terrible injuries and other things that would normally be expected to evoke a strong reaction from the amygdala. So perhaps there is something abnormal going on after all? Maybe there is damage too subtle to detect? Or maybe something is suppressing the amygdala?

The identification and handling of threats by the brain is actually a complicated business. Many quite low-level systems as well as highly sophisticated ones can make a contribution (a sudden loud growl can cause a wave of fear; so can a few quiet words from a doctor). The role of the amygdala seems to be as much to do with memory as fear; it pays attention to things that we have found are associated with really bad (or sometimes good) experiences and helps direct our attention to the right things, reminding us to look at people’s eyes when we want to assess whether they are frightened, for example. The interplay may be very complex, but even on a pretty crude interpretation there might be conscious processes that sometimes shut the amygdala down:

Visual:  furry, claws, animate, ursine: yup, over 99% positive that’s a bear. Hey, amygdala, big animal for you?

Amygdala: OMFG run for our life!

Cortex: guys, this is a zoo, there are bars – Visual, confirm stout bars – OK. Amygdala, STFU.

It doesn’t always work like that, of course. Cortex knows that we can happily walk along a narrow plank no wider than the terrifying Thank God Ledge if it is a few centimetres off the ground, but saying so repeatedly will not stop amygdala sounding the alarm.

Ultimately it may be that Honnold’s different behaviour and different amygdala activation are simply two facets of his different personality.