OUP Blog has a sort of preview by Bruntrup and Jaskolla of their forthcoming collection on panpsychism, due out in December, with a video of David Chalmers at the end: they sort of credit him with bringing panpsychist thought into the mainstream. I’m using ‘panpsychism’ here as a general term, by the way, covering any view that says consciousness is present in everything, though most advocates really mean that consciousness or experience is everywhere, not souls as the word originally implied.

I found the piece interesting because they put forward two basic arguments for panpsychism, both a little different from the desire for simplification which I’ve always thought was behind it – although it may come down to the same basic ideas in the end.

The first argument they suggest is that ‘nothing comes of nothing’; that consciousness could not have sprung out of nowhere, but must have been there all along in some form. In this bald form, it seems to me that the argument is virtually untenable. The original Scholastic argument that nothing comes of nothing was, I think, a cosmological argument. In that form it works. If there really were nothing, how could the Universe get started? Nothing happens without a cause, and if there were nothing, there could be no causes. But within an existing Universe, there’s no particular reason why new composite or transformed entities cannot come into existence. The thing that causes a new entity need not be of the same kind as that entity; and in fact we know plenty of new things that once did not exist but do now: life, football, blogs.

So to make this argument work there would have to be some reason to think that consciousness was special in some way, a way that meant it could not arise out of unconsciousness. But that defies common sense, because consciousness coming out of unconsciousness is something we all experience every day when we wake up; and if it couldn’t happen, none of us would be here as conscious beings at all, because we couldn’t have been born; or at least, could never have become aware.

Bruntrup and Jaskolla mention arguments from Nagel and William James; Nagel’s, I think, rests on an implausible denial of emergentism; that is, he denies that a composite entity can have any interesting properties that were not present in the parts. The argument in William James is that evolution could not have conferred some radically new property and that therefore some ‘mind dust’ must have been present all the way back to the elementary particles that made the world.

I don’t find either contention at all appealing, so I may not be presenting them in their best light; the basic idea, I think, is that consciousness is just a different realm or domain which could not arise from the physical. Although individual consciousnesses may come and go, consciousness itself is constant and must be universal. Even if we go some way with this argument, I’d still rather say that the concept of position does not apply to consciousness than say it must be everywhere.

The second major argument is one from intrinsic nature. We start by noticing that physics deals only with the properties of things, not with the ‘thing in itself’. If you accept that there is a ‘thing in itself’ apart from the collection of properties that give it its measurable characteristics, then you may be inclined to distinguish between its interior reality and its external properties. The claim then is that this interior reality is consciousness. The world is really made of little motes of awareness.

This claim is strangely unmotivated in my view. Why shouldn’t the interior reality just be the interior reality, with nothing more to be said about it? If it does have some other character it seems to me as likely to be cabbagey as conscious. Really it seems to me that only someone who was pretty desperately seeking consciousness would expect to find it naturally in the Ding an sich. The truth seems to be that since the interior reality of things is inaccessible to us, and has no impact on any of the things that are accessible, it’s a classic waste of time talking about it.

Aha, but there is one exception; our own interior reality is accessible to us, and that, it is claimed, is exactly the mysterious consciousness we seek. Now, moreover, you see why it makes sense to think that all examples of this interiority are conscious – ours is! The trouble is, our consciousness is clearly related to the functioning of our brain. If it were just the inherent inner property of that brain, or of our body, it would never go away, and unconsciousness would be impossible. How can panpsychists sleep at night? If panpsychism is true, even a dead brain has the kind of interior awareness that the theory ascribes to everything. In other words, my human consciousness is a quite different thing from the panpsychist consciousness everywhere; somehow in my brain the two sit alongside without troubling each other. My consciousness tells us nothing about the interiority of objects, nor vice versa: and my consciousness is as hard to explain as ever.

Maybe the new book will have surprising new arguments? I doubt it, but perhaps I’ll put it on my Christmas present list.

Some deep and heavy philosophy in another IAI video: After the End of Truth. This one got a substantial and in places rather odd discussion on Reddit recently.

Hilary Lawson starts with the premise that Wittgenstein and others have sufficiently demonstrated that there is no way of establishing objective truth; but we can’t, he says, rest there. He thinks that we can get a practical way forward if we note that the world is open but we can close it in various ways and some of them work better for us than others.

Perhaps an analogy might be (as it happens) the ideas of truth and falsity themselves in formal logic. Classical logic assigns only two values to propositions: true or false. People often feel this is unintuitive. We can certainly devise formal logics with more than two values – we could add one for ‘undetermined’, say. This is not a matter of what’s right or wrong; we can carve up our system of logic any way we like. The thing is, two-valued logic just gives us a lot more results than its rivals. One important reason is that if we can exclude a premise, in a two-valued system its negation must be true; that move doesn’t work if there are three or more values. So it’s not that two-valued logic is true and the others are false, it’s just that doing it the two-valued way gets us more. Perhaps something similar might be true of the different ways we might carve up the world.
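The extra leverage of two values can be shown in a few lines of code. This is my own toy illustration, not anything from the video: a Kleene-style negation over a third ‘undetermined’ value, showing that excluding a premise no longer forces its negation to be true.

```python
def kleene_not(v):
    """Negation that also handles an 'undetermined' third value
    (strong Kleene style: not-undetermined is still undetermined)."""
    if v == "undetermined":
        return "undetermined"
    return not v

two_valued = [True, False]
three_valued = [True, False, "undetermined"]

# Two-valued: excluding P leaves only False, so not-P must be True.
forced = [kleene_not(p) for p in two_valued if p is not True]
print(forced)     # [True]

# Three-valued: excluding P = True no longer forces not-P to be True.
leftover = [kleene_not(p) for p in three_valued if p is not True]
print(leftover)   # [True, 'undetermined']
```

The two-valued system lets us conclude not-P the moment P is ruled out; the three-valued one leaves the question open, which is exactly the sense in which it ‘gets us less’.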

John Searle, by video (and briefly doing that thing old people seem mysteriously prone to; sinking to the bottom of the frame as though peering over a wall) goes for common sense, as ever, albeit cloaked in technical terms. He distinguishes between epistemic and ontological senses of objectivity. Our views are unavoidably ontologically subjective to some degree (ie, different people have different views:  ‘perspectivalism’ is true); but that does not at all entail that epistemic objectivity is unattainable; indeed, if we didn’t assume some objective truths we couldn’t get started on discussion. That’s a robust refutation of the view that perspectivalism implies no objective truth, though I’m not sure that’s quite the case Lawson was making.  Perhaps we could argue that after all, there are such things as working assumptions; to say ‘let’s treat this as true and see where we get’ does not necessarily require belief in objectively determinable truth.

Hannah Dawson seems to argue emphatically on both sides; no two members of a class gave the same account of an assembly (though I bet they could all agree that no pink elephant walked in half-way through). It seems the idea of objective truth sits uneasily in history;  but no-one can deny the objective factuality of the Holocaust; sometimes, after all, reality does push back. This may be an expression of the curious point that it often seems easier to say that nothing is objectively true than it is to say that nothing is objectively false, illogical as that is.

Dawson’s basic argument looks to me a bit like an example of ‘panic scepticism’; no perfect objective account of an historical event is possible, therefore nothing at all is objectively true. I think we get this kind of thing in philosophy of mind too; people seem to argue that our senses mislead us sometimes, therefore we have no knowledge of external reality (there are better arguments for similar conclusions, of course). Maybe after all we can find ways to make do with imperfect knowledge.

Interesting piece here reviewing the way some modern machine learning systems are unfathomable. This is because they learn how to do what they do, rather than being set up with a program, so there is no reassuring algorithm – no set of instructions that tells us how they work. In fact the way they make their decisions may be impossible to grasp properly even if we know all the details, because it just exceeds in brute complexity what we can ever get our minds around.

This is not really new. Neural nets that learn for themselves have always been a bit inscrutable. One problem with this is brittleness: when the system fails it may not fail in ways that are safe and manageable, but disastrously. This old problem is becoming more important mainly because new approaches to deep machine learning are doing so well; all of a sudden we seem to be getting a rush of new systems that really work effectively at quite complex real world tasks. The problems are no longer academic.

Brittle behaviour may come about when the system learns its task from a limited data set. It does not understand the data and is simply good at picking out correlations, so sometimes it may pick out features of the original data set that work well within that set, and perhaps even work well on quite a lot of new real world data, but don’t really capture what’s important. The program is meant to check whether a station platform is dangerously full of people, for example; in the set of pictures provided it finds that all it needs to do is examine the white platform area and check how dark it is. The more people there are, the darker it looks. This turns out to work quite well in real life, too. Then summer comes and people start wearing light coloured clothes…
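The platform example can be sketched in code. Everything here is invented for illustration – the data, the threshold and the ‘learned’ rule are hypothetical stand-ins for whatever correlation a real system might latch onto:

```python
def crowded_by_darkness(pixels, threshold=0.5):
    """Hypothetical learned rule: declare the platform 'crowded' if the
    image is dark on average (dark winter coats against a pale platform).
    This is a spurious proxy, not a real measure of crowding."""
    return sum(pixels) / len(pixels) < threshold

# Winter training data (brightness values, 0 = black, 1 = white):
# more people means more dark coats, so the rule looks excellent.
winter_empty   = [0.9, 0.8, 0.9, 0.85]   # bright: few people
winter_crowded = [0.2, 0.3, 0.25, 0.2]   # dark: many people

# Summer: the platform is dangerously crowded, but light clothing
# keeps the image bright, and the proxy silently fails.
summer_crowded = [0.8, 0.75, 0.85, 0.8]

print(crowded_by_darkness(winter_crowded))  # True  (correct)
print(crowded_by_darkness(summer_crowded))  # False (dangerously wrong)
```

The failure is brittle in exactly the sense described above: nothing in the system signals that the learned correlation has stopped holding; it just confidently gives the wrong answer.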

There are ways to cope with this. We could build in various safeguards. We could make sure we use big and realistic datasets for training or perhaps allow learning to continue in real world contexts. Or we could just decide never to use a system that doesn’t have an algorithm we can examine; but there would be a price to pay in terms of efficiency for that; it might even be that we would have to give up on certain things that can only be effectively automated with relatively sophisticated deep learning methods. We’re told that the EU contemplates a law embodying a right to explanations of how software works. To philosophers I think this must sound like a marvellous new gravy train, as there will obviously be a need to adjudicate what counts as an adequate explanation, a notoriously problematic issue. (I am available as a witness in any litigation for reasonable hourly fees.)

The article points out that the incomprehensibility of neural network-based systems is in some ways really quite like the incomprehensibility of the good old human brain. Why wouldn’t it be? After all, neural nets were based on the brain. Now it’s true that even in the beginning they were very rough approximations of real neurology and in practical modern systems the neural qualities of neural nets are little more than a polite fiction. Still, perhaps there are properties shared by all learning systems?

One reason deep learning may run into problems is the difficulty AI always has in dealing with relevance.  The ability to spot relevance no doubt helps the human brain check whether it is learning about the right kind of thing, but it has always been difficult to work out quite how our brains do it, and this might mean an essential element is missing from AI approaches.

It is tempting, though, to think that this is in part another manifestation of the fact that AI systems get trained on limited data sets. Maybe the radical answer is to stop feeding them tailored data sets and let  our robots live in the real world; in other words, if we want reliable deep learning perhaps our robots have to roam free and replicate the wider human experience of the world at large? To date the project of creating human-style cognition has been in some sense motivated by mere curiosity (and yes, by the feeling that it would be pretty cool to have a robot pal) ; are we seeing here the outline of an argument that human-style AGI might actually be the answer to genuine engineering problems?

What about those explanations? Instead of retaining philosophers and lawyers to argue the case, could we think about building in a new module to our systems, one that keeps overall track of the AI and can report the broad currents of activity within it? It wouldn’t be perfect but it might give us broad clues as to why the system was making the decisions it was, and even allow us to delicately feed in some guidance. Doesn’t such a module start to sound like, well, consciousness? Could it be that we are beginning to see the outline of the rationales behind some of God’s design choices?

Can we solve the Hard Problem with scanners? This article by Brit Brogaard and Dimitria E. Gatzia argues that recent advances in neuroimaging techniques, combined with the architectonic approach advocated by Fingelkurts and Fingelkurts, open the way to real advances.

But surely it’s impossible for physical techniques to shed any light on the Hard Problem? The whole point is that it is over and above any account which could be given by physics. In the Zombie Twin thought experiment I have a physically identical twin who has no subjective experience. His brain handles information just the way mine does, but when he registers the colour red, it’s just data; he doesn’t experience real redness. If you think that is conceivable, then you believe in qualia, the subjective extra part of experience. But how could qualia be explained by neuroimaging? My zombie twin’s scans are exactly the same as mine, yet he has no qualia at all.

This, I think, is where the architectonics come in. The foundational axiom of the approach, as I understand it, is that the functional structure of phenomenal experience corresponds to dynamic structure within brain activity; the operational architectonics provide the bridge. (I call it an axiom, but I think the Fingelkurts twins would say that empirical research already provides support for a nested hierarchical structure which bridges the explanatory gap. They seem to take the view that operational architectonics uses a structured electrical field, which on the one hand links their view with the theories of Johnjoe McFadden and Sue Pockett, and on the other makes me wonder whether advances in neuroimaging are relevant if the exciting stuff is happening outside the neurons.) It follows that investigating dynamic activity structures in the brain can tell us about the structure of phenomenal, subjective experience. That seems reasonable. After all, we might argue, qualia may be mysterious, but we know they are related to physical events; the experience of redness goes with the existence of red things in the physical world (with due allowance for complications). Why can’t we assume that subjective experience also goes with certain structured kinds of brain activity?

Two points must be made immediately. The first is that the hunt for Neural Correlates of Consciousness (NCCs) is hardly new. The advocates of architectonics, however, say that approaches along these lines fail because correlation is simply too weak a connection. Noticing that experience x and activation in region y correlate doesn’t really take us anywhere. They aim for something much harder-edged and more specific, with structured features of brain activity matched directly back to structures in an analysis of phenomenal experience (some of the papers use the framework of Revonsuo, though architectonics in general is not committed to any specific approach).

The second point is that this is not a sceptical or reductive project. I think many sceptics about qualia would be more than happy with the idea of exploring subjective experience in relation to brain structure; but someone like Dan Dennett would look to the brain structures to fully explain all the features of experience; to explain them away, in fact, so that it was clear that brain activity was in the end all we were dealing with and we could stop talking about ‘nonsensical’ qualia altogether.

By contrast the architectonic approach allows philosophers to retain the ultimate mystery; it just seeks to push the boundaries of science a bit further out into the territory of subjective experience. Perhaps Paul Churchland’s interesting paper about chimerical colours which we discussed a while ago provides a comparable case if not strictly an example.

Churchland points out that we can find the colours we experience mapped out in the neuronal structures of the brain; but interestingly the colour space defined in the brain is slightly more comprehensive than the one we actually encounter in real life. Our brains have reserved spaces for colours that do not exist, as it were. However, using a technique he describes we can experience these ‘chimerical’ colours, such as ‘dark yellow’, in the form of an afterimage. So here you experience for the first time a dark yellow quale, as predicted and delivered by neurology. Churchland would argue this shows rather convincingly that position in your brain’s colour space is essentially all there is to the subjective experience of colour. I think a follower of architectonics would commend the research for elucidating structural features of experience but hold that there was still a residual mystery about what dark yellow qualia really are in themselves, one that can only be addressed by philosophy.

It all seems like a clever and promising take on the subject to me; I do have two reservations. The first is a pessimistic doubt about whether it will ever really be possible to deliver much. The sort of finding reported by Churchland is the exception rather than the rule. Vision and hearing offer some unusual scope because they both depend on wave media which impose certain interesting structural qualities: the orderly spectrum and the musical scale. Imaginatively I find it hard to think of other aspects of phenomenal experience that seem to be good candidates for structural analysis. I could be radically wrong about this, and I hope I am.

The other thing is, I still find it a bit hard to get past my zombie twin; if phenomenal experience matches up with the structure of brain activity perfectly, how come he is without qualia? The sceptics and the qualophiles both have pretty clear answers; either there just are no qualia anyway or they are outside the scope of physics. Now if we take the architectonic view, we could argue that just as the presence of red objects is not sufficient for there to be red qualia, so perhaps the existence of the right brain patterns isn’t sufficient either; though the red objects and the relevant brain activity do a lot to explain the experience. But if the right brain activity isn’t sufficient, what’s the missing ingredient? It feels (I put it no higher) as if there ought to be an explanation; but perhaps that’s just where we leave the job for the philosophers?

‘…stupid as a doorknob…’ Just part of Luboš Motl’s vigorous attack on Scott Aaronson’s critique of IIT, the Integrated Information Theory of Giulio Tononi.

To begin at the beginning. IIT says that consciousness arises from integrated information, and proposes a mathematical approach to quantifying the level of integrated information in a system, a quantity it names Phi (actually there are several variant ways to define Phi that differ in various details, which is perhaps unfortunate). Aaronson and Motl both describe this idea as a worthy effort but both have various reservations about it – though Aaronson thinks the problems are fatal while Motl thinks IIT offers a promising direction for further work.
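To give a flavour of what ‘quantifying integrated information’ involves, here is a deliberately crude sketch of my own. It computes the total correlation (multi-information) of a toy joint distribution; this is not Tononi’s Phi – the real definitions involve partitions and cause-effect repertoires – but it does capture the core intuition that an integrated whole carries information beyond its independent parts:

```python
import math

def multi_information(joint):
    """Total correlation of a joint distribution over discrete variables:
    bits carried by the whole beyond its independent marginals. A crude
    stand-in for 'integration', NOT any of the official Phi measures."""
    n = len(next(iter(joint)))
    # Compute the marginal distribution of each variable.
    marg = [dict() for _ in range(n)]
    for state, p in joint.items():
        for i, v in enumerate(state):
            marg[i][v] = marg[i].get(v, 0.0) + p
    # KL divergence of the joint from the product of its marginals.
    total = 0.0
    for state, p in joint.items():
        if p > 0:
            indep = math.prod(marg[i][v] for i, v in enumerate(state))
            total += p * math.log2(p / indep)
    return total

# Two perfectly correlated bits: the whole says more than the parts.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent bits: no integration at all.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(multi_information(correlated))   # 1.0 bit
print(multi_information(independent))  # 0.0 bits
```

Phi proper goes further, roughly by asking how much of this surplus survives the system’s weakest bipartition; the multiplicity of ways to formalise that step is exactly the source of the variant definitions mentioned above.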

Both pieces contain a lot of interesting side discussion, including Aaronson’s speculation that approximating Phi for a real brain might be an NP-hard problem. This is the digression that prompted the doorknob comment: so what if it were NP-hard, demands Motl; you think nature is barred from containing NP-hard problems?

The real crux as I understand it is Aaronson’s argument that we can give examples of systems with high scores for Phi that we know intuitively could not be conscious. Eric Schwitzgebel has given a somewhat similar argument but cast in more approachable terms; Aaronson uses a Vandermonde matrix for his example of a high-Phi but intuitively non-conscious entity, whereas Schwitzgebel uses the United States.

Motl takes exception to Aaronson’s use of intuition here. How does he know that his matrix lacks consciousness? If Aaronson’s intuition is the test, what’s the point of having a theory? The whole point of a theory is to improve on and correct our intuitive judgements, isn’t it? If we’re going to fall back on our intuitions argument is pointless.

I think appeals to intuition are rare in physics, where it is probably natural to regard them as illegitimate, but they’re not that unusual in philosophy, especially in ethics. You could argue that G.E. Moore’s approach was essentially to give up on ethical theory and rely on intuition instead. Often intuition limits what we regard as acceptable theorising, but our theories can also ‘tutor’ and change our intuitions. My impression is that real world beliefs about death, for example, have changed substantially in recent decades under the influence of utilitarian reasoning; we’re now much less likely to think that death is simply forbidden and more likely to accept calculations about the value of lives. We still, however, rule out as unintuitive (‘just obviously wrong’) such utilitarian conclusions as the propriety of sometimes punishing the innocent.

There’s an interesting question as to whether there actually is, in itself, such a thing as intuition. Myself I’d suggest the word covers any appealing pre-rational thought; we use it in several ways. One is indeed to test our conclusions where no other means is available; it’s interesting that Motl actually remarks that the absence of a reliable objective test of consciousness is one of IIT’s problems; he obviously does not accept that intuition could be a fall-back, so he is presumably left with the gap (which must surely affect all theories of consciousness). Philosophers also use an appeal to intuition to help cut to the chase, by implicitly invoking shared axioms and assumptions; and often enough ‘thought experiments’ which are not really experiments at all but in the Dennettian phrase ‘intuition pumps’ are used for persuasive effect; they’re not proofs but they may help to get people to agree.

Now as a matter of fact I think in Aaronson’s case we can actually supply a partial argument to replace pure intuition. In this discussion we are mainly talking about subjective consciousness, the ‘something it is like’ to experience things. But I think many people would argue that Hard Problem consciousness requires the Easy Problem kind to be in place first as a basis. Subjective experience, we might argue, requires the less mysterious apparatus of normal sensory or cognitive experience; and Aaronson (or Schwitzgebel) could argue that their example structures definitely don’t have the sort of structure needed for that, a conclusion we can reach through functional argument without the need for intuition.

Not everybody would agree, though; some, especially those who lean towards panpsychism and related theories of ‘consciousness everywhere’, might see nothing wrong with the idea of subjective consciousness without the ‘mechanical’ kind. The standard philosophical zombie has Easy Problem consciousness without qualia; these people would accept an inverted zombie who has qualia with no brain function. It seems a bit odd to me to pair such a view with IIT (if you don’t think functional properties are required, I’d have thought you would think that integrating information was also dispensable), but there’s nothing strictly illogical about it. Perhaps the dispute over intuition really masks a different disagreement, over the plausibility of such inverted zombies – obviously impossible in Aaronson’s eyes, but potentially viable in Motl’s?

Motl goes on to offer what I think is a rather good objection to IIT as it stands; ie that it seems to award consciousness to ‘frozen’ or static structures if they have a high enough Phi score. He thinks it’s necessary to reformulate the idea to capture the point that consciousness is a process. I agree – but how does Motl know consciousness requires a process? Could it be that it’s just…  intuitively obvious?

What is experience? An interesting discussion from the Institute of Art and Ideas, featuring David Chalmers, Susana Martinez-Conde and Peter Hacker.

Chalmers seems to content himself with restating the Hard Problem; that is, that there seems to be something in experience which is mysteriously over and above the account given by physics. He seems rather nervous, but I think it’s just the slight awkwardness typical of a philosopher being asked slightly left-field questions.

Martinez-Conde tells us we never really experience reality, only a neural simulation of it. I think it’s a mistake to assume that because experience seems to be mediated by our sensory systems, and sometimes misleads us, it never shows us external reality. That’s akin to thinking that because some books are fiction no book really addresses reality.

Hacker smoothly dismisses the whole business as a matter of linguistic and conceptual confusion. Physics explains its own domain, but we shouldn’t expect it to deal with experience, any more than we expect it to explain love, or the football league. He is allowed to make a clean get-away with this neat proposition, although we know, for example, that physical electrodes in the brain can generate and control experiences; and we know that various illusions and features of experience have very good physiological explanations. Hacker makes it seem that there is a whole range of domains, each with its own sealed off world of explanation; but surely love, football and the others are just sub-domains of the mental realm? Though we don’t yet know how this works there is plenty of evidence that the mental domain is at least causally dependent on physics, if not reducible to it. That’s what the discussion is all about. We can imagine Hacker a few centuries ago assuring us loftily that the idea of applying ordinary physics to celestial mechanics was a naive category error. If only Galileo had read up on his Oxford philosophy he would realise that the attempt to explain the motion of the planets in terms of physical forces was doomed to end in unresolvable linguistic bewitchment!

I plan to feature more of these discussion videos as a bit of a supplement to the usual menu here, by the way.

We’ll never understand consciousness, says Edward Witten. Ashutosh Jogalekar’s post here features a video of the eminent physicist talking about fundamentals; the bit about consciousness starts around 1:10 if you’re not interested in string theory and cosmology. John Horgan has also weighed in with some comments; Witten’s view is congenial to him because of his belief that science may be approaching an end state in which many big issues are basically settled while others remain permanently mysterious. Witten himself thinks we might possibly get a “final theory” of physics (maybe even a form of string theory), but guesses that it would be of a tricky kind, so that understanding and exploring the theory would itself be an endless project, rather the way number theory, which looks like a simple subject at first glance, proves to be capable of endless further research.

Witten, in response to a slightly weird question from the interviewer, declines to define consciousness, saying he prefers to leave it undefined like one of the undefined terms set out at the beginning of a maths book. He feels confident that the workings of the mind will be greatly clarified by ongoing research so that we will come to understand much better how the mechanisms operate. But why these processes are accompanied by something like consciousness seems likely to remain a mystery; no extension of physics that he can imagine seems likely to do the job, including the kind of new quantum mechanics that Roger Penrose believes is needed.

Witten is merely recording his intuitions, so we shouldn’t try to represent him as committed to any strong theoretical position; but his words clearly suggest that he is an optimist on the so-called Easy Problem and a pessimist on the Hard one. The problem he thinks may be unsolvable is the one about why there is “something it is like” to have experiences; what it is that seeing a red rose has over and above the acquisition of mere data.

If so, I think his incredulity joins a long tradition of those who feel intuitively that that kind of consciousness just is radically different from anything explained or explainable by physics. Horgan mentions the Mysterians, notably Colin McGinn, who holds that our brain just isn’t adapted to understanding how subjective experience and the physical world can be reconciled; but we could also invoke Brentano’s contention that mental intentionality is just utterly unlike any physical phenomenon; and even trace the same intuition back to Leibniz’s famous analogy of the mill; no matter what wheels and levers you put in your machine, there’s never going to be anything that could explain a perception (particularly telling given Leibniz’s enthusiasm for calculating machines and his belief that one day thinkers could use them to resolve complex disputes). Indeed, couldn’t we argue that contemporary consciousness sceptics like Dennett and the Churchlands also see an unbridgeable gap between physics and subjective, qualia-having consciousness? The difference is simply that in their eyes this makes that kind of consciousness nonsense, not a mystery.

We have to be a bit wary of trusting our intuitions. The idea that subjective consciousness arises when we’ve got enough neurons firing may sound like the idea that wine comes about when we’ve added enough water to the jar; but the idea that enough ones and zeroes in data registers could ever give rise to a decent game of chess looks pretty strange too.

As those who’ve read earlier posts may know, I think the missing ingredient is simply reality. The extra thing about consciousness that the theory of physics fails to include is just the reality of the experience, the one thing a theory can never include. Of course, the nature of reality is itself a considerable mystery, it just isn’t the one people have thought they were talking about. If I’m right, then Witten’s doubts are well-founded but less worrying than they may seem. If some future genius succeeds in generating an artificial brain with human-style mental functions, then by looking at its structure we’ll only ever see solutions to the Easy Problem, just as we may do in part when looking at normal biological brains. Once we switch on the artificial brain and it starts doing real things, then experience will happen.

Free solo style climbers need their heads examined. That seems to be the premise of the investigation reported here. Alex Honnold does amazingly scary things in his solo climbs, all without ropes or any kind of effective protection. Just watching, or just looking at pictures, is enough to make most of us shudder; a neurobiologist came to the conclusion that Honnold’s amygdala wasn’t working.

Why would he think that? The amygdalae are two small organs within the brain that are generally considered to have a role in producing fear and aversion. A friend of mine once suggested they could be renamed after the moons of Mars as Phobos and Deimos – ‘Fear’ and ‘Loathing’ in Greek. In fact that wouldn’t be at all accurate, not least because the left amygdala seems to produce positive emotional reactions as well as negative ones. The broad initial analysis of Honnold’s behaviour seems to have been that his rational cortex was getting him into perilous situations because his amygdala was failing to wave the red flag. In some ways that seems odd: I think my rational, future-planning cortex would keep me the hell away from anything like the cliff faces Honnold climbs, while it might be the emotional, thrill-enjoying parts of my brain that impelled me towards them.

A scan revealed that Honnold’s amygdalae were both present and correct, without any signs of damage; however, they didn’t seem to respond to various scary or unpleasant pictures in the way a normal person’s would. This knocks out one strong version of the theory. If there had been visible lesions in Honnold’s amygdalae, there would have been strong reason to suspect that his behaviour stemmed from that damage; but we knew already that he isn’t as scared as the rest of us, so finding that his amygdalae react less than most merely gives us another version of the finding that mental differences are associated with brain differences and vice versa. We sort of knew that; if that’s all we’ve found out we’re sailing dangerously close to the sea of neurobollocks and scannamania.

It is possible to do without amygdalae altogether. SM is a patient reported on by Damasio and others, who lost both amygdalae as a result of Urbach-Wiethe disease. She did not take up free solo style climbing or other dangerous sports, but she shows a distinct lack of fearful and aversive reactions to strange people and other triggers of fear and distrust. She has suffered a number of violent encounters, which may partly have resulted from the lack of fear that allowed her, for example, to walk through dubious parks at night; but arguably that same lack of fear has also got her out of some dangerous situations, through her panic-free, Spock-like calm and non-hostile responses. It seems she lives in an area where violent crime is common in any case, and she has succeeded in bringing up three children independently.

It could be that amygdalae function somewhat differently in men and women, which might explain why Honnold’s supposed problem results in dangerous activity while SM’s mainly leads her to hug and trust strangers. There are known differences in the pattern of development; female amygdalae develop fully earlier, while male ones go on growing longer and end up bigger. Those differences might simply reflect general differences in growth pattern, though there is also some evidence of different patterns of activation; it’s possible, for example, that the activation of female amygdalae tends to promote thought while the male equivalent promotes action. All the usual caveats apply and great caution is in order. Let’s also remember that SM and Honnold are both one-off cases; that SM suffered damage to other parts of her brain – and that Honnold says he does feel fear, and that his amygdalae appear to be perfectly normal.

Are they though? The research showed an almost total lack of response to pictures of terrible injuries and other things that would normally be expected to evoke a strong reaction from the amygdala. So perhaps there is something abnormal going on after all? Maybe there is damage too subtle to detect? Or maybe something is suppressing the amygdala?

The identification and handling of threats by the brain is actually a complicated business. Many quite low-level systems, as well as highly sophisticated ones, can make a contribution (a sudden loud growl can cause a wave of fear; so can a few quiet words from a doctor). The role of the amygdala seems to be as much to do with memory as fear; it pays attention to things we have found to be associated with really bad (or sometimes good) experiences and helps direct our attention to the right places, reminding us to look at people’s eyes when we want to assess whether they are frightened, for example. The interplay may be very complex, but even on a pretty crude interpretation there might be conscious processes that sometimes shut the amygdala down:

Visual:  furry, claws, animate, ursine: yup, over 99% positive that’s a bear. Hey, amygdala, big animal for you?

Amygdala: OMFG run for our life!

Cortex: guys, this is a zoo, there are bars – Visual, confirm stout bars – OK. Amygdala, STFU.

It doesn’t always work like that, of course. Cortex knows that we can happily walk along a narrow plank no wider than the terrifying Thank God Ledge if it is a few centimetres off the ground, but saying so repeatedly will not stop amygdala sounding the alarm.
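For what it’s worth, the crude gating in that little dialogue can be written down as a toy program. To be clear, every category, rule and name below is invented for illustration; real threat processing is nothing like this tidy.

```python
# A deliberately crude toy of the gating sketched in the dialogue above:
# the 'amygdala' raises a fast alarm on learned danger associations, and
# the 'cortex' can veto the alarm when context says the threat is contained.
# All categories and rules here are invented for illustration.

LEARNED_DANGERS = {"bear", "snake", "sheer drop"}
SAFE_CONTEXTS = {"zoo", "stout bars"}

def amygdala_alarm(percept):
    """Fast, memory-driven alarm: fires if the percept matches a learned danger."""
    return percept in LEARNED_DANGERS

def cortex_veto(context):
    """Slower contextual check: stand the alarm down if the threat is contained."""
    return bool(SAFE_CONTEXTS & context)

def response(percept, context):
    if amygdala_alarm(percept) and not cortex_veto(context):
        return "run"
    return "calm"

print(response("bear", {"zoo", "stout bars"}))  # calm: the veto works
print(response("bear", set()))                  # run: no bars in sight
```

Of course, the point of the plank example is precisely that the veto is not this reliable: in a real brain the alarm can keep sounding however firmly the cortex reports the bars.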

Ultimately it may be that Honnold’s different behaviour and different amygdala activation are simply two facets of his different personality.


New ways to monitor – and control – neurons are about to become practical. A paper in Neuron by Seo et al describes how researchers at Berkeley created “ultrasonic neural dust” that allowed activity in muscles and nerves to be monitored without traditional electrodes. The technique has not been applied to the brain and has been used only for monitoring, not for control, but the potential is clear, and this short piece in Aeon reviewing the development of comparable techniques concludes that it is time to take these emergent technologies seriously. The diagnostic and therapeutic potential of being able to directly monitor and intervene in the activity of nerves and systems all over the body is really quite mind-boggling; in principle it could replace and enhance all sorts of drug treatments and other interventions in immensely beneficial ways.

From a research point of view the possibility of getting single-neuron level data on an ongoing basis could leap right over the limitations of current scanning technology and tell us, really for the first time, exactly what is going on in the brain. It’s very likely that unexpected and informative discoveries would follow. Some caution is of course in order; for one thing I imagine placement techniques are going to raise big challenges. Throwing a handful of dust into a muscle to pick up its activity is one thing; placing a single mote in a particular neuron is another. If we succeed with that, I wonder whether we will actually be able to cope with the vast new sets of data that could be generated.

Still, the way ahead seems clear enough to justify a bit of speculation about mind control. The ethics are clearly problematic, but let’s start with a broad look at the practicalities. Could we control someone with neural dust?

The crudest techniques are going to be the easiest to pull off. Incapacitating or paralysing someone looks pretty achievable; it could be a technique for confining prisoners (step beyond this line and your leg muscles seize up) or perhaps a secret fall-back disabling mechanism inserted into suspects and released prisoners. If they turn up in a threatening role later, you can just switch them off. Killing someone by stopping their heart looks achievable, and the threat of doing so could in theory be used to control hostages or perhaps create ‘human drones’ (I apologise for the repellent nature of some of these ideas; forewarned is forearmed).

Although reading off thoughts is probably too ambitious for the foreseeable future, we might be able to monitor the brain’s states of arousal and perhaps even identify the recognition of key objects or people. I cannot see any obvious reason why remote monitoring of neural dust implants couldn’t pick up a kind of video feed from the optic nerve. People might want that done to themselves as a superior substitute for Google Glass and the like; indeed neural dust seems to offer new scope for the kind of direct brain control of technology that many people seem keen to have. Myself, I think the output systems already built into human beings – hands, voice – are hard to beat.

Taking direct and outright control of someone’s muscles and making a kind of puppet of them seems likely to be difficult; making a muscle twitch is a long way from the kind of fluid and co-ordinated control required for effective movement. Devising the torrent of neural signals required looks like a task which is computationally feasible in principle but highly demanding; you would surely look to deep learning techniques, which in a sense were created for exactly this kind of task since they began with the imitation of neural networks.  A basic approach that might be achievable relatively early would be to record stereotyped muscular routines and then play them back like extended reflexes, though that wouldn’t work well for many basic tasks like walking that require a lot of feedback.
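That record-and-replay idea can be caricatured in a few lines. This is purely a sketch of the control logic, with invented data structures and muscle names; nothing here resembles a real neural interface.

```python
# Toy 'extended reflex': record a stereotyped routine as a sequence of
# per-muscle activation frames, then replay it open-loop. The replay is
# verbatim, with no feedback correction -- which is exactly why this
# approach would fail for feedback-hungry tasks like walking.
# All muscle names and values are invented for illustration.

def record(frames):
    """Freeze a routine as an immutable sequence of activation dicts."""
    return tuple(dict(frame) for frame in frames)

def replay(recording):
    """Yield the recorded frames verbatim, one per control tick."""
    for frame in recording:
        yield frame

grasp = record([
    {"biceps": 0.2, "finger_flexors": 0.0},
    {"biceps": 0.5, "finger_flexors": 0.7},
    {"biceps": 0.5, "finger_flexors": 0.9},
])

for frame in replay(grasp):
    print(frame)
```

The open-loop replay is the whole weakness: the routine runs to completion whatever the limb actually encounters, which is fine for a canned gesture and useless the moment the environment pushes back.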

Could we venture further and control someone’s own attitudes and thoughts? Again the unambitious and destructive techniques are the easiest; making someone deranged or deluded is probably the most straightforward mental change to bring about. Giving them bad dreams seems likely to be a feasible option. Perhaps we could simulate drunkenness – or turn it off; I suspect that would need massive but non-specific intervention, so it might be relatively achievable. Simulation of the effects of other drugs might be viable on similar terms, whether to impair performance, enhance it, or purely for pleasure. We might perhaps be able to stimulate paranoia, exhilaration, religiosity or depression, albeit without fully predictable results.

Indirect manipulation is the next easiest option for mind control; we might arrange, for example, to have a flood of good feelings, or of fear and aversion, every time particular political candidates are seen; it wouldn’t force the subject to vote a particular way, but it might be heavily influential. I’m not sure it’s a watertight technique, as the human mind seems easily able to hold contradictory attitudes and sentiments, and widespread empirical evidence suggests many people must be able to go on voting for someone who appears repellent.

Could we, finally, take over the person themselves, feeding in whatever thoughts we chose? I rather doubt that this is ever going to be possible. True, our mental selves must ultimately arise from the firing of neurons, and ex hypothesi we can control all those neurons; but the chances are there is no universal encoding of thoughts; we may not even think the same thought with the same neurons a second time around. The fallback of recording and playing back the activity of a broad swathe of brain tissue might work up to a point, if you could be sure you had included the relevant bits of neural activity; but the results, even if successful, would be more like some kind of malign mental episode than a smooth takeover of the personality. Easier, I suspect, to erase a person than to control one in this strong sense. As Hamlet pointed out, knowing where the stops on a pipe are doesn’t make you able to play a tune. I can hardly put it better than Shakespeare…

Why, look you now, how unworthy a thing you make of
me! You would play upon me; you would seem to know
my stops; you would pluck out the heart of my
mystery; you would sound me from my lowest note to
the top of my compass: and there is much music,
excellent voice, in this little organ; yet cannot
you make it speak. ‘Sblood, do you think I am
easier to be played on than a pipe? Call me what
instrument you will, though you can fret me, yet you
cannot play upon me.

We don’t know what we think, according to Alex Rosenberg in the NYT. It’s a piece of two halves, in my opinion; he starts with a pretty fair summary of the sceptical case. It has often been held that we have privileged knowledge of our own thoughts and feelings, and indeed of our own decisions; but Benjamin Libet’s findings about decisions being made before we are aware of them, the phenomenon of blindsight, which shows we may go on having visual knowledge we’re not aware of, and many other cases where motives can be shown to be confabulated and mental content inaccessible to our conscious, reporting mind, all go to show that things are much more complex than we might have thought, and that our thoughts are not, as it were, self-illuminating. Rosenberg plausibly suggests that we use on ourselves the kind of tools we use to work out what other people are thinking; but then he seems to make a radical leap to the conclusion that there is nothing else going on.

Our access to our own thoughts is just as indirect and fallible as our access to the thoughts of other people. We have no privileged access to our own minds. If our thoughts give the real meaning of our actions, our words, our lives, then we can’t ever be sure what we say or do, or for that matter, what we think or why we think it.

That seems to be going too far.  How could we ever play ‘I spy’ if we didn’t have any privileged access to private thoughts?

“I spy, with my little eye, something beginning with ‘c'”
“Is it ‘chair’?”
“I don’t know – is it?”

It’s more than possible that Rosenberg’s argument has suffered badly from editing (philosophical discussion, even in a newspaper piece, seems peculiarly information-dense; often you can’t lose much of it without damaging the content badly). But it looks as if he’s done what I think of as an ‘OMG bounce’; a kind of argumentative leap which crops up elsewhere. Sometimes we experience illusions:  OMG, our senses never tell us anything about the real world at all! There are problems with the justification of true belief: OMG there is no such thing as knowledge! Or in this case: sometimes we’re wrong about why we did things: OMG, we have no direct access to our own thoughts!

There are in fact several different reasons why we might claim that our thoughts about our thoughts are immune to error. In the game of ‘I spy’, my nominating ‘chair’ just makes it my choice; the content of my thought is established by a kind of fiat. In the case of a pain in my toe, I might argue I can’t be wrong because a pain can’t be false: it has no propositional content, it just is. Or I might argue that certain of my thoughts are unmediated; there’s no gap between them and me where error could creep in, the way it creeps in during the process of interpreting sensory impressions.

Still, it’s undeniable that in some cases we can be shown to adopt false rationales for our behaviour; sometimes we think we know why we said something, but we don’t. By contrast, I think I have occasionally, when very tired, had the experience of hearing coherent and broadly relevant speech come out of my own mouth without it seeming to come from my conscious mind at all. Contemplating this kind of thing does undoubtedly promote scepticism, but what it ought to promote is a keener awareness of the complexity of human mental experience: many-layered, explicit to greater or lesser degrees, partly attended to, partly in a sort of half-light of awareness… There seem to be unconscious impulses; conscious but inexplicit thought; definite thought (which may even be in recordable words); self-conscious thought of the kind where we are aware of thinking while we think… and that is at best the broadest outline of some of the larger architecture.

All of this really needs a systematic and authoritative investigation. Of course, since Plato there have been models of the structure of the mind which separate conscious and unconscious, id, ego and superego: philosophers of mind have run up various theories, usually to suit their own needs of the moment; and modern neurology increasingly provides good clues about how various mental functions are hosted and performed. But a proper mainstream conception of the structure and phenomenology of thought itself seems sadly lacking to me. Is this an area where we could get funding for a major research effort; a Human Phenomenology Project?

It can hardly be doubted that there are things to discover. Recently we were told, if not quite for the first time, that a substantial minority of people have no mental images (though at once we notice that there even seem to be different ways of having mental images). A systematic investigation might reveal that, just as we have four blood groups, there are four (or seven) different ways the human mind can work. What if it turned out that consciousness is not a single consistent phenomenon but a family of four different ones, and that the four tribes have been talking past each other all this time…?