Archive for August, 2016

‘…stupid as a doorknob…’ Just part of Luboš Motl’s vigorous attack on Scott Aaronson’s critique of IIT, the Integrated Information Theory of Giulio Tononi.

To begin at the beginning. IIT says that consciousness arises from integrated information, and proposes a mathematical approach to quantifying the level of integrated information in a system, a quantity it names Phi (actually there are several variant ways to define Phi that differ in various details, which is perhaps unfortunate). Aaronson and Motl both describe this idea as a worthy effort but both have various reservations about it – though Aaronson thinks the problems are fatal while Motl thinks IIT offers a promising direction for further work.
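To get a feel for the "whole minus parts" idea behind Phi, here is a toy sketch in Python. To be clear, this is not Tononi's actual Phi (which involves minimum-information partitions over all subsets and, as noted, has several variant definitions); it just illustrates the flavour: a coupled two-node system carries information as a whole that neither node carries alone, while an uncoupled system scores zero.

```python
# Toy illustration only -- NOT Tononi's Phi. We measure how much the
# whole system's past predicts its present (mutual information, in bits)
# over and above what the individual nodes predict on their own.
from itertools import product
from math import log2
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits from a list of equally likely (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def toy_integration(step):
    """Whole-system I(past; present) minus the sum over single nodes,
    assuming a uniform distribution over past states of a 2-bit system."""
    past = list(product([0, 1], repeat=2))
    whole = mutual_information([(s, step(*s)) for s in past])
    part_a = mutual_information([(s[0], step(*s)[0]) for s in past])
    part_b = mutual_information([(s[1], step(*s)[1]) for s in past])
    return whole - part_a - part_b

coupled   = lambda a, b: (b, a ^ b)   # each node's next state depends on the other
uncoupled = lambda a, b: (a, b)       # each node depends only on itself

print(toy_integration(coupled))    # 2.0 -- all the information is "integrated"
print(toy_integration(uncoupled))  # 0.0 -- the parts explain everything
```

Even this toy version hints at Aaronson's computational worry: the real definitions require searching over partitions of the system, which blows up combinatorially as the system grows.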

Both pieces contain a lot of interesting side discussion, including Aaronson’s speculation that approximating Phi for a real brain might be an NP-hard problem. This is the digression that prompted the doorknob comment: so what if it were NP-hard, demands Motl; you think nature is barred from containing NP-hard problems?

The real crux as I understand it is Aaronson’s argument that we can give examples of systems with high scores for Phi that we know intuitively could not be conscious. Eric Schwitzgebel has given a somewhat similar argument but cast in more approachable terms; Aaronson uses a Vandermonde matrix for his example of a high-phi but intuitively non-conscious entity, whereas Schwitzgebel uses the United States.
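For readers unfamiliar with the object in question, here is a minimal sketch of what a Vandermonde matrix is (each row is the successive powers of one number). Note the hedge: Aaronson's actual argument concerns systems whose dynamics are built from such matrices, not the bare matrix shown here.

```python
# A Vandermonde matrix: row i is [1, x_i, x_i**2, x_i**3, ...].
# Such matrices are invertible whenever the x_i are distinct, which is
# part of what makes the associated dynamics highly "integrated".
import numpy as np

x = np.array([1, 2, 3, 4])
V = np.vander(x, increasing=True)  # increasing powers left to right
print(V)
# [[ 1  1  1  1]
#  [ 1  2  4  8]
#  [ 1  3  9 27]
#  [ 1  4 16 64]]
```

The point of the example is that a system wired up this way mixes every input into every output very thoroughly, so it scores a high Phi, despite being, intuitively, about as conscious as a doorknob.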

Motl takes exception to Aaronson’s use of intuition here. How does he know that his matrix lacks consciousness? If Aaronson’s intuition is the test, what’s the point of having a theory? The whole point of a theory is to improve on and correct our intuitive judgements, isn’t it? If we’re going to fall back on our intuitions, argument is pointless.

I think appeals to intuition are rare in physics, where it is probably natural to regard them as illegitimate, but they’re not that unusual in philosophy, especially in ethics. You could argue that G.E. Moore’s approach was essentially to give up on ethical theory and rely on intuition instead. Often intuition limits what we regard as acceptable theorising, but our theories can also ‘tutor’ and change our intuitions. My impression is that real world beliefs about death, for example, have changed substantially in recent decades under the influence of utilitarian reasoning; we’re now much less likely to think that death is simply forbidden and more likely to accept calculations about the value of lives. We still, however, rule out as unintuitive (‘just obviously wrong’) such utilitarian conclusions as the propriety of sometimes punishing the innocent.

There’s an interesting question as to whether there actually is, in itself, such a thing as intuition. Myself I’d suggest the word covers any appealing pre-rational thought; we use it in several ways. One is indeed to test our conclusions where no other means is available; it’s interesting that Motl actually remarks that the absence of a reliable objective test of consciousness is one of IIT’s problems; he obviously does not accept that intuition could be a fall-back, so he is presumably left with the gap (which must surely affect all theories of consciousness). Philosophers also use an appeal to intuition to help cut to the chase, by implicitly invoking shared axioms and assumptions; and often enough ‘thought experiments’ which are not really experiments at all but in the Dennettian phrase ‘intuition pumps’ are used for persuasive effect; they’re not proofs but they may help to get people to agree.

Now as a matter of fact I think in Aaronson’s case we can actually supply a partial argument to replace pure intuition. In this discussion we are mainly talking about subjective consciousness, the ‘something it is like’ to experience things. But I think many people would argue that Hard Problem consciousness requires the Easy Problem kind to be in place first as a basis. Subjective experience, we might argue, requires the less mysterious apparatus of normal sensory or cognitive experience; and Aaronson (or Schwitzgebel) could argue that their example structures definitely don’t have the sort of structure needed for that, a conclusion we can reach through functional argument without the need for intuition.

Not everybody would agree, though; some, especially those who lean towards panpsychism and related theories of ‘consciousness everywhere’ might see nothing wrong with the idea of subjective consciousness without the ‘mechanical’ kind. The standard philosophical zombie has Easy Problem consciousness without qualia; these people would accept an inverted zombie who has qualia with no brain function. It seems a bit odd to me to pair such a view with IIT (if you don’t think functional properties are required I’d have thought you would think that integrating information was also dispensable) but there’s nothing strictly illogical about it. Perhaps the dispute over intuition really masks a different disagreement, over the plausibility of such inverted zombies, obviously impossible in Aaronson’s eyes, but potentially viable in Motl’s?

Motl goes on to offer what I think is a rather good objection to IIT as it stands; i.e. that it seems to award consciousness to ‘frozen’ or static structures if they have a high enough Phi score. He thinks it’s necessary to reformulate the idea to capture the point that consciousness is a process. I agree – but how does Motl know consciousness requires a process? Could it be that it’s just… intuitively obvious?

What is experience? An interesting discussion from the Institute of Art and Ideas, featuring David Chalmers, Susana Martinez-Conde and Peter Hacker.

Chalmers seems to content himself with restating the Hard Problem; that is, that there seems to be something in experience which is mysteriously over and above the account given by physics. He seems rather nervous, but I think it’s just the slight awkwardness typical of a philosopher being asked slightly left-field questions.

Martinez-Conde tells us we never really experience reality, only a neural simulation of it. I think it’s a mistake to assume that because experience seems to be mediated by our sensory systems, and sometimes misleads us, it never shows us external reality. That’s akin to thinking that because some books are fiction no book really addresses reality.

Hacker smoothly dismisses the whole business as a matter of linguistic and conceptual confusion. Physics explains its own domain, but we shouldn’t expect it to deal with experience, any more than we expect it to explain love, or the football league. He is allowed to make a clean get-away with this neat proposition, although we know, for example, that physical electrodes in the brain can generate and control experiences; and we know that various illusions and features of experience have very good physiological explanations. Hacker makes it seem that there is a whole range of domains, each with its own sealed off world of explanation; but surely love, football and the others are just sub-domains of the mental realm? Though we don’t yet know how this works there is plenty of evidence that the mental domain is at least causally dependent on physics, if not reducible to it. That’s what the discussion is all about. We can imagine Hacker a few centuries ago assuring us loftily that the idea of applying ordinary physics to celestial mechanics was a naive category error. If only Galileo had read up on his Oxford philosophy he would realise that the attempt to explain the motion of the planets in terms of physical forces was doomed to end in unresolvable linguistic bewitchment!

I plan to feature more of these discussion videos as a bit of a supplement to the usual menu here, by the way.

We’ll never understand consciousness, says Edward Witten. Ashutosh Jogalekar’s post here features a video of the eminent physicist talking about fundamentals; the bit about consciousness starts around 1:10 if you’re not interested in string theory and cosmology. John Horgan has also weighed in with some comments; Witten’s view is congenial to him because of his belief that science may be approaching an end state in which many big issues are basically settled while others remain permanently mysterious. Witten himself thinks we might possibly get a “final theory” of physics (maybe even a form of string theory), but guesses that it would be of a tricky kind, so that understanding and exploring the theory would itself be an endless project, rather the way number theory, which looks like a simple subject at first glance, proves to be capable of endless further research.

Witten, in response to a slightly weird question from the interviewer, declines to define consciousness, saying he prefers to leave it undefined like one of the undefined terms set out at the beginning of a maths book. He feels confident that the workings of the mind will be greatly clarified by ongoing research so that we will come to understand much better how the mechanisms operate. But why these processes are accompanied by something like consciousness seems likely to remain a mystery; no extension of physics that he can imagine seems likely to do the job, including the kind of new quantum mechanics that Roger Penrose believes is needed.

Witten is merely recording his intuitions, so we shouldn’t try to represent him as committed to any strong theoretical position; but his words clearly suggest that he is an optimist on the so-called Easy Problem and a pessimist on the Hard one. The problem he thinks may be unsolvable is the one about why there is “something it is like” to have experiences; what it is that seeing a red rose has over and above the acquisition of mere data.

If so, I think his incredulity joins a long tradition of those who feel intuitively that that kind of consciousness just is radically different from anything explained or explainable by physics. Horgan mentions the Mysterians, notably Colin McGinn, who holds that our brain just isn’t adapted to understanding how subjective experience and the physical world can be reconciled; but we could also invoke Brentano’s contention that mental intentionality is just utterly unlike any physical phenomenon; and even trace the same intuition back to Leibniz’s famous analogy of the mill; no matter what wheels and levers you put in your machine, there’s never going to be anything that could explain a perception (particularly telling given Leibniz’s enthusiasm for calculating machines and his belief that one day thinkers could use them to resolve complex disputes). Indeed, couldn’t we argue that contemporary consciousness sceptics like Dennett and the Churchlands also see an unbridgeable gap between physics and subjective, qualia-having consciousness? The difference is simply that in their eyes this makes that kind of consciousness nonsense, not a mystery.

We have to be a bit wary of trusting our intuitions. The idea that subjective consciousness arises when we’ve got enough neurons firing may sound like the idea that wine comes about when we’ve added enough water to the jar; but the idea that enough ones and zeroes in data registers could ever give rise to a decent game of chess looks pretty strange too.

As those who’ve read earlier posts may know, I think the missing ingredient is simply reality. The extra thing about consciousness that the theory of physics fails to include is just the reality of the experience, the one thing a theory can never include. Of course, the nature of reality is itself a considerable mystery, it just isn’t the one people have thought they were talking about. If I’m right, then Witten’s doubts are well-founded but less worrying than they may seem. If some future genius succeeds in generating an artificial brain with human-style mental functions, then by looking at its structure we’ll only ever see solutions to the Easy Problem, just as we may do in part when looking at normal biological brains. Once we switch on the artificial brain and it starts doing real things, then experience will happen.

Free solo style climbers need their heads examined. That seems to be the premise of the investigation reported here. Alex Honnold does amazingly scary things in his solo climbs, all without ropes or any kind of effective protection. Just watching, or just looking at pictures, is enough to make most of us shudder; a neurobiologist came to the conclusion that Honnold’s amygdala wasn’t working.

Why would he think that? The amygdala, or amygdalae, are two small organs within the brain that are generally considered to have a role in producing fear and aversion. A friend of mine once suggested they could be renamed after the moons of Mars as Phobos and Deimos – ‘Fear’ and ‘Loathing’ in Greek. In fact that wouldn’t be at all accurate, not least because the left amygdala seems to produce positive emotional reactions as well as negative ones. The broad initial analysis of Honnold’s behaviour seems to have been that his rational cortex was getting him into perilous situations because his amygdala was failing to wave the red flag. In some ways that seems odd: I think my rational, future-planning cortex would keep me the hell away from anything like the cliff faces Honnold climbs, while it might be the emotional thrill-enjoying parts of my brain that impelled me towards them.

A scan revealed that Honnold’s amygdalae were both present and correct, without any signs of damage; however, they didn’t seem to respond to various scary or unpleasant pictures in the way a normal person’s would. This knocks out one strong version of the theory. If there had been visible lesions in Honnold’s amygdalae, there would have been strong reason to suspect that his behaviour stemmed from that damage; but we knew already that he isn’t as scared as the rest of us, so finding that his amygdalae react less than most merely gives us another version of the finding that mental differences are associated with brain differences and vice versa. We sort of knew that; if that’s all we’ve found out we’re sailing dangerously close to the sea of neurobollocks and scannamania.

It is possible to do without amygdalae altogether. SM is a patient reported on by Damasio and others, who lost both amygdalae as a result of Urbach-Wiethe disease. She did not take up free solo style climbing or other dangerous sports, but she shows a distinct lack of fearful and aversive reactions to strange people and other triggers of fear and distrust. She has suffered a number of violent encounters which might partly have been the result of the lack of fear which allowed her, for example, to walk through dubious parks at night; but it may also arguably have got her out of some dangerous situations through her panic-free Spock-like calm and non-hostile responses. It seems she lives in an area where violent crime is common in any case, and she has succeeded in bringing up three children independently.

It could be that amygdalae function somewhat differently in men and women, which might explain why Honnold’s supposed problem results in dangerous activity while SM’s mainly leads her to hug and trust strangers. There are known differences in the pattern of development; female amygdalae develop fully earlier, while male ones go on growing longer and end up bigger. Those differences might simply reflect general differences in growth pattern, though there is also some evidence of different patterns of activation; it’s possible, for example, that the activation of female amygdalae tends to promote thought while the male equivalent promotes action. All the usual caveats apply and great caution is in order. Let’s also remember that SM and Honnold are both one-off cases; that SM suffered damage to other parts of her brain – and that Honnold says he does feel fear, and that his amygdalae appear to be perfectly normal.

Are they though? The research showed an almost total lack of response to pictures of terrible injuries and other things that would normally be expected to evoke a strong reaction from the amygdala. So perhaps there is something abnormal going on after all? Maybe there is damage too subtle to detect? Or maybe something is suppressing the amygdala?

The identification and handling of threats by the brain is actually a complicated business. Many quite low-level systems as well as highly-sophisticated ones can make a contribution (a sudden loud growl can cause a wave of fear; so can a few quiet words from a doctor).  The role of the amygdala seems to be as much to do with memory as fear; it pays attention to things that we have found are associated with really bad (or sometimes good) experiences and helps direct our attention to the right things, reminding us to look at people’s eyes when we want to assess whether they are frightened, for example.  The interplay may be very complex, but even on a pretty crude interpretation there might be conscious processes that sometimes shut the amygdala down:

Visual:  furry, claws, animate, ursine: yup, over 99% positive that’s a bear. Hey, amygdala, big animal for you?

Amygdala: OMFG run for our life!

Cortex: guys, this is a zoo, there are bars – Visual, confirm stout bars – OK. Amygdala, STFU.

It doesn’t always work like that, of course. Cortex knows that we can happily walk along a narrow plank no wider than the terrifying Thank God Ledge if it is a few centimetres off the ground, but saying so repeatedly will not stop amygdala sounding the alarm.

Ultimately it may be that Honnold’s different behaviour and different amygdala activation are simply two facets of his different personality.


New ways to monitor – and control – neurons are about to become practical. A paper in Neuron by Seo et al. describes how researchers at Berkeley created “ultrasonic neural dust” that allowed activity in muscles and nerves to be monitored without traditional electrodes. The technique has not been applied to the brain and has been used only for monitoring, not for control, but the potential is clear, and this short piece in Aeon reviewing the development of comparable techniques concludes that it is time to take these emergent technologies seriously. The diagnostic and therapeutic potential of being able to directly monitor and intervene in the activity of nerves and systems all over the body is really quite mind-boggling; in principle it could replace and enhance all sorts of drug treatments and other interventions in immensely beneficial ways.

From a research point of view the possibility of getting single-neuron level data on an ongoing basis could leap right over the limitations of current scanning technology and tell us, really for the first time, exactly what is going on in the brain. It’s very likely that unexpected and informative discoveries would follow. Some caution is of course in order; for one thing I imagine placement techniques are going to raise big challenges. Throwing a handful of dust into a muscle to pick up its activity is one thing; placing a single mote in a particular neuron is another. If we succeed with that, I wonder whether we will actually be able to cope with the vast new sets of data that could be generated.

Still the way ahead seems clear enough to justify a bit of speculation about mind control.  The ethics are clearly problematic, but let’s start with a broad look at the practicalities. Could we control someone with neural dust?

The crudest techniques are going to be the easiest to pull off. Incapacitating or paralysing someone looks pretty achievable; it could be a technique for confining prisoners (step beyond this line and your leg muscles seize up) or perhaps as a secret fall-back disabling mechanism inserted into suspects and released prisoners.  If they turn up in a threatening role later, you can just switch them off. Killing someone by stopping their heart looks achievable, and the threat of doing so could in theory be used to control hostages or perhaps create ‘human drones’ (I apologise for the repellent nature of some of these ideas; forewarned is forearmed).

Although reading off thoughts is probably too ambitious for the foreseeable future, we might be able to monitor the brain’s states of arousal and perhaps even identify the recognition of key objects or people. I cannot see any obvious reason why remote monitoring of neural dust implants couldn’t pick up a kind of video feed from the optic nerve. People might want that done to themselves as a superior substitute for Google Glass and the like; indeed neural dust seems to offer new scope for the kind of direct brain control of technology that many people seem keen to have. Myself I think the output systems already built into human beings – hands, voice – are hard to beat.

Taking direct and outright control of someone’s muscles and making a kind of puppet of them seems likely to be difficult; making a muscle twitch is a long way from the kind of fluid and co-ordinated control required for effective movement. Devising the torrent of neural signals required looks like a task which is computationally feasible in principle but highly demanding; you would surely look to deep learning techniques, which in a sense were created for exactly this kind of task since they began with the imitation of neural networks.  A basic approach that might be achievable relatively early would be to record stereotyped muscular routines and then play them back like extended reflexes, though that wouldn’t work well for many basic tasks like walking that require a lot of feedback.

Could we venture further and control someone’s own attitudes and thoughts? Again the unambitious and destructive techniques are the easiest; making someone deranged or deluded is probably the most straightforward mental change to bring about. Giving them bad dreams seems likely to be a feasible option.  Perhaps we could simulate drunkenness – or turn it off; I suspect that would need massive but non-specific intervention, so it might be relatively achievable. Simulation of the effects of other drugs might be viable on similar terms, whether to impair performance, enhance it, or purely for pleasure. We might perhaps be able to stimulate paranoia, exhilaration, religiosity or depression, albeit without fully predictable results.

Indirect manipulation is the next easiest option for mind control; we might arrange, for example, to have a flood of good feelings or fear and aversion every time particular political candidates are seen; it wouldn’t force the subject to vote a particular way but it might be heavily influential. I’m not sure it’s a watertight technique as the human mind seems easily able to hold contradictory attitudes and sentiments, and widespread empirical evidence suggests many people must be able to go on voting for someone who appears repellent.

Could we, finally, take over the person themselves, feeding in whatever thoughts we chose? I rather doubt that this is ever going to be possible. True, our mental selves must ultimately arise from the firing of neurons, and ex hypothesi we can control all those neurons; but the chances are there is no universal encoding of thoughts; we may not even think the same thought with the same neurons a second time around. The fallback of recording and playing back the activity of a broad swathe of brain tissue might work up to a point if you could be sure that you had included the relevant bits of neural activity, but the results, even if successful, would be more like some kind of malign mental episode than a smooth take over of the personality. Easier, I suspect, to erase a person than control one in this strong sense. As Hamlet pointed out, knowing where the holes on a flute are doesn’t make you able to play a tune. I can hardly put it better than Shakespeare…

Why, look you now, how unworthy a thing you make of
me! You would play upon me; you would seem to know
my stops; you would pluck out the heart of my
mystery; you would sound me from my lowest note to
the top of my compass: and there is much music,
excellent voice, in this little organ; yet cannot
you make it speak. ‘Sblood, do you think I am
easier to be played on than a pipe? Call me what
instrument you will, though you can fret me, yet you
cannot play upon me.

We don’t know what we think, according to Alex Rosenberg in the NYT. It’s a piece of two halves, in my opinion; he starts with a pretty fair summary of the sceptical case. It has often been held that we have privileged knowledge of our own thoughts and feelings, and indeed of our own decisions; but the findings of Benjamin Libet about decisions being made before we are aware of them; the phenomenon of blindsight which shows we may go on having visual knowledge we’re not aware of; and many other cases where it can be shown that motives are confabulated and mental content is inaccessible to our conscious, reporting mind; these all go to show that things are much more complex than we might have thought, and that our thoughts are not, as it were, self-illuminating. Rosenberg plausibly suggests that we use on ourselves the kind of tools we use to work out what other people are thinking; but then he seems to make a radical leap to the conclusion that there is nothing else going on.

Our access to our own thoughts is just as indirect and fallible as our access to the thoughts of other people. We have no privileged access to our own minds. If our thoughts give the real meaning of our actions, our words, our lives, then we can’t ever be sure what we say or do, or for that matter, what we think or why we think it.

That seems to be going too far.  How could we ever play ‘I spy’ if we didn’t have any privileged access to private thoughts?

“I spy, with my little eye, something beginning with ‘c’”
“Is it ‘chair’?”
“I don’t know – is it?”

It’s more than possible that Rosenberg’s argument has suffered badly from editing (philosophical discussion, even in a newspaper piece, seems peculiarly information-dense; often you can’t lose much of it without damaging the content badly). But it looks as if he’s done what I think of as an ‘OMG bounce’; a kind of argumentative leap which crops up elsewhere. Sometimes we experience illusions:  OMG, our senses never tell us anything about the real world at all! There are problems with the justification of true belief: OMG there is no such thing as knowledge! Or in this case: sometimes we’re wrong about why we did things: OMG, we have no direct access to our own thoughts!

There are in fact several different reasons why we might claim that our thoughts about our thoughts are immune to error. In the game of ‘I spy’, my nominating ‘chair’ just makes it my choice; the content of my thought is established by a kind of fiat. In the case of a pain in my toe, I might argue I can’t be wrong because a pain can’t be false: it has no propositional content, it just is. Or I might argue that certain of my thoughts are unmediated; there’s no gap between them and me where error could creep in, the way it creeps in during the process of interpreting sensory impressions.

Still, it’s undeniable that in some cases we can be shown to adopt false rationales for our behaviour; sometimes we think we know why we said something, but we don’t. I think by contrast I have occasionally, when very tired, had the experience of hearing coherent and broadly relevant speech come out of my own mouth without it seeming to come from my conscious mind at all. Contemplating this kind of thing does undoubtedly promote scepticism, but what it ought to promote is a keener awareness of the complexity of human mental experience: many layered, explicit to greater or lesser degrees, partly attended to, partly in a sort of half-light of awareness… There seem to be unconscious impulses, conscious but inexplicit thought; definite thought (which may even be in recordable words); self-conscious thought of the kind where we are aware of thinking while we think… and that is at best the broadest outline of some of the larger architecture.

All of this really needs a systematic and authoritative investigation. Of course, since Plato there have been models of the structure of the mind which separate conscious and unconscious, id, ego and superego: philosophers of mind have run up various theories, usually to suit their own needs of the moment; and modern neurology increasingly provides good clues about how various mental functions are hosted and performed. But a proper mainstream conception of the structure and phenomenology of thought itself seems sadly lacking to me. Is this an area where we could get funding for a major research effort; a Human Phenomenology Project?

It can hardly be doubted that there are things to discover. Recently we were told, if not quite for the first time, that a substantial minority of people have no mental images (although at once we notice that there even seem to be different ways of having mental images). A systematic investigation might reveal that just as we have four blood groups, there are four (or seven) different ways the human mind can work. What if it turned out that consciousness is not a single consistent phenomenon, but a family of four different ones, and that the four tribes have been talking past each other all this time…?