Only integrate

Picture:  Christof Koch. Consciousness, however, does not require language. Nor does it require memory. Or perception of the world, or any action in the world, or emotions, or even attention. So say Christof Koch and Giulio Tononi.

(Koch and Tononi? Sounds like an interesting collaboration – besides their own achievements, both have in the past co-authored with grand panjandrums of the field – Koch worked with Francis Crick and Tononi with Gerald Edelman. They have some new ideas here, however.)

I don’t think everyone would agree that consciousness can be stripped down quite as radically as Koch and Tononi suggest. I actually find it rather tricky to get a good intuitive grasp of exactly what it is that’s left when they’ve finished: it seems to be a sort of contentless core of subjectivity. Particular items on the list of exclusions also seem debatable. We know that some unfortunate people are unable to create new memories, for example, but even in their case, they retain knowledge of what’s going on long enough to have reasonable short-term conversations. Could consciousness exist at all if the contents of the mind slid away instantly? Or take action and perception; some would argue that consciousness requires embodiment in a dynamic world; some indeed would argue that consciousness is irreducibly social in character. Again we know that some unlucky people have been isolated from the world by the loss of their senses and their ability to move, without thereby losing consciousness: but these are all people who had perception and action to begin with. Actually Koch and Tononi allow for that point, saying that whether consciousness requires us to have had certain faculties at some stage is another question; they merely assert that ongoing present consciousness doesn’t require them.

Picture: Giulio Tononi. The most surprising of their denials is the denial of attention – some people would come close to saying that consciousness was attention. If there were no perception, no memory, and no attention, one begins to wonder how consciousness could have any contents: they couldn’t arrive via the senses; they couldn’t be called up from past experience; and they couldn’t even be generated by concentrating on oneself or the surrounding nullity.

However, let’s not quibble. Consciousness has many meanings and covers a number of related but separable processes. The philosophical convention is that he who propounds the thesis gets to dictate the definitions, so if Koch and Tononi want to discuss a particularly minimal form, they’re fully entitled to do so.

What, then, do they offer as an alternative essence of consciousness? In two words, integrated information. A conscious mind will require many – hugely many – possible states; but, they argue, this is quite different from the case of a digital camera. A 1 megapixel camera has lots of possible states (one for every frame of every possible movie, they suggest – not sure that’s strictly true, since some frames will show identical pictures, not that that makes a real difference); but these states are made up of lots of photodiodes whose states are all fully independent. By contrast, states of the mind are indivisible; you can’t choose to see colours without shapes, or experience only the left side of your visual field.

This sounds like an argument that consciousness is not digital but irreducibly analogue, though Koch and Tononi don’t draw that conclusion explicitly, and I don’t think either of them plans to put away their computer simulations. We can, of course, draw a distinction between consciousness itself and the neural mechanisms that support it, so it could be that integrated, analogue conscious experience is generated by processes which are themselves digital, or at least digitizable.

At any rate, they call their theory the Integrated Information Theory (IIT); it suggests, they say, a way of assessing consciousness in a machine, a superior Turing Test (they seem to think the original has been overtaken by the performance of the chatterbot Alice – with due respect to Alice, that seems premature, though I have not personally tried out Alice’s Silver Edition). Their idea is that the candidate should be shown a picture and asked to provide a concise description; human beings will easily recognise, for example, that the picture shows an attempted robbery in a shop and many other details, whereas machines will struggle to provide even the most naive and physical descriptions of what is depicted. This would surely be an effective test, but why does integrated information provide the human facility at work here? It’s a key point about the theory that as well as elements of current experience being integrated into an indivisible whole, current experience is integrated with a whole range of background information: so the human mind can instantly bring into play a lot of knowledge the machine can’t access.

That’s all very well, but this concept of integration is beginning to look less like a simple mechanism and more like a magic trick. Koch and Tononi offer a measure of integrated information, which they call phi: it represents the reduction in uncertainty, expressed in bits, when a system enters a particular state. Interestingly, the idea is that high values of phi correspond with higher levels of consciousness on a continuous scale; so animals are somewhat less conscious than us on average; but it also must be possible to be more conscious than any human to date has ever been: in fact, there is no upper limit in theory to how conscious an entity might be. This is heady stuff which could easily lead on to theology if we’re not careful.

To illustrate this idea of the reduction of uncertainty, the authors compare our old friend the photodiode with a human being (suppose it to be a photodiode with only two states). When the lights go out, the photodiode eliminates the possibility of White, and achieves the certainty of Black. But for the human being, darkness eliminates a huge range of possibilities that you might have been seeing: a red screen, a blue one, a picture of the Statue of Liberty, a picture of your child’s piano recital, and so on. So the value of phi, the reduction in uncertainty, is far greater for human beings than for the photodiode. Of course measuring phi is not going to be straightforward; Koch and Tononi say that for the moment it can only be done for very simple systems.
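
This is not the full IIT formalism, but the photodiode comparison can be turned into a toy calculation. Here is a minimal sketch, assuming – as the informal definition above suggests – that the relevant quantity is just the entropy of the repertoire of equally likely states ruled out when the system settles into one of them; real phi also involves partitioning the system, which this ignores, and the function name and figures are invented for illustration.

    import math

    def uncertainty_reduction_bits(num_possible_states: int) -> float:
        """Bits of uncertainty removed when a system with equally likely
        states settles into exactly one of them. A toy stand-in for phi,
        not the real measure, which also requires finding the
        minimum-information partition of the system."""
        return math.log2(num_possible_states)

    # A two-state photodiode: seeing 'dark' rules out only 'light'.
    print(uncertainty_reduction_bits(2))      # 1.0 bit

    # A (hypothetical) viewer who could have been seeing any one of a
    # million distinguishable scenes: 'dark' rules them all out at once.
    print(uncertainty_reduction_bits(10**6))  # about 19.9 bits

On this crude reckoning the human score dwarfs the photodiode’s simply because the repertoire of states being ruled out is so much larger – which is exactly the point the grains-of-sand worry in the next paragraph presses on.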

That’s fair enough, but one aspect of the argument seems problematic to me. The reduction of uncertainty on this argument seems to be, not vast, but infinite. When the lights went out in your competition with the photodiode, you could have been seeing a picture of one grain of sand; you could have been seeing a picture of two grains of sand, and so on for ever. So it seems the value of phi for all conscious states is infinite. Hang on, though; is even darkness that simple? Remembering that Koch and Tononi’s integration is not limited to current experience, the darkness might cause you to think of what it’s like when you’re about to fall asleep; an unexpected hiatus in the cinema eighteen months ago; a black cat in a coal cellar at midnight. In fact, it might seem like the darkness which enshrouds a single grain of sand; the indefinably bulkier darkness which hides two grains of sand… So don’t both states – the one before the lights went out and the one after – involve an infinite amount of information, so that the value of phi for all states of consciousness is zero?

I think Koch and Tononi have succeeded in targeting some of the real fundamental problems of consciousness, and made a commendably bold and original attack on them. But they will need a very clear explanation of how their kind of integration works if IIT is really going to fly.

Language and Consciousness

Picture:  Jordan Zlatev. There is clearly a close relationship between consciousness and language. The ability to conduct a conversation is commonly taken as the litmus test of human-style consciousness for both computers and chimpanzees, for example. While the absence of language doesn’t prove the absence of consciousness – not all of our thoughts are in words – the lack of a linguistic capacity seems to close off certain kinds of explicit reflection which form an important part of human cognition. Someone who had no language at all might be conscious, but would they be conscious in quite the same way as a normal, word-mongering human?

It might therefore seem that when Jordan Zlatev asserts the dependence of language on consciousness, he is saying something uncontroversial. In fact, he has both broader and more specific aims: he wants on the one hand to draw more attention to the relationship, and on the other to readjust our view of where the borders between conscious and unconscious processes lie.

It seems pretty clear that a lot of the work the brain does on language is unconscious. When I’m talking, I don’t, for example, have to think to myself about the grammar I’m using (unless perhaps I’m attempting a foreign language, or talking about some grammatical point). I don’t even know how my brain operates English grammar; it surely doesn’t use the kind of rules I was taught at school; perhaps in some way it puts together Chomskyan structures, or perhaps it has some altogether different approach which yields grammatical sentences without anything we would recognise as explicit grammatical rules. Whatever it does, the process by which sentences are formed is quite invisible to me, the core entity to whom those same sentences belong, and whose sentiments they communicate. It seems natural to suppose that the structure of our language is provided pre-consciously.

Zlatev, however, contends that the rules of language are social and normative; to apply them we have to understand a number of conventions about meanings and usage; and whatever view we may take of such conventions, their use requires a reflective knower (Zlatev picks up on a distinction set out by Honderich between affective, perceptual, and reflective consciousness; it’s the latter he is concerned with). To put it another way, operating the rules of language requires us to make judgements of a kind which only reflection can supply, and reflection of this kind deserves recognition as conscious. Zlatev is not asserting that the rules of grammar at work in our brain are consciously known, after all: he draws a distinction between accessibility and introspectability; he wants to say that the rules are known pre-theoretically, but not unconsciously.

Perhaps we could put Zlatev’s point a different way: if the rules of language were really unconscious, we should be incapable of choosing to speak ungrammatically, just as we are incapable of making our heart beat slower or our skin stop sweating by an act of will. Utterances which did not follow the rules would be incomprehensible to us. In fact, we can cheerfully utter malformed sentences, distinguish them from good ones and usually understand both. Deliberate transgressions of the rules are used for communicative or humorous effect (a rather Gricean point). While the theory may be hidden from introspection, the rules are accessible to conscious thought.

If the rules of language were unconscious, asks Zlatev, how would we account for the phenomenon of self-correction, in which we make a mistake, notice it, and produce an emended version? And how could it be that the form of our utterances is often structured to enhance the meaning and distribute the emphasis in the most helpful way? An unconscious sentence factory could never support our conscious intentions in such a nicely judged way. Zlatev also brings forward evidence from language acquisition studies to support his claim that unconscious mechanisms may support, but do not exhaust, the processes of language.

At times Zlatev seems to lean towards a kind of Brentano-ish view; language requires intentionality, and nothing but consciousness can provide it (alas, in a way which remains unexplained). Intriguingly, he says that he and others were deceived into accepting the unconsciousness of language production at an earlier stage by the allure of connectionism, whose mechanistic nature only gradually became clear. I think connectionists might feel this is a little unfair, and that Zlatev need not have given up on connectionist approaches to reflective judgement simply because they are ‘mechanistic’.

All in all, I think Zlatev offers some useful insights; his general point that a binary division between conscious and unconscious simply isn’t good enough is indeed a good one and well made. I wonder whether this is a point particular to the language faculty, however. Couldn’t I make some similar points about my tennis-playing faculty? Here too I rely on some unconscious mechanisms, and couldn’t tell you exactly which arm muscles I used in which way. Yet making my hand twist the racquet around and move it to the right place also seems to require some calculated, dare I say reflective, judgements and the way I do it is exquisitely conditioned by tactics and strategy which I devise and entertain at a fully self-conscious level.

Be that as it may, it’s bad news for the designers of translation software if Zlatev is right, since their systems will have to achieve real consciousness before they can be perfected.

Alien Consciousness

Picture: alien.

Picture: Blandula. I was reading somewhere about SETI and I was struck by the level of confidence the writer seemed to enjoy that we should be able, not only to recognise a signal from some remote aliens, but actually interpret it. It seems to me, on the contrary, that finding the signal is the ‘easy problem’ of alien communication. We might spend much longer trying to work out what they were saying than we did finding the message in the first place. In fact, it seems likely to me that we could never interpret such a signal at all.

Picture: Bitbucket. Well, I don’t think anyone underestimates the scope of the task, but you know it can hardly be impossible. For example, we send off a series of binary numbers; binary is so fundamental, yet so different from a random signal, that they would be bound to recognise it. The natural thing for them to do is echo the series back with another term added. Once we’ve got onto exchanging numbers, we send them, like say 640 and 480 over and over. If they’re sophisticated enough to send radio signals, they’re going to recognise we’re trying to send them the dimensions of a 2d array. Or we could go straight to 3D, whatever. Then we can send the bits for a simple image. We might do that Mickey Mouse sort of picture of a water molecule: odds are they’re water-based too, so they are bound to recognise it. We can get quite a conversation on chemistry going, then they can send us images of themselves, we can start matching streams of bits that mean words to streams of bits that mean objects, and we’ll be able to read and understand what they’re writing. OK, it’ll take a long time, granted, because each signal is going to take years to be delivered. But it’s eminently possible.
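
Bitbucket’s scheme is easy to make concrete. Here is a minimal sketch of the sort of message he has in mind – the dimensions announced first, then the picture as a flat stream of bits – with all of the framing invented purely for illustration.

    def encode_image_message(pixels, width, height, repeats=3):
        """Toy interstellar transmission: repeat the two dimensions (in
        binary, separated by long runs of zeros) so the receiver can spot
        them, then append the image as a flat bit string. The framing is
        invented here purely for illustration."""
        separator = '0' * 8
        header = (format(width, 'b') + separator + format(height, 'b') + separator) * repeats
        body = ''.join('1' if p else '0' for p in pixels)
        return header + body

    # A 4x3 'image' with a single lit pixel, standing in for 640x480.
    message = encode_image_message([0, 0, 0, 0,
                                    0, 1, 0, 0,
                                    0, 0, 0, 0], width=4, height=3)
    print(message)

Whether the recipients would parse it as intended is, of course, precisely what Blandula goes on to doubt.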

Picture: Blandula. The thing is, you don’t realise how many assumptions you’re making. Maybe they never thought of atoms as billiard balls, and to them methane is more important than water anyway. Maybe they don’t have vision and the idea of using arrays of bits to represent images makes no sense to them. Maybe they have computers that actually run on hexadecimal, or maybe digital computers never happened for them at all, because they discovered an analogue computing machine which is far better but unknown to us, so they’ve never heard of binary. But these are all trivial points. What makes you think their consciousness is in any way compatible with ours? There must be an arbitrarily large number of different ways of mapping reality; chances are, their way is just different to ours and radically untranslatable.

Picture: Bitbucket. Every human brain is wired up uniquely, but we manage to communicate. Seriously, if there are aliens out there they are the products of biological evolution – no, they are, that’s not just an assumption, it’s a reasonable deduction – and that alone guarantees we can communicate with them, just as we can communicate with other species on Earth up to the full extent of their capability. The thing is, they may be mapping reality differently, but it’s essentially the same reality they’re mapping, and that means the two maps have to correspond. I might meet some people who use Urgh, Slarm, and Furp instead of North and South; they might use cubits for Urgh distances, rods for Slarm ones, and holy hibbles for the magic third Furpian co-ordinate, but at the end of the day I can translate their map into one of mine. Because they are biological organisms, they’re going to know about all those common fundamentals: death and birth, hunger, strife, growth and ageing, food and reproduction, kinship, travel, rest; and because we know they have technology they’re going to know about mining and making things, electricity, machines – and communication. And that guarantees they’ll be able and willing to communicate with us.

Picture: Blandula. You see, even on Earth, even with our fellow humans, it doesn’t work. Look at all the ancient inscriptions we have no clue about. Ventris managed to crack Linear B only because it turned out to be a language he already knew; Linear A, forget about it. Or Meroitic. We have reams and reams of Meroitic writing, and the frustrating thing is, it uses Egyptian characters, so we actually know what it sounded like. You can stand in front of some long carved Meroitic screed and read it out, knowing it sounds more or less the way it was meant to; but we have no idea whatever what it means: and we probably never will.

Picture: Bitbucket. What you’re missing there is that the Cretans and the Meroitic people are never going to respond to our signals. The dialogue is half the point here: if we never get an answer, then obviously we’re not going to communicate.

Though I like to think that even if we picked up the TV signal of some distant race which had actually perished long before, we’d still have a chance of working it out because there’d just be more of it, and a more varied sample than your Egyptian hieroglyphs which let’s face it are probably all Royal memorials or something.

Picture: Blandula. Look, you argue that consciousness is a product of evolution. But what survival advantage does phenomenal experience give us – how do qualia help us survive? They don’t, because zombies could survive every bit as well without them. So why have we got them? It seems likely to me that we somehow got endowed with a faculty we don’t even begin to understand. One by-product of this faculty was the ability to stand back and deliberate on our behaviour in a more detached way; another happened to be qualia, just an incidental free gift.

Picture: Bitbucket. So you’re saying aliens might be philosophical zombies? They might have no real inner experience?

Somehow I knew it would come back to qualia eventually.

Picture: Blandula. More than that – I’m not just saying they might be zombies, but that’s one possibility, isn’t it?

Incidentally, I wonder what moral duty we would owe to creatures that had no real feelings. Would it matter how we behaved towards them…? Curious thought.

Picture: Bitbucket. Even zombies have rights, surely? Whatever the ethical theory, we can never be sure whether they actually are zombies, so you’d have to treat them as if they had feelings, just to be on the safe side.

Anyway, to be honest, I don’t think I like where you seem to be going with this.

Picture: Blandula. No, well the point is not that they might be zombies, but that instead of our faculty of consciousness, they might have a completely different one which nevertheless served the same purpose from an evolutionary point of view and had similar survival value. We’re the first animals on Earth to evolve a consciousness: it’s as if we were primitive sea creatures and have just developed a water squirt facility, making us able to move about in a way no other creature can yet do. But these aliens might have fins. They might have legs. You sit there blandly assuming that any mobile creature will want to match squirts with us, but it ain’t necessarily so.

Picture: Bitbucket. No, your analogy is incorrectly applied. I’m not saying they’d want to match squirts; I’m saying we’d be able to follow each other, or have races, or generally share the commonalities of motion irrespective of the different kit we might be using to achieve that mobility.

Picture: Blandula. Your problem here is really that you can’t imagine how an alien could have something that delivered the cognitive benefits of consciousness without being consciousness itself. Of course you can’t; that’s just another aspect of the problem; our consciousness conditions our view of reality so strongly that we’re not capable of realising its limitations.

Picture: Bitbucket. Look, if we launch a missile at these people, they’ll send us a signal, and that signal will mean Stop. It’s those cognitive benefits you dismiss so lightly that I rest my case on. For practical purposes, about practical issues, we’ll be able to communicate; if they have extra special 3d rainbow qualia, or none, or some kind of ineffable quidlia that we can never comprehend – I don’t care. You might have alien quidlia buzzing round your head for all I know; that possibility doesn’t stop us communicating. Up to a point.

Picture: Blandula. You know that Wittgenstein said that if a lion could speak, we couldn’t understand what he was saying?

Picture: Bitbucket. Yes, I do know; just one of many occasions when he was talking balls. In fairness to old Wittless he also said ‘If I see someone writhing in pain with evident cause I do not think; all the same, his feelings are hidden from me.’

And the same goes for writhing aliens.

The Schemata of Ouroboros

Picture: Ouroboros. Knud Thomsen has put a draft paper (pdf) describing his ‘Ouroboros Model’ – an architecture for cognitive agents – online. It’s a resonant title at least – as you may know, Ouroboros is the ‘tail-eater’; the mythical or symbolic serpent which swallows its own tail, and in alchemy and elsewhere symbolises circularity and self-reference.

We should expect some deep and esoteric revelations, then; but in fact Thomsen’s model seems quite practical and unmystical. At its base are schemata, which are learnt patterns of neuron firing, but they are also evidently to be understood as embodying scripts or patterns of expectations. I take them to be somewhat similar to Roger Schank’s scripts, or Marvin Minsky’s frames. Thomsen gives the example of a lady in a fur coat; when such a person enters our mind, the relevant schema is triggered and suggests various other details – that the lady will have shoes (for that matter, indeed, that she will have feet). The schemata are flexible and can be combined to build up more complex structures.

In fact, although he doesn’t put it quite like this, Thomsen’s model assumes that each mind has in effect a single grand overall schema unrolling within it. As new schemata are triggered by sensory input, they are tested for compatibility with the others in the current grand structure through a process Thomsen calls consumption analysis. Thomsen sees this as a two-stage process – acquisition, evaluation, acquisition, evaluation. He seems to believe in an actual chronological cycle which starts and stops, but it seems to me more plausible to see the different phases as proceeding concurrently for different schemata in a multi-threaded kind of way.
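
To make that concrete – and this is my own toy reading of the cycle, not Thomsen’s formalism – the acquisition/evaluation loop might be sketched roughly like this, with schemata proposing expected features and consumption analysis scoring the input against them; all the names and numbers are invented.

    def consumption_analysis(expected_features, observed_features):
        """Toy score: the fraction of a schema's expected features that the
        current input actually satisfies. Purely illustrative."""
        expected = set(expected_features)
        return len(expected & set(observed_features)) / len(expected)

    def ouroboros_cycle(schemata, observed_features, threshold=0.6):
        """One acquisition/evaluation pass: pick the schema whose expectations
        best 'consume' the input; a poor best fit is the kind of mismatch that,
        on this reading, would summon attention."""
        scores = {name: consumption_analysis(features, observed_features)
                  for name, features in schemata.items()}
        best = max(scores, key=scores.get)
        return best, scores[best], scores[best] < threshold  # (schema, fit, mismatch?)

    schemata = {'lady in a fur coat': ['fur coat', 'shoes', 'feet', 'handbag'],
                'robbery in a shop': ['shop', 'raised hands', 'masked figure']}
    print(ouroboros_cycle(schemata, ['fur coat', 'shoes', 'feet']))

Run concurrently for many schemata at once, this is roughly the multi-threaded picture suggested above, rather than a strict start-stop cycle.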

Thomsen suggests this model can usefully account for a number of features of normal cognitive processes. Attention, he suggests, is directed to areas where there’s a mismatch between inputs and current schemata. It’s certainly true that attention can be triggered by unexpected elements in our surroundings, but this isn’t a particularly striking discovery, or one that only a cyclical model can account for – and it doesn’t explain voluntary direction of attention, nor how attention actually works. Thomsen also suggests that emotions might primarily be feedback from the consumption analysis process. The idea seems to be that when things are matching up nicely we get a positive buzz, and when there are problems negative emotions are triggered. This doesn’t seem appealing. For one thing, positive and negative reinforcement is at best only the basis for the far more complex business of emotional reactions; but more fatally it doesn’t take much reflection to realise that surprises can be pleasant and predictability tediously painful.

More plausibly, Thomsen claims his structure lends itself to certain kinds of problem solving and learning, and to the explanation of certain weaknesses in human cognition such as priming and masking, where previous inputs condition our handling of new ones. He also suggests that sleep fits his model as a time of clearing out ‘leftovers’ and tidying data. The snag with all these claims is that while the Ouroboros model does seem compatible with the features described, so are many other possible models; we don’t seem to be left with any compelling case for adopting the serpent rather than some other pattern-matching theory. The claim that minds have expectations and match their inputs against these expectations is not new enough to be particularly interesting: the case that they do it through a particular kind of circular process is not really made out.

What about consciousness itself? Thomsen sees it as a higher-order process – self-awareness or cognition about cognition. He suggests that higher order personality activation (HOPA) might occur when the cycle is running so well there is, as it were, a surfeit of resources; equally it might arise when a particular concentration of resources comes together to deal with a particularly bad mismatch. In between the two, when things are running well but not flawlessly, we drift on through life semi-automatically. In itself that has some appeal – I’m a regular semi-automatic drifter myself – but as before it’s hard to see why we can’t have a higher-order theory of consciousness – if we want one – without invoking Thomsen’s specific cyclical architecture.

In short it seems to me Thomsen has given us no great reason to think his architecture is optimal or especially well-supported by the evidence; however, it sounds at least a reasonable possibility. In fact, he tells us that certain aspects of his system have already worked well in real-life AI applications.

Unfortunately, I see a bigger problem. As I mentioned, the idea of scripts is not at all new. In earlier research they have delivered very good results when confined to a limited domain – ie when dealing with a smallish set of objects in a context which can be exhaustively described. Where they have never really succeeded to date is in producing the kind of general common sense which is characteristic of human cognition; the ability to go on making good decisions in changed or unprecedented circumstances, or in the seething ungraspable complexity of the real world. I see no reason to think that the schemata of Ouroboros are likely to prove any better at addressing these challenges.

Update 16 July 2008: Knud Thomsen has very kindly sent the following response.

I feel honored by the inclusion of my tiny draft into this site!

One of my points actually is that any account of mind and consciousness ought to be “quite practical and un-mystical”. The second one, of course is, that ONE single grand overall PROCESS – account can do it all (rather than three stages: acquisition, evaluation, action…). This actually is the main argument in favor of the Ouroboros Model: it has something non-trivial to say about a vast variety of topics, which commonly are each addressed in separate models, which do not know anything about each other, not to mention that they together should form a coherent whole. The Ouroboros Model is meant to sketch an “all-encompassing” picture in a self-consistent, self-organizing and self-reflexive way.

Proposals for technical details of neuronal implementation certainly explode the frame of this short comment; no doubt, reentrant activity and biases in thalamo-cortical loops will play a role. Sure, emotional details and complexities are determined by the content of the situation; nevertheless, a most suitable underlying base could be formed by the feedback on how expectations come true. Previously established emotional tags would be one of the considered features and thus part of any evaluation, – “inherited”, partly already from long-ago reptile ancestors.

The Ouroboros Model offers a simple mechanism telling to what extent old scripts are applicable, what context seems the most adequate and when schemata have to be adapted flexibly and in what direction.

Consciousness as Hardware?

Picture: hardware consciousness. Jan-Markus Schwindt put up a complex and curious argument against physicalism in a recent JCS: one of those discussions whose course and conclusion seem wildly wrong-headed, but which provoke interesting reflections along the way.

His first point, if I followed him correctly, was that physics basically comes down to maths. In the early days, scientists thought of themselves as doing sums about some basic stuff which was out there in the world; as the sums and the theories behind them got more sophisticated and comprehensive, there was less and less need for the stuff-in-itself; the equations were really providing the whole show. In the end, the inescapable conclusion is that the world described by physics is a mathematical structure.

Schwindt quotes several scientists along these lines, such as Eddington saying ‘We have chased the solid substance from the continuous liquid to the atom, from the atom to the electron, and there we have lost it’. There’s no doubt, of course, that maths is indeed the language, or perhaps the soul, of physics. Hooke claimed that Newton had stolen the idea of the inverse square law of gravity from him; but Christopher Wren gently remarked that he too had thought of the idea independently; so had many others he could name; the point was that only Newton could do the maths which turned it into a verifiable theory. These days, of course, it’s notoriously hard to describe the ‘stuff’ that modern quantum physics is about in anything other than mathematical terms, and one respectable point of view is that there’s no point in trying.

But I don’t think physicalists are necessarily committed to the view that the world is mathematical to the core. In fact mathematicians have a noticeable tendency to be Platonists, believing that numbers and mathematical entities have a real existence in a world of their own. This is a form of dualism, and hence in opposition to physicalism, but very different to Schwindt’s point of view – instead of merging the physical into the mathematical, it keeps the precious formulae safe and unsullied in an eternal world of their own.

Moreover, at the risk of sounding like a bad philosopher, it all depends what you mean by mathematics. Numbers and formulae can be used in more than one way. n=1 is a testable statement in arithmetic; in many programming languages it’s true by fiat – it makes the variable n equal one. In Searlian terms, the direction of fit is different, or to put it less technically, in arithmetic it’s a statement, in programming an instruction. Now the kind of mathematical laws which hypothetically sustain the world would have to have the same direction of fit as the programming instruction: that is, it would be up to the world to resemble the law rather than up to the law to resemble the world. In fact they would require some more advanced metaphysical power than any mere high-level programming language; the sort of force which Adam’s words are supposed to have had in the garden of Eden, where his naming of a beast determined its very essence. If we could explain how apparently arbitrary laws of physics come to have this power over reality, it would be an unprecedented advance in metaphysics and the philosophy of science.
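
To spell the contrast out in code (Python here, purely as a convenient example):

    n = 1          # an instruction: make the world (the variable n) fit the formula
    print(n == 1)  # a statement: test whether the formula fits the world -> True

    n = 2
    print(n == 1)  # the statement can now be false; the assignment never could be

The hypothetical world-sustaining laws would have to work like the first line, not the second.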

So when Schwindt tells us that the world is a mathematical structure, he may be right if he means mathematical in the sense of embodying a set of mysteriously potent ontological commands: but that’s just as complex and difficult as a world where the laws of physics are mere descriptions and the driving force of ontology is hidden away in ineffable stuff-in-itself. If, on the other hand, he means the world is reducible to a mathematical description, I think he’s mistaken – or at the least, he needs another argument to show how it could be.

For a second key point, Schwindt draws on an argument against functionalism proposed by Zuboff. This says that from a functionalist point of view, consciousness just consists of the firing of a certain pattern of neurons. But look, we could take out one neuron, put it in a dish somewhere, and then hand-feed the outputs and inputs between it and the rest of the brain in such a way that it functioned normally. Surely the relevant state of consciousness would be unaffected? If you’re happy with that, then you must accept that we could do the same with all the neurons in a brain. But if that’s so, couldn’t we put together a set of neurons which actually belong to different brains, but which together instantiate the right pattern for a particular experience, even though they don’t belong to any physically coherent individual? But surely this is absurd – so functionalism must be false.

Schwindt accepts this argument but gives it a different spin: for there to be patterns, he suggests, including patterns of neuron firing, there has to be an interpreter: someone who sees them as patterns. Since the world as described by physics is a mathematical structure, and since no mere mathematical structure can be an interpreter, consciousness requires something over and above the normal physical account. In fact, he proposes that consciousness is non-physical, and in effect the hardware which (if I’ve got this right) generates experience out of the mathematical structure of the physical world. This reasoning seems to lead naturally towards dualism, but Schwindt wants to stop just short of that: there seem to be two alternative reductions of the world on offer, he says, reasonably enough: the physicalist one and another which reduces everything to direct experience; but ultimately we can hope that there is some as yet unknown level on which monism is reasserted.

Alas, I think the Zuboffian argument (I haven’t read the original, so I rely on Schwindt’s account) fails because credible forms of functionalism don’t just require a pattern of neuron firing to exist: they require a relevant pattern of causal relations between neuron firing. As soon as hand-simulation enters the picture, all bets are off.

I think one reason Schwindt takes such an unlikely route is that he goes somewhat overboard in asserting the primacy of direct experience: he’s quite right that in one sense it’s all we’ve got; but like the idealists, he is in danger of mistaking an epistemological for an ontological primacy. It won’t come as any surprise, in any case, that I don’t really see how consciousness can be likened to hardware. Consciousness is content; arguably that’s all it is: whereas hardware is what gets left behind when you take the content out of a brain (or a computer). Isn’t it? Schwindt goes further and within consciousness has qualia playing a role analogous to a monitor, but I found that idea more confusing than illuminating.

Could consciousness be hardware? I can’t reject the idea: but that’s only because I don’t think I can properly grasp it to begin with.

Unconscious decisions

Picture: hourglasses. Benjamin Libet’s experimental finding that decisions had in effect already been made before the conscious mind became aware of making them is both famous and controversial; now new research (published in a ‘Brief Communication’ in Nature Neuroscience by Chun Siong Soon, Marcel Brass, Hans-Jochen Heinze and John-Dylan Haynes) goes beyond it. Whereas the delay between decision and awareness detected by Libet lasted 500 milliseconds, the new research seems to show that decisions can be predicted up to ten seconds before the deciders are aware of having made up their minds.

The breakthrough here, like so many recent advances, comes from the use of fMRI scanning. Libet could only measure electrical activity, and had to use the Readiness Potential (RP) as an indicator that a decision had been made: the new research can go much further, mapping activity in a number of brain regions and applying statistical pattern recognition techniques to see whether any of it could be used to predict the subject’s decision.

The design of the experiments varied slightly from Libet’s original ingenious set-up. This time a series of letters was displayed on a screen. The subjects were asked to press either a right or a left button at a moment chosen by them; they then identified the letter which had been displayed at the moment they felt themselves deciding to press either right or left. In the main series of experiments, no time constraints were imposed.

Two regions proved to show activity which predicted the subject’s choice: primary motor cortex and the Supplementary Motor Area (SMA) – the SMA is the source of the RP which Libet used in his research. In the SMA the researchers found activity which predicted the decision some five seconds before the moment of conscious awareness, but it was elsewhere that the earliest signs appeared – in the frontopolar cortex and the precuneus. Here the subject’s decision could be seen as much as seven seconds ahead of time: allowing for the delay in the fMRI response, this tots up to a real figure of ten seconds. One contrast with earlier findings is that there was no activation of the dorsolateral prefrontal cortex: the researchers hypothesise that this was because the design of their experiment did not require subjects to remember previous button presses. Another difference, of course, is the huge delay of five seconds in the SMA, which one would have expected to be comparable with the findings of earlier, RP-based research. Here the suggested explanation is that in the new experiments the timing of button presses was wholly unconstrained, so that there was more time for activity to build up. The time delay in the fMRI study apparently means that the possibility of additional activity within the last few hundred milliseconds cannot be excluded: I conjecture that this offers another possible explanation – the RP studies may have been picking up a late spike which the fMRI couldn’t detect.
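
The ‘statistical pattern recognition techniques’ here amount to training a decoder on patterns of voxel activity recorded well before the reported moment of decision, then testing whether it predicts the eventual left/right press better than chance. I haven’t tried to reproduce the authors’ actual pipeline; the following is just a generic sketch of that kind of decoding, with invented stand-in data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Invented stand-in data: one row of voxel activity per trial, sampled
    # several seconds before the reported decision, plus the eventual
    # left/right button press (0/1) for that trial.
    n_trials, n_voxels = 120, 50
    voxel_patterns = rng.normal(size=(n_trials, n_voxels))
    choices = rng.integers(0, 2, size=n_trials)
    voxel_patterns[choices == 1, :5] += 0.8   # plant a weak predictive signal

    # If a simple linear decoder beats chance on held-out trials, the choice
    # was statistically foreshadowed in this early activity.
    decoder = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(decoder, voxel_patterns, choices, cv=5).mean()
    print(f"decoding accuracy: {accuracy:.2f} (chance is about 0.50)")

Above-chance but modest accuracy is all the argument needs; the claim is about prediction, not infallible foreknowledge.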

The experimenters also ran a series of experiments where the subject chose left or right at a pre-determined time: this does not seem to have shortened the delays, but it showed up a difference between the activation in the frontopolar cortex and the precuneus: briefly, it looks as if the former peaks at the earliest stage, with the precuneus ‘storing’ the decision through more continuous activation.

What is the significance of these new findings? The researchers suggest the results do three things: they show that the delay is not confined to areas which are closely associated with motor activity, but begins in ‘higher’ areas; they demonstrate clearly that the activity relates to identifiable decisions, not just general preparation; and they rule out one of the main lines of attack on Libet’s findings, namely that the small delay observed is a result of mistiming, error, or misunderstanding of the chronology. That seems correct – a variety of arguments of differing degrees of subtlety have been launched against the timings of Libet’s original work. Although Libet himself was scrupulous about demonstrating solid reasons for his conclusions, it always seemed that a delay of a few hundred milliseconds might perhaps be attributable to some sort of error in the book-keeping, especially since timing a decision is obviously a tricky business. A delay of ten seconds is altogether harder to explain away.

However, it seems to me that while the new results close off one line of attack, they reinforce another – the claim that these experiments do not represent normal decision making. We do not typically make random decisions at a random moment of our choosing, and it can therefore fairly be argued that the research has narrower implications than might appear, or even that they are merely a strange by-product of the peculiar mental processes the subjects were asked to undertake. While the delay was restricted to half a second, it was intuitively believable that all our normal decisions were subject to a similar time-lag – surprising, but believable. A delay of ten seconds in normal conscious thought is not credible at all; it’s easy to think of cases where an unexpected contingency arises and we act on it thoughtfully and consciously within much shorter periods than that.

The researchers might well bite the bullet so far as that goes, accepting that their results show only that the delay can be as long as ten seconds, not that it invariably is. Libet himself, had he lived to see these results, might perhaps have been tempted to elaborate his idea of ‘free won’t’ – that while decisions build up in our brains for a period before we are aware of them, the conscious mind retains a kind of veto at the last moment.

What would be best of all, of course, is further research into decisions made in more real-life circumstances, though devising a way in which decisions can be identified and timed accurately in such circumstances is something of a challenge.

In the meantime, is this another blow to the idea of free will generally? The research will certainly hearten hard determinists, but personally I remain a compatibilist. I think making a decision and becoming aware of having made that decision are two different things, and I have no deep problem with the idea that they may occur at different times. The delay between decision and awareness does not mean the decision wasn’t ours, any more than the short delay before we hear our own voice means we didn’t intend what we said. Others, I know, will feel that this relegates consciousness to the status of an epiphenomenon.

More not the same

Picture: brain not computer. Chris Chatham has gamely picked up on my remarks about his interesting earlier piece, which set out a number of significant differences between brains and computers. We are, I think, somewhere between 90 and 100 percent in agreement about most of this, but Chris has come back with a defence of some of the things I questioned. So it seems only right that I should respond in turn and acknowledge some overstatement on my part.

Chris points out that “processing speed” is a well-established psychometric construct. I must concede that this is true: the term is used to cover various sound and useful measures of speed of recognition and the like: so I was too sweeping when I said that ‘processing speed’ had no useful application to the brain. That said, this kind of measurement of humans is really a measurement of performance rather than directly of the speed of internal working, as it would be when applied to computers. Chris also mentions some other speed constraints in the brain – things like rate of firing of neurons, speed of transmission along nerves – which are closer to what ‘processing speed’ means in computers (though not that close, as he was saying in the first place). In passing, I wonder how much connection there is between the two kinds of speed constraint in humans? The speed of performance of a PC is strongly affected by its clock speed; but do variations in rates of neuron firing have a big influence on human performance? It seems intuitively plausible that in older people neurons might take slightly longer to get ready to fire again, and that this might make some contribution to longer reaction times and the like (I don’t know of any research on the subject), but otherwise I suspect differences in performance speed between individual humans arise from other factors.

In a nutshell, Chris is right when he says that Peter is probably uncomfortable with equating “sparse distributed representation” with “analog”. To me it looks like a whole nother thing from what used to go on in analog computers, where a particular value might be represented by a particular level of current. The whole topic of mental representation is a scary one for me in any case: if Chris wanted a twelfth difference to add to his list, I think he could add that computers don’t really do representation. That may seem an odd thing to say about machines that are all about manipulating symbols; but nothing in a computer represents anything except in so far as a human or humans have deemed or designed it to represent something. Whereas the human brain somehow succeeds in representing things to itself, and then to other humans, and manages to confer representational qualities on noises, marks on paper – and computers. This surely remains one of the bogglingly strange abilities of the mind, and it’s unlikely the computer analogy is going to help much in understanding it.

I do accept that in some intuitive sense the brain can be described as ‘massively parallel’ – people who so describe it are trying to put their finger on a characteristic of the brain which is real and important. But the phrase is surely drawn from massively parallel computing, which really isn’t very brain-like. ‘Parallel’ is a good way of describing how different bits of a process can be shepherded in an orderly manner through different CPUs or computers and then reintegrated; in the brain, it looks more as if the processes are going off in all directions, constantly interfering with each other, and reaching no definite conclusion. How all this results in an ordered and integrated progression of conscious experience is of course another notorious boggler, which we may solve if we can get a better grasp of how this ‘massively parallel’ – I’d rather say ‘luxuriantly non-linear’ – way of operating works. My fear is that use of the phrase ‘massively parallel’ risks deluding us into thinking we’ve got the gist already.

Whatever the answer there, Chris’s insightful remarks and the links he provided have certainly improved my grasp of the gist of things in a number of areas, for which I’m very grateful.

Not the same

Picture: brain not computer. Chris Chatham has a nice summary of ten key differences between brains and computers which is well worth a read. Briefly, the list is:

  • Brains are analogue; computers are digital
  • The brain uses content-addressable memory
  • The brain is a massively parallel machine; computers are modular and serial
  • Processing speed is not fixed in the brain; there is no system clock
  • Short-term memory is not like RAM
  • No hardware/software distinction can be made with respect to the brain or mind
  • Synapses are far more complex than electrical logic gates
  • Unlike computers, processing and memory are performed by the same components in the brain
  • The brain is a self-organizing system
  • Brains have bodies
  • The brain is much, much bigger than any [current] computer

Actually eleven differences – there’s a bonus one in there.

Hard to argue with most of these, and there are some points among them which are well worth making. There is always a danger, when comparing the capacities of brains and computers, of assuming a similarity even when trying to point up the contrast. There have, for example, been many attempts over the years to estimate the storage capacity of the brain, or of memory, always concluding that it is huge; but a figure in megabytes doesn’t really make much sense. Asking how many bytes there are in the memory is like asking how many pixels Leonardo needed to do the Mona Lisa: it’s not like that. Chris generally steers just clear of this danger, although I’d be more inclined to say, for example, that the concept of processing speed has no useful application in the brain rather than that it isn’t fixed.

I wonder a bit about some of the positive assertions he makes. Are brains analogue? Granted they’re not digital, at least not in the straightforward way that a digital computer is, but unless we take ‘analogue’ as a synonym for ‘non-digital’ it’s not really clear to me. I take digital and analogue to be two different ways of representing real-world quantities; I don’t think we really know exactly how the brain represents things at the moment. It’s possible that when we do know, the digital/analogue distinction may seem to be beside the point.

And are brains ‘massively parallel’? It’s a popular phrase, but one of the older bits of this site long ago looked at a few reasons why the resemblance to massively parallel computing seems slight. In fairness, when people make this assertion they aren’t really saying that the brain is like a parallel processing set-up; they’re trying to describe a quality of the brain for which there is no good word; ie that things seem to be going on all over it at the same time. Chris is really warning against an excessively modular view. Once again we can agree that the brain is unlike computers – this time in the way they funnel data and instructions together in one or more processors; but the positive comparison is more problematic.

There’s some underlying scope for confusion, too, about what we mean when we assert that brains are not computers. We could just intend the trivial point that there isn’t actually a physical PC in our heads. More plausibly, we could mean that the brain doesn’t have the same general architecture and functional features as a computer (which I think is about what Chris means to do). We could mean that the brain doesn’t do stuff that we could easily recognise as computation, although it might be functionally similar in the sense of producing the same output-to-input relationships as a computed process. We might mean it doesn’t do stuff that we could easily recognise as computation, and that there appears to be no non-trivial way of deriving algorithms which would do the same thing. We might go one further and assert, as Roger Penrose does, that some of what the brain is doing is non-computable in the same sort of way as the tiling problem (though here again we have to ask whether it is really like that, since the question of computability seems to assume the brain is typically solving problems and seeking proofs). Finally, we could be saying that the brain has altogether mysterious properties of free will and phenomenal experience which go beyond anything in our current understanding of the physical world, and ergo far beyond anything a mere computer might possess.

A good thought-provoking discussion, in any case.

Four zomboids

Picture: zomboids. I’ve suggested previously that one of the important features of qualia (the redness of red, the smelliness of Gorgonzola, etc) is haecceity, thisness. When we experience redness, it’s not any Platonic kind of redness, it’s not an idea of redness, it’s that. When people say there is something it is like to smell that smell, we might reply, yes: that. That’s what it’s like. The difficulty of even talking about qualia is notorious, I’m afraid.

But it occurred to me that there was another problem, not so often addressed, which has the same characteristic: the problem of one’s own haecceity.

Both problems are ones that often occur to people of a philosophical turn of mind, even if they have no academic knowledge of the subject. People sometimes remark ‘For all we know, when I see the colour blue, you might be seeing what I see when I see red’, and similarly, thoughtful people are sometimes struck by puzzlement as to why they are themselves and not someone else. That particular problem has a trivial interpretation – if you weren’t you, it would be someone else’s problem – but there is a real and difficult issue related to the grand ultimate question of why there is anything at all, and why specifically this.

One of the standard arguments for qualia, of course, is the possibility of philosophical zombies, people who resemble us in every way except that they have no qualia. They talk and behave just like us, but there is nothing happening inside, no phenomenal experience. Qualophiles contend that the possibility of such zombies establishes qualia as real and additional to the normal physical story. Can we have personhood zombies, too? These would be people who, again, are to all appearances like us, but they don’t have any experience of being anyone, no sense of being a particular self. It seems at least a prima facie possibility.

That means that if we consider both qualia and selfhood, we have a range of four possible zomboid variants. Number one, not in fact a zombie at all, would have both qualia and the experience of selfhood – probably what the majority would consider the normal state of affairs. His opposite would have neither qualia nor a special sense of self, and that would be what a Dennettian sceptic takes to be the normal position. Number three has a phenomenal awareness of his own existence, but no qualia. This is what I would take to be the standard philosophical zombie. This is not really clear, of course: I assume the absence of discussion of the self in normal qualia discussion implies that zombies are normal in this respect, but others might not agree and some might even be inclined to regard the sense of self as just a specific example of a quale (there are, presumably, proprioceptive qualia, though I don’t think that’s what I’m talking about here), not really worthy of independent discussion.

It’s number four that really is a bit strange; he has qualia going on inside, but no him in him: phenomenal experience, but no apparent experiencer. Is this conceivable? I certainly have no knock-down argument, but my inclination is to say it isn’t: I’m inclined to say that all experiences have an experiencer just as surely as causes have effects. If that’s true, then it suggests the two cases of haecceity might be reducible to one: the thisness of your qualia is really just your own thisness as the experiencer (I hope you’re still with me here, reader). That in turn might mean we haven’t been looking in quite the right place for a proper account of qualia.

What if number four were conceivable? If qualia can exist in the absence of any subjective self-awareness, that suggests they’re not as tightly tied to people as we might have thought. That would surely give comfort to the panpsychists, who might be happy to think of qualia blossoming everywhere, in inanimate matter as well as in brains. I don’t find this an especially congenial perspective myself, but if it were true, we’d still want to look at personal thisness and how it makes the qualia of human beings different from the qualia of stones.

At this point I ought to have a blindingly original new idea of the metaphysics of the sense of self which would illuminate the whole question as never before. I’m afraid I’ll have to come back to you on that one.

Blue Brain – success?

Picture: Blue Brain. A bit of an update in Seed magazine on the Blue Brain project. This is the project that set out to simulate the brain by actually reproducing it in full biological detail down to the behaviour of individual neurons and beyond: with some success, it seems.

The idea of actually simulating a real brain in full has always seemed fantastically ambitious, of course, and in fact the immediate aim was more modest: to simulate one of the columnar structures in the cortex. This is still an undertaking of mind-boggling complexity: 10,000 neurons, 30 million synaptic connections, using 30 different kinds of ion channel. In fact it seems the ion channels were one of the problem areas; in order to get good enough information about them, the project apparently had to set up its own robotic research. I hope the findings of this particular bit of the project are being published in a peer-reviewed journal somewhere.

However, the remarkable thing is that it worked: eventually the simulated column was created and proved to behave in the same way as a real one. So is the way open for a full brain simulation? Not quite. Even setting aside the many structural challenges which surely remain to be unravelled (don’t they – the brain isn’t simply an agglomeration of neocortical columns?), Henry Markram, the project Director, estimates that an entire brain would require the processing of 500 petabytes of data, way beyond current feasibility. Markram believes that within ten years, the inexorable increase in computing power will make this a serious possibility. Maybe: it doesn’t pay to bet against Moore’s Law – but I can’t help noticing that there has been a big historical inflation in the estimated need, too. Markram now wants 500 petabytes: a single petabyte is 10^15 bytes; but in 1950 Turing thought that 10^15 bits represented the highest likely capacity of the brain, with about 10^9 enough for a machine which could pass the Turing Test. OK, perhaps not really a fair comparison, since Turing had nothing like Blue Brain in mind.
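
For what the comparison is worth, the arithmetic runs: 500 petabytes is 5 × 10^17 bytes, or 4 × 10^18 bits – roughly four thousand times Turing’s upper estimate of 10^15 bits, quite apart from the fact that the two numbers aren’t measuring the same thing.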

One criticism of the project asks how it judges its own success – or rather, suggests that the fact that it does judge its own success is a problem. If we had a full brain which could operate a humanoid robot and talk to us, there would be no doubt about the success of the project; but how do we judge whether a simulated neuronal column is actually working? The project team say that their conclusions are based on scrupulous comparisons with real biological brains, and no doubt that’s right; but there’s still a danger that the simulation merely confirms the expectations fed into it. They came up with an idea of how a column works; they built something that worked like that: and behold, it works how they think a column works.

There is also undeniably something a bit strange about the project. Before Blue Brain was ever thought of, proponents of AI would sometimes use the idea of a total simulation as a kind of thought-experiment to establish the merely neurological nature of the mind. OK, there might be all these mechanisms we didn’t understand, and emergent phenomena, and all the rest, but at the end of the day, what if we just simulated everything? Surely then you’d have to admit, we would have made an artificial mind – and what was to stop us, except practicality? It was an unexpected development back in 2005 when someone actually set about making this last-ditch argument a reality. It is unique; I can’t think of another case where someone set out to reproduce a biological process by building a fully detailed simulation, without having any theory of how the thing worked in principle.

This raises some peculiar possibilities. We might put together the full Blue Brain; it might be demonstrably performing like a human brain, controlling a robot which walked around and discussed philosophy with us, yet we still wouldn’t know how it did it. Or, worse perhaps, we might put it all together, see everything working perfectly at a neuronal level, and yet have our attached robot standing slack-jawed or rolling around in a fit, without our being able to tell why.

It may seem unfair to describe Markram and his colleagues as having no theory, but some of his remarks in the article suggest he may be one of those scientists who doesn’t really get the Hard Problem at all.

…It’s the transformation of those cells into experience that’s so hard. Still, Markram insists that it’s not impossible. The first step, he says, will be to decipher the connection between the sensations entering the robotic rat and the flickering voltages of its brain cells. Once that problem is solved—and that’s just a matter of massive correlation—the supercomputer should be able to reverse the process. It should be able to take its map of the cortex and generate a movie of experience, a first person view of reality rooted in the details of the brain…

It could be that Markram merely denies the existence of qualia, a perfectly respectable point of view; but it looks as if he hasn’t really grasped what they are, and that they can’t be captured on any kind of movie. Perhaps this outlook is a natural or even a necessary quality of someone running this kind of project. I suppose we’ll have to wait and see what happens when he gets his 500 petabyte capacity.