Electromagnetic

Picture: Susan Pockett.
We have become used to hearing that novel theories of quantum mechanics might somehow account for consciousness. A theory which invokes only common-or-garden electromagnetism seems refreshingly simple by comparison – almost naively simple at first sight. But such is the hypothesis put forward by Susan Pockett in her book ‘The Nature of Consciousness’. The hypothesis can be briefly stated – ‘consciousness is identical with certain spatiotemporal patterns in the electromagnetic field’.

The original inspiration for the theory apparently lies in the sense of cosmic oneness described in Hinduism and (rather an unexpected choice) Plato. Pointing out that the ancients had some handy ideas about atoms, Pockett suggests we could similarly look for ancestral wisdom as a starting point for an enquiry into consciousness. One possible clue, according to her, is that one can find in Hinduism (and Plato) the idea of a fundamental underlying unity, a universal consciousness. I don’t think anyone would have argued very much if she had claimed it was a common feature of mystical experience in virtually all religions, actually. This mystical element is clearly important to her, since it is brought back in to round off the book’s conclusion.

Bitbucket Oh great. Good to know we’re dealing with hard science again.

Well, actually we are. If you want scrupulous quotation of authoritative empirical evidence, you won’t be disappointed here. The theory may be mystically inspired, but being a practical New Zealander, brought up, as she says, to believe any problem can be solved with baler twine, Pockett quickly gives it a more concrete form. This universal consciousness, she asks – doesn’t that sound a bit like some kind of field? If consciousness were some kind of electromagnetic effect, it could be part of the universal electromagnetic field, and hence genuinely part of a cosmic unity. Now, you wouldn’t want it to lose its separate existence altogether, or even leak across to other people in the form of telepathy (at least, not obviously), but that’s OK. If we were talking about very low frequency fluctuations, in the 0-100 Hz range, they wouldn’t propagate very far and would remain pretty much isolated, barring the kind of jolt to the brain which would have significant physical effects in any case.
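To put a rough number on that (the gloss here is mine rather than Pockett’s): even at 100 Hz, the top of that range, the free-space wavelength is c/f = (3 × 10⁸ m/s) / (100 Hz) = 3,000 km, so a brain-sized source is a vanishingly small fraction of a wavelength across. A source like that radiates almost nothing; what it has is a quasi-static near field which, for a dipole-like source, falls off roughly as the cube of the distance – negligible, in other words, beyond a few centimetres.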

Is there any reason to think that the brain might work at this kind of frequency, and, if it does, that the relevant patterns vary in a way that matches the variation of conscious experience?

Before we can consider the match with conscious states, we need to be a bit clearer about the kind of consciousness we mean. Pockett is only talking about the kind of subjective, phenomenal consciousness we almost certainly share with animals – qualia, in fact, not articulate, decision-making, reflective consciousness. She identifies three different states of consciousness – waking, dreaming, and dreamless sleep – and suggests there might be grounds for accepting another – a half-way state between waking and sleep which she identifies with the kind of meditative trance mystics go in for. It appears that EEG evidence does indeed show various wave patterns at suitable frequencies for all three (or four) of these conscious states. So far so good.

The next step is to examine the evidence for covariance between these EEG patterns and conscious sensations. It’s characteristic of Pockett’s agreeably down-to-earth style that smell, for once, is tackled first and gets a fair share of the attention, along with hearing and vision. These chapters form an interesting survey in their own right, though the details don’t matter much from our point of view. Covariance turns out to be relatively easy to establish in simple organisms, but in human beings rather a lot of processing of the relevant data is required. Nevertheless, I think most people would be happy to accept the broad conclusion that there is, indeed, evidence for covariance in all three senses.

Covariance, of course, is not enough in itself to establish the truth of Pockett’s hypothesis. Most people, as she recognises, would be inclined to say that the relationship between conscious sensations and electromagnetic patterns is due to the fact that they both arise from the same patterns of neural activity. I don’t think Pockett has a knock-down argument against this one: essentially she thinks it is much easier to conceive of consciousness as ‘just being’ a shimmering electromagnetic field than ‘just being’ patterns of neural activity – she even seems to suggest that the latter view tends towards dualism – but I’m not convinced.

So why should we believe in the hypothesis? I think there are three main points. One is the ‘mystic unity’ argument. To find this appealing, you have to believe in some kind of cosmic unity of consciousness to begin with, of course. But even if you do, I’m not sure the theory backs it up as strongly as Pockett suggests. At one point she sums it up quite accurately by saying that the ‘spots’ of consciousness within the overall cosmic field are like the red spots on a spotted handkerchief, which, in a sense, confer the quality of redness on the handkerchief. But that doesn’t, except in a stretched or metaphorical sense, make it a red handkerchief or unite the spots in a handkerchief-wide redness. Pockett herself has to argue for isolation of individual consciousnesses in order to defend herself against suggestions that if her theory were true, we should all be telepathic, or disrupted by electromagnetic events around us. I don’t, ultimately, find myself tempted by Pockett’s suggestion that her ideas, similarly, mean that the Cosmos itself is conscious.

The second point is a claim that the theory solves the binding problem – how conscious experience appears unified although the data from different sense organs makes its way into the brain at different times. If everything feeds into a single overall electromagnetic pattern, unity is guaranteed. This is a tempting idea, and a real prize if it worked. I think in the final analysis it underrates what we already know about the importance of neural events to sensory experiences. Since the theory only deals with qualia-style consciousness, it also leaves us with a problem on our hands about how sensory data feed into the other, cogitative form of consciousness. Nevertheless, I don’t think the idea that electromagnetic effects are relevant here can be entirely dismissed.

The third, and most startling, point is that if Pockett is right, it is possible in principle to recreate the patterns which constitute conscious experience without a brain at all. Conscious computers are the least of it – if this is right you can generate a conscious experience of, say, the colour orange, in empty space. Pockett presumes that if someone’s brain were moved to coincide with such a floating experience, the owner of the brain would indeed ‘have’ that experience. If Pockett could indeed find a practical way of ‘beaming’ chosen experiences into someone’s mind (without, presumably, the need to know any details about the particular brain involved), it would be a most dramatic vindication.

I’m not holding my breath, though – it just seems unlikely that the electromagnetic aspect of the brain could ever be so thoroughly divorced from the neural activity. This touches on a basic problem with the theory which comes out in a number of different ways. One of the objections discussed and dismissed in the book is that electromagnetic fields can’t do computation. Now, as a matter of fact I think the objection, as stated, is on dubious grounds: it isn’t clear to me that the kind of consciousness under discussion – qualia, subjective experience – is meant, by those who espouse it, to be computational anyway. But there is, I think, a problem about causal relations in the electromagnetic theory. It seems a little odd to think of electromagnetic patterns causing other electromagnetic patterns without physical objects – neurons, in fact – playing a role somewhere. The implication is either that some compromise with the neural perspective is needed (I think there are a number of reasonable options along these lines), or that the kind of consciousness under discussion is epiphenomenal – has no causal relevance. This latter view is, of course, virtually the orthodox one among qualists, but it involves considerable difficulty.

Pockett herself recently published a paper in the Journal of Consciousness Studies declaring for epiphenomenalism, though whether she means the (indefensible, in my view) hard philosophical version or the unproblematic psychological version is still, I think, open to discussion.

I’m not convinced, but pending the arrival of an electromagnetic conscious-experience-synthesising machine, I think the hypothesis at least remains among the small and praiseworthy company of ideas about consciousness which are rational, clear, and in principle testable.

Blandula Ah – excuse me, but doesn’t it miss the whole point of mystical religious experience (in a typically flat-footed Western materialist way) to explain it as being a lot of radio waves? Isn’t transcendence of the physical something to do with it…?

Colourless Green Gavagai

Picture: colorless green ideas sleep furiously.

Blandula Strange, really, that the best-known sentence Noam Chomsky ever wrote is probably the one which wasn’t supposed to mean anything. In ‘Syntactic Structures’ (1957) he pointed out that while neither

  • Colorless green ideas sleep furiously, or
  • Furiously sleep ideas green colorless

means anything, we can easily see that the first is a valid sentence, while the second is not. Since neither sentence had ever appeared in any text until then, statistical analysis of language won’t help us tell which of them is more likely to occur in normal discourse, he said.
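As an aside, it does not take much statistical machinery to put a number on sentences that have never occurred. The sketch below is purely my own illustration – not Chomsky’s, and not a serious language model: an add-one-smoothed bigram model, trained on a scrap of invented text, assigns a probability to both word strings and (here) ranks the grammatical ordering higher, simply because one of its word-to-word transitions does occur in the training text.

```python
# Toy illustration (invented corpus, not a serious model): an add-one-smoothed
# bigram model can score word strings it has never seen. Here the grammatical
# ordering scores higher because the pair 'ideas sleep' occurs in the corpus.
from collections import Counter
import math

corpus = """new ideas sleep in old books . green fields sleep under grey skies .
colorless prose is furiously dull . ideas spread furiously . the green door is colorless""".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

def log_prob(sentence):
    """Add-one-smoothed bigram log-probability of a string of words."""
    words = sentence.lower().split()
    total = 0.0
    for prev, word in zip(words, words[1:]):
        total += math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab))
    return total

for s in ("colorless green ideas sleep furiously",
          "furiously sleep ideas green colorless"):
    print(f"{s!r}: {log_prob(s):.2f}")
```

A real corpus and a modern language model would make the point far more convincingly, but the principle is the same.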

Blandula Strictly, the statistical point appears to be wrong – we can, in fact, assess the relative probability of sentences which have never occurred. Be that as it may, the thing that really caught people’s imagination was the grammatical but meaningless sentence. Was it really meaningless? Some thought they could see a kind of poetic meaning in it: a competition at Stanford produced a number of poetic examples, and there is at least one other piece of verse. Resorting to poetry makes things too easy, though. A more challenging exercise might be to reinterpret the sentence as part of a crossword clue…

Clue: Wow, awful colorless, green ideas sleep furiously (3,4,8)

Solution: Gee, dire paleness.

(‘Furiously’ indicates an anagram of ‘green ideas sleep’, and the result means (more or less) the same as ‘Wow, awful colorless’.)

Alright, maybe not the best crossword clue ever. Without going to those lengths, we can easily imagine that the sentence might be part of a political essay…

Even before the fall of the Berlin Wall, it was known that ‘colorless Reds’ – covert Soviet agents – had taken up places as ‘sleepers’ in the Dubitanian government. With the benefit of hindsight, we can see that these agents were in fact among the government’s most reliable and least politicised employees. Disruption, direct action, and sabotage were far more likely to be the work of the extremist ecological factions who also infiltrated certain government departments. Once securely lodged inside the state, it seems, Communist ideology waits calmly for its chance. Colorless green ideas sleep furiously.

‘Green’ has so many relatively normal uses that it lends itself to different interpretations. The sentence could be about refurbishment of a golf course, abstract painting, or something new and half-baked. But we’re not limited to ringing the changes on ‘green’. With the right context, we can make other words – ‘idea’ for example – mean something slightly different, too.

The Studge advertising agency was desperate to win the Kumfypillo account. The creative department decided on a ‘rainbow’ workshop. In these sessions, each person was assigned a color and a corresponding role. When the box of equipment was opened, however, the green badge was missing, so Jenkins, the junior member of the team, had to be green without a color: he also got the hardest job, which was to come up with new creative angles, a process which at Studge was called ‘ideaing’. Imagine the scene: in the new chairs by the window blue and red are, respectively, critiquing and relating the concept of repose; at the front of the room yellow catalogues the properties of head-support; while standing awkwardly at the back, poor colorless green ideas sleep furiously.

Without wanting to labour the point, one can imagine an interpretation in which none of the words have their usual meanings, and all are used in a different grammatical role from the one you would expect. It’s actually a bit of a challenge to hold that many novel meanings in your mind at once, but…

‘This is the strangest editorial office I’ve ever worked in,’ said John. ‘I can’t understand half of what people say.’

‘You don’t have to be mad to work here, son,’ observed Smith, ‘we can teach you all that. Let me run you through some of the slang. Now “color” is pictures, so “colorless” means text, or words, as in “Mark my colorless, son”. “Green” is for green light – means “OK”, “can do” – as in “I green lunch today”. OK so far? Now at one time the boss had a habit of picking up some piece and saying “But what does it mean? What are the ideas?”. So if you want to ask what somebody means, you can say “what ideas there, then?” Now “sleep” means “relax”, “it’s correct”. So if someone asks me if I’m going out today, I can say “hey, sleep”, meaning “certainly”. One more. When we’re in a desperate hurry to finish, we generally just furiously chuck in whatever stuff we’ve got. So “furiously” means “whatever you’ve got”, or “anything” – like, if somebody asks me what I’m drinking and I don’t care, I’ll just say “Oh, furiously”.’

‘Good grief,’ exclaimed John. ‘Words really can mean anything.’

‘That’s what I’m saying,’ answered Smith. ‘Colorless green ideas sleep furiously.’

Bitbucket Gibberish. Alright, I admire your imagination, or perhaps it would be nearer the mark to say your dull persistence in wringing out permutations. But so what? Inventing a code in which each word stands for a completely different one is a futile pursuit, and tells us nothing about Chomsky.

Blandula I’m not trying to make a point about Chomsky. What I’m actually doing is suggesting that Quine was right with his story about ‘gavagai’ – the word which seemed to mean ‘rabbit’, but could have meant virtually anything. For any word or set of words, there really are an infinite number of possible interpretations, and it follows that the meaning is always impossible to decode with any certainty.

Bitbucket Then how does anyone ever understand anyone else? Quine’s view is one of those theories that even the author doesn’t believe when it comes to real life.

Blandula Ah, but you see, the thing is, we don’t decode words mechanically, the way a computer would have to do it. We just see what somebody means – it’s a process of recognition, not calculation. Perhaps that has some implications for Chomsky after all.

Bats

Picture: Vampire.

Blandula Nagel’s classic “What is it like to be a bat?” must be one of the most influential papers on consciousness of the last century, and it’s still very relevant.

Nagel’s aim is to launch a kind of counter-attack against physicalist arguments, which would reduce the mental to the merely physical, and which were evidently in the ascendant in 1974, when the paper was published. Tempting as it may be to fall back on the familiar kind of reductionist approach which has worked so well in other areas, Nagel argues, phenomenal, subjective experience is a special case. Reductive arguments always seek to give an explanation in objective terms, but the essential point about conscious experiences is that they are subjective. The whole idea of an objective account therefore makes no sense – no more sense than asking what my inward experiences are really like, as opposed to how they seem to me. How they seem to me is all there is to them. Any neutral, objective, third-person explanation has to leave out the essence of the experience. The point about conscious experience is that there is something it is like to see x, or hear y, or feel z.

Bitbucket Ah, ‘there is something it is like’ – the phrase that launched a thousand papers. Surely you realise that this is just an over-literal interpretation of the conventional phrase ‘what is it like?’. To assume that the ‘it’ in that question represents a real thing rather than a grammatical quirk is just silly.

Blandula Yes, I see what you mean, but Nagel’s whole point is that ‘what it’s like’ is strictly inexpressible in objective terms. So it isn’t surprising that he has to resort to a back-handed way of getting you to see what he’s talking about. If he could describe it straightforwardly, he’d be contradicting his own theory.

Anyway. Nagel uses the example of a bat to dramatise his case – how can we know what it is like to be a bat, from the inside?

Bitbucket There’s a large rhetorical element in the choice of a bat. Bats have the traditional reputation of being a bit weird, and it’s known that some of them have a sense we don’t – echolocation. All this helps to persuade people that we can’t imagine what things are like from another point of view. But if Nagel is right, it should be equally hard to see things from the point of view of an identical twin. So let’s get the bats out of this particular belfry, OK?

Blandula Nagel’s entitled to use any example he likes. He explains that he chose bats because they’re close enough to human beings to leave most people in no doubt that they have conscious experiences of some kind, while far enough from us to dramatise his case. But whether you like it or not, it raises some fundamental issues. If Nagel is right, there are certain experiences – bat experiences, for example – that humans can never have. It follows that there are true facts about these experiences which humans can never grasp (although they can grasp that there must be facts of this kind). This general conclusion about the limits of human understanding must have been part of the inspiration for Colin McGinn’s wider theory that even human consciousness is ultimately beyond our understanding.

Bitbucket Yes, of course, since human beings are by definition not bats, they can’t have the experience of being a bat. But it does not follow that there are facts about bat experiences they can’t understand. You see, actually we can know what it’s like to be a bat. We can know what sizes of objects echolocation detects, and how the bat directs its ears and the stream of sound, and thousands of facts of that kind. We can know all about the kinds of information a bat’s senses supply, and with the right equipment we can experience echolocation ourselves at least by proxy.

I think the worst part of the paper is where Nagel says that even if we imagine ourselves turning into a bat, that won’t be any good. We’re just imagining what it would be like for us to be a bat, whereas we need to imagine what it’s like for a bat. This just reduces the whole thing to the trivial point that we can’t stop being us. Because if we did – it wouldn’t be us any more!

Blandula You just need to make the imaginative effort to see what he’s on about. Actually, the claim being made is quite modest in some respects. Nagel himself says that his argument doesn’t disprove physicalism. It would be nearer the truth to say that physicalism, the view that mental entities are physical entities, is a hypothesis we can’t even understand properly…

A Novel Theory

Picture: Dan Lloyd.

I mentioned Dan Lloyd’s book “Radiant Cool” earlier. As I suggested then, linkages between philosophy and literature are not exactly unknown, but Lloyd’s is certainly the only book I know which splits neatly between a story in the front half and some really heavyweight stuff at the back.

The story is about Miranda Sharpe, who finds her Professor, Max Grue, slumped over his desk (unconscious?) when she creeps in to reclaim a folder. She leaves him there, only to find he has disappeared. Miranda then has a series of encounters: with Grue’s class; the arch-computationalist Clare Lucid; nerdy neural networker Gordon Fescue; Porfiry Marlov, the former Soviet policeman and multidimensional scaling enthusiast; the menacing Zamm and Addit, who zap sections of her brain with Professor Cronkenstein’s transducer/activator; and finally, ‘Dan Lloyd’ himself. The mystery of what happened to Grue, the origins of the Chaos Bug which is currently infecting computers everywhere, and a few thoughts about consciousness, are cleared up along the way. It’s a readable narrative, with limited literary ambitions: you wouldn’t have been utterly surprised if Professor Plum had turned up in the library with a length of pipe at some point.

Some of Lloyd’s ideas get an exposition in ‘The Thrill of Phenomenology’, the first part of the book; a more systematic treatment, covering some additional ground, appears in the second (and inevitably less readable) part, ‘The Real Firefly’.

So what are these ideas? According to Lloyd, we have been too inclined to view people as ‘detectorheads’. Seeing the main function of neurons as detection has worked very well for a number of cognitive functions, but it won’t do for consciousness. Why not?

Lloyd presents a version of that old classic, the brain in a vat. If our brains were taken out of our bodies, they could still be fed with cunningly contrived signals which would give us the same experiences, phenomenologically, as we get from real life. We wouldn’t know the difference. We could be having the same experience of seeing a firefly whether in fact there is a real firefly out there or not. It follows that what we actually experience consciously isn’t the real world ‘out there’ at all (and consequently consciousness isn’t a matter of detection). I think this reasoning is mistaken. It’s true we can misinterpret our experiences, but it doesn’t follow that, because our interpretation is wrong, our experiences are not really experiences of anything external.

Consider the mad scientists feeding our brains the signals which convince us we are still moving normally in the real world. Where do they get these extremely complex signals? By far the easiest way to get suitable signals would be to read them directly from reality with a camera and other suitable equipment. But in such a case, we obviously still are experiencing the real world – it’s just coming to us via the scientists’ recording apparatus. The scientists could use a small model world and tiny cameras instead, but that wouldn’t make a fundamental difference. We might be lulled into believing we were seeing the full-sized world, but the model is still a real, external thing. They could go further and use a computer model which existed only as data; they might even be able to devise a program which could generate appropriate signals without an explicit model. But however far they go along the path of abstraction, we’ll still be detecting something outside our own brains. Our detectors may have been bamboozled, and we may be quite wrong about what they’re detecting. But they’re not detecting nothing.

This doesn’t invalidate the rest of Lloyd’s account, however. He mentions two particularly important characteristics of consciousness – superposition and temporality – and offers a way of explaining them.

Superposition is the curious quality perceived objects have of being many different things at once. When you see a cup of coffee, you also see a container of liquid, a drink, somebody’s property, a cause of insomnia, and so on – and you see it as all these things simultaneously and immediately. How can this be? This is where the multidimensional scaling comes in. If we like, we can treat each characteristic of a thing as a dimension. Take the characteristic of redness: we imagine a line stretching from not red at all to utterly red, and every object can be placed somewhere on this one-dimensional line according to its redness. Then we can construct an imaginary space out of these dimensions. Each position in this space will define a different combination of qualities, and hence a different possible object. Now there are going to be an awful lot of dimensions involved in this imaginary space – Lloyd allows for billions (I think in fact that an infinite number of dimensions is required – and to complicate matters further, some characteristics are obviously related to others, rather than being capable of arbitrary independent variation). But that’s OK. The technique of multidimensional scaling apparently allows this very complex space to be boiled down to a much more comprehensible three dimensions, while preserving the distance relationships between salient points (with a certain amount of compromise). What this leaves (we hope) is a grouping of objects according to their resemblances and relationships. Hovering in the simplified 3-d space, dogs, cats and fish will be close together because they are all animals; but fish will be a bit further off because they aren’t mammals, and they will also be part of another group, with apples and pancakes, because they are normal food items. Now, when you recognise something, you trigger some neural equivalent of this space, which means you automatically see the thing you recognised in several different potential contexts at once. As you can tell, I have some reservations about the complex space apparatus here, but it does seem there is some gleam of light in the underlying idea.
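To give a concrete, if crude, sense of how the boiling-down works – this is my own toy illustration, with invented items and features, not Lloyd’s data or code – classical multidimensional scaling takes the pairwise distances between items in the full feature space, double-centres the squared distance matrix, and keeps the top three eigen-directions as low-dimensional coordinates:

```python
# Toy classical multidimensional scaling (MDS): squeeze a space of several
# feature-dimensions down to 3-D while roughly preserving pairwise distances.
# Items and features are invented purely for illustration.
import numpy as np

items = ["dog", "cat", "fish", "apple", "pancake"]
# Columns: animal, mammal, lives-in-water, normal-food-item, sweet
features = np.array([
    [1, 1, 0, 0, 0],   # dog
    [1, 1, 0, 0, 0],   # cat
    [1, 0, 1, 1, 0],   # fish
    [0, 0, 0, 1, 1],   # apple
    [0, 0, 0, 1, 1],   # pancake
], dtype=float)

# Pairwise Euclidean distances in the full feature space.
diff = features[:, None, :] - features[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))

# Classical MDS: double-centre the squared distance matrix...
n = len(items)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J

# ...then keep the top three eigen-directions as coordinates.
eigvals, eigvecs = np.linalg.eigh(B)
top = np.argsort(eigvals)[::-1][:3]
coords = eigvecs[:, top] * np.sqrt(np.clip(eigvals[top], 0, None))

for name, xyz in zip(items, coords):
    print(f"{name:8s} {np.round(xyz, 2)}")
```

With these made-up features, dog and cat land on the same point, apple and pancake on another, and fish sits between the two groups – which is the kind of resemblance-grouping described above.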

When it comes to temporality, the influence of Husserl bulks large. According to Lloyd, every experience contains within it a strong sense of both past and future – retention and protention. To those with a good Husserlian background, this may seem evident, but things just don’t seem like that to me: some experiences have a strong temporal element, others don’t. According to Lloyd, the neural patterns which encode each experience of a given moment also encode, in a weakened form, the moment before and the moment after. Since the moment before itself contained a reflection of the moment before that, we have a kind of nesting effect.
To prove his point, Lloyd trained a neural network to predict the arrival of a ‘boop’ a fixed period after a ‘beep’, and then investigated it in considerable detail. It proved possible (using another neural network) to reconstruct earlier and later states of the network from any given point in the beep-boop process, a property not evident when it was simply running at random. Lloyd claims there is definite empirical evidence from scanning data that similar properties apply to the ‘neural networks’ in the brain. This tends to validate his view of the nature of consciousness as something suffused with temporality. Of course, using a task which is inherently about time – i.e. waiting the right length of time for the ‘boop’ – might be held to have biased the results.
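For the curious, here is a minimal sketch of the kind of experiment described – a small recurrent network trained to emit a ‘boop’ prediction a fixed number of steps after a ‘beep’. It is a toy reconstruction of my own in PyTorch, not Lloyd’s actual network, data or analysis, and it stops at the prediction task, without the second network used to reconstruct earlier and later states:

```python
# Toy beep-boop task: a small recurrent net sees a sequence with one 'beep'
# and must predict a 'boop' exactly DELAY steps later. Illustrative only;
# not Lloyd's model, data or analysis.
import torch
import torch.nn as nn

SEQ_LEN, DELAY = 20, 5

def make_batch(batch_size=64):
    """Inputs are 1 at the beep step; targets are 1 at beep + DELAY."""
    x = torch.zeros(batch_size, SEQ_LEN, 1)
    y = torch.zeros(batch_size, SEQ_LEN, 1)
    beeps = torch.randint(0, SEQ_LEN - DELAY, (batch_size,))
    for i, t in enumerate(beeps):
        x[i, t, 0] = 1.0
        y[i, t + DELAY, 0] = 1.0
    return x, y

class BeepBoopNet(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        states, _ = self.rnn(x)   # hidden state at every time step
        return self.head(states)  # per-step logit for 'boop now'

model = BeepBoopNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    x, y = make_batch()
    loss = loss_fn(model(x), y)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# The trained network's per-step hidden states are what one would then probe,
# e.g. by asking a second network to recover earlier or later states from them.
x, y = make_batch(4)
print(torch.sigmoid(model(x)).squeeze(-1).round())
```

A plain RNN is enough here because the delay is short and fixed; anything longer or more variable would probably call for an LSTM or GRU.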

Lloyd acknowledges scope for some reservations, and does not expect to have provoked the much-sought ‘aha!’ reaction of the reader who suddenly understands the whole thing at last. But he feels he’s made some pretty good progress. I’m not convinced he’s altogether on the right track, but the book is genuinely a useful prod towards some novel ways of thinking.

A lot of related material, including some 3-d models and (apparently) an abortive exchange with Miranda Sharpe, can be found on Lloyd’s own website.