Animal problems

Picture: Minotaur. The latest edition of the JCS is devoted to personhood, a key issue which certainly deserves the attention. I’m afraid (or perhaps we should be glad?) there are a number of quite different and possibly incompatible perspectives on offer. I thought Eric T. Olson’s animal problems, expounded in a careful and rather despairing paper (“What are we?”), were interesting, but not quite as bad as he thinks.

The issue addressed by the paper, as the title suggests, is that of the nature of human people. Is a person a substance or a process? Persistent over time? Composed of parts, spatially or temporally? Material or non-material? Are we, in fact, anything at all?

One possible stance is that we are simply animals. Probably most of us wouldn’t want to say that that is the end of the story, but Olson points out that there does seem to be, as it were, an animal corresponding to each of us. If we’re not the animal, and we do the thinking, what’s the animal doing? Thinking the same things in pre-established harmony? That seems strange – and if its thoughts are the same as ours, how could we be sure whose thoughts are which? Perhaps the animal doesn’t think – but why not: it has a brain and shows lots of signs of mental activity? The idea that the animal might have different thoughts of its own, not corresponding with ours, seems quite bizarre, and difficult to sustain given the way its behaviour matches our thoughts so well.

There’s something a little weird about this reasoning. If Olson chose to deny that the animal thinks, he would merely (!) be faced with the problem of why its behaviour matches our thoughts, which appears to be just a somewhat different slant on the classic mind-body problem as we know and love it.

But he doesn’t want to do that, and in fact he goes on to consider a re-application of similar reasoning. Because, after all, for each animal there is also a ‘lump of flesh’. The lump of flesh is intimately linked with the animal – if we were so inclined we might say that it constitutes it – but it can’t just be identified with the animal. After death the lump might persist in the absence of the animal, or given a partial carnal swap of some kind, the animal might survive longer than this particular lump. So again, then, is it the animal that does the thinking, the lump of flesh, or both?

I think the shift to another level gives us the clue to the real nature of the problem. Aren’t we really just dealing with the well-established fact that the same thing can have different properties when described in different ways? This isn’t a unique feature of people or animals. My car may be referred to as the lump of metal in the drive, my prime means of transport, and my chief contribution to global warming, and it has slightly different properties in each case. But once understood, this sort of thing is not normally felt to be the kind of issue we need to lie awake at night fretting over.

There again, perhaps it’s a more bothersome issue when it applies to us. Our life seems to unfold on many different levels, yet consciousness seems undeniably single. Must we plump for one final way of describing ourselves to ourselves, with the others demoted to secondary status (is the subconscious where the real me is to be found?) or do we somehow have to think that our experienced unity is really a pandemonic chorus?

My personal inclination is to think that personhood resides in the single place where consciousness gives rise to agency, and that the animal (to describe myself in those unflattering terms) has, qua animal, a purely supporting role in that crucial process.

Haecceity and metempsychosis

Picture: Talking to myself. I keep an open mind on the subject of qualia, the ineffable redness of red and the equally ineffable smelliness of smell, and I always read new arguments on the subject with interest. But if I’m honest, in my heart of hearts I believe that the problem, as normally formulated, embodies a misunderstanding. That redness you see isn’t a mysterious quale; it’s just some real, extra-theoretical redness. When Mary sees something red for the first time in her life, she doesn’t gain new information or knowledge, she just has an experience she has never had before. We should not be puzzled over the fact that colour theories do not themselves contain real colours, and so cannot convey them into minds any more than knowledge about an experience conveys the experience itself.

I think we are right to be puzzled, nevertheless. There is a profound mystery here – it just isn’t the one we usually discuss. The redness of an apple is just real redness: but what on earth is that? What is real anything? When we try to distinguish reality from dreams, we often rely on epistemic arguments about the consistency and coherence of the real world, but those properties are not of the essence: there’s nothing to say a dream or delusion couldn’t by chance be consistent and sustained. The realness of reality must lie somewhere else, in some inscrutable region of meta-ontology.

Part of the problem, at least, is what we might call haecceity (with no particular commitment to Duns Scotus or others who have used the word): thisness: the property of being a particular thing and not something else. One of the deepest metaphysical questions we can ask is why everything is like this, instead of like something else: indeed, why is everything like anything, instead of nothing (which after all would require much less explanation)? Abstractions may sometimes be hard to get your head around, but when you come down to it there’s nothing so incomprehensible as the detailed particularity of ordinary reality.

We might look for some kind of cosmogonical explanation for why the world is the way it is. Now that we all know about chaos and have played with Conway’s Game of Life, it seems relatively plausible that quite a simple starting point for the Universe could have given rise to the buzzing complexity of the real world. To be real, then, would be to feature in the unrolling of some cosmic algorithm. But this seems unsatisfactory in more than one respect. First, the ultimate explanation is now deferred to the laws or rules which sustain the algorithm, whether they are simply the laws of physics or some unknown principles of maths, logic, or metaphysics. Apart from the fact that we now have the new and daunting problem of explaining how and why these laws work, we’re back with the old problem of putting the source of reality in a theory, the same sort of mistake we were making with qualia in the first place. Speaking for myself, I see theories and laws of nature as explaining and clarifying the innate tendencies of stuff: to say that the reality of the stuff arises out of the operation of the theory, or the laws, seems very like a category mistake.
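
Just to make the Game of Life point concrete, here is a toy sketch (a few lines of Python of my own, nothing to do with any particular cosmological proposal) showing how a handful of fixed rules applied to a five-cell starting pattern generates long-running, complicated activity:

```python
from collections import Counter

# Conway's Game of Life: a live cell survives with two or three live neighbours;
# an empty cell comes alive with exactly three. Nothing else is specified.
def step(live):
    """One generation; `live` is a set of (x, y) coordinates of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The R-pentomino: five cells whose evolution stays turbulent for over a thousand generations.
cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}
for generation in range(200):
    cells = step(cells)
print(f"After 200 generations: {len(cells)} live cells")
```

The point of the exercise is only that the rules fit in a dozen lines while the behaviour does not; whether anything of this kind could underwrite the reality of the stuff it unfolds is, of course, exactly what is in question.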

Moreover, since it is a real thing, consciousness also has haecceity; but I find it impossible to believe that my own consciousness is merely a product of some rolling program: it may seem like a feeble recourse to intuition, but that just doesn’t seem a satisfying answer. It doesn’t explain why I am so definitely here and not there, or so definitely this person and not that one. I am not a zombie, in the philosophical sense, and my existence has a more solid character than any Platonic process. And let’s face it, it is all about me. I’m keen to understand the nature of reality in general, but like Descartes, I want my own reality to be clarified first and above all.

Perhaps metempsychosis is part of the answer. This post of Shankar’s a while back, drawing on Hindu ideas, proposed that qualia reside in a universal background entity, which gets plugged into the disparate personalities and bodies of all of us. I think some tricky stuff needs to be put in place to deal with that kind of split, but the idea of one soul, as it were, doing the work for all of us, is interesting. For one thing it clears up the tricky housekeeping side of reincarnation. Since deaths and births are not neatly paired up, reincarnation requires at least a kind of celestial waiting-room, and possibly a kind of time-travelling on the part of the transmigrating souls. But if one soul can animate an indefinite number of people simultaneously, the problem doesn’t arise. And why shouldn’t things work like that? That magic core which sustains my selfhood doesn’t seem to be dependent on the details of my body, my situation in history, or perhaps even my personality, when you get right down to it. So it doesn’t seem to have many differentiating characteristics. If it’s indistinguishable from other people’s magic core, then perhaps a rather slapdash application of Leibniz’s Law suggests it’s really the same. Perhaps in some sense all human experiences are experienced by the same essential me, either sequentially or somehow simultaneously; perhaps, in fact, I am talking to myself here? So I can conclude that it would indeed be odd and puzzling if I were an island of haecceity – because in fact I am nothing less than universal subjectivity.

Convinced? No, nor me, I’m afraid. In the end all that we’re left with, at best, is a persuasive denial of the very haecceity, so pressingly evident, which I wanted explained in the first place. I’m not convinced that my core of selfhood can be stripped of all its characteristics without losing its identity, and if it could that would identify me, not just with other people, but with everything. To some that might be a congenial conclusion, but to me again it looks as if the identity we wanted elucidated has run through our fingers. It seems more likely to me that my personality and the quirks of my physical history are constitutive of what I am even in the deepest sense: unfortunately of course, those quirky particularities are as hard to account for as my own thisness.

So it’s back to the metaphysical drawing board, I fear. Perhaps the next set of ideas about qualia will, after all, hold the vital clue?

I ain’t got no body

Brain in a jar. You probably read about the recent experiment which apparently simulated enough neurons for half a mouse brain running for the equivalent of one second of mouse time – though that required ten seconds of real time on a massively powerful supercomputer. Information is scarce: we don’t know how detailed the simulation was or how well the simulated neurons modelled the behaviour of real mouse neurons (it sounds as though the neurons operated in a void, rather than in a simulated version of the complex biological environment which real neurons inhabit, and which plays a significant part in determining their behaviour). No attempt was made to model any of the structure of a mouse brain, though it is said that some signs of patterns of firing were detected, suggesting that some nascent self-organisation might have been under way – on an optimistic interpretation.
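
For what it’s worth, here is a deliberately crude sketch of what simulating a neuron can mean at the very simplest level – a single leaky integrate-and-fire unit driven by noisy input, with no biological environment around it at all. The parameter values are merely illustrative, and this is far simpler than anything the researchers will actually have used:

```python
import numpy as np

# Leaky integrate-and-fire: the membrane voltage drifts back towards rest, is pushed
# up by input current, and registers a 'spike' whenever it crosses a threshold.
dt, tau = 0.1, 10.0                               # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # millivolts
rng = np.random.default_rng(0)

v, spikes = v_rest, 0
for _ in range(10000):                            # one simulated second at 0.1 ms resolution
    current = rng.normal(1.6, 0.5)                # noisy input current (arbitrary units)
    v += dt * (-(v - v_rest) + 10.0 * current) / tau   # leaky integration
    if v >= v_thresh:                             # threshold crossing: a spike
        spikes += 1
        v = v_reset
print(f"{spikes} spikes in one simulated second")
```

Multiply something like that by millions, add the connections between the units, and you have the general shape of the reported experiment – which is exactly why the questions about biological detail and environment matter.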

I don’t know how this research relates to the Blue Brain project, which has just about reached what the researchers consider a reasonable neuron-by-neuron electronic simulation of one of the cortical columns in the brain of a juvenile rat. The people there emphasise that the project aims to explore the operation of neuronal structures, not to produce a working brain, still less an artificial consciousness, in a computer.

It is not altogether clear, of course, how well an isolated brain – even a whole one – would work (another unanswered question about the simulation is whether it was fed with simulated inputs or left to its own devices). A brain in a jar, dissected out of its body but still functioning, is a favourite thought-experiment of some philosophers; but others denounce the idea that consciousness is to be attributed to the brain, rather than the whole person, as the ‘mereological fallacy’. A new angle on this point of view has been provided by Rolf Pfeifer and Josh Bongard in How the body shapes the way we think. Their title is actually slightly misleading, since they are not concerned primarily with human beings, but seek instead to provide a set of design principles for better robots, drawing on the history of robotics and on their own theoretical insights. They’re not out to solve the mystery of consciousness, either, but their ideas are of some interest in that connection.

In essence, Pfeifer and Bongard suggest that many early robots relied too much on computation and not enough on bodies and limbs with the right kind of properties. They make several related points. One is the simple observation that sometimes putting springs in a robot’s legs works better than attempting to calculate the exact trajectory its feet need to follow to achieve perfect locomotion. In fact, a great deal can be accomplished merely by designing the right kind of body for your robot. Pfeifer and Bongard cite the ‘passive dynamic walkers’ which achieve a human-style bipedal gait without any sensors or computation at all (they even imply, bizarrely, that the three patron saints of Zurich might have worked on a similar principle: apparently legend relates that once their heads were cut off, the holy three walked off to the site of the Grossmünster church). Similarly, the canine robot Puppy produces a dog-like walk from a very simple input motion, so long as its feet are allowed to slip a bit. Human babies are constructed in such a way that even random neural activity is relatively likely to make their hands sweep the area in front of them, grasp an object, and bring it up towards eyes and mouth: so that this exploratory behaviour is inherently likely to arise in babies even if it is not neurally pre-programmed.

Another point is that interesting (and intelligent) behaviour emerges in response to the environment. Simple block-pushing robots, if designed a certain way, will automatically push the blocks in their environment together into groups. This behaviour, which is really just a function of the placement of sensors and the very simple program operating the robots, looks like something achieved by relatively complex computation, but really just emerges in the right environment.
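
A toy version of the same general point is easy to write down. The sketch below is my own cartoon rather than a model of the actual robots (the real ones simply pushed blocks with their bodies; mine picks them up): a single agent wanders a grid at random with just two rules – pick up a block that has few neighbours, put it down where blocks are already plentiful – and clusters duly emerge with no representation of ‘clustering’ anywhere in the program:

```python
import random
random.seed(1)

SIZE, BLOCKS, STEPS = 20, 60, 200000
grid = set()
while len(grid) < BLOCKS:                          # scatter blocks at random
    grid.add((random.randrange(SIZE), random.randrange(SIZE)))

def neighbours(x, y):
    """Number of occupied cells in the eight surrounding squares (toroidal grid)."""
    return sum(((x + dx) % SIZE, (y + dy) % SIZE) in grid
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))

x = y = 0
carrying = False
for _ in range(STEPS):
    x = (x + random.choice((-1, 0, 1))) % SIZE     # random walk
    y = (y + random.choice((-1, 0, 1))) % SIZE
    n = neighbours(x, y)
    if not carrying and (x, y) in grid and n <= 1:    # lonely block: pick it up
        grid.remove((x, y))
        carrying = True
    elif carrying and (x, y) not in grid and n >= 2:  # busy neighbourhood: put it down
        grid.add((x, y))
        carrying = False

print("average neighbours per block:",
      round(sum(neighbours(*cell) for cell in grid) / len(grid), 2))
```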

Things are looking bleak for the brain in the jar, but there is a much bolder hypothesis to come. Pfeifer and Bongard note that a robot such as Puppy may start moving in an irregular, scrambling way, but gradually falls into a regular trot: in fact, for different speeds there may be several different stable patterns: a walk, a run, a trot, a gallop. These represent attractor states of the system, and it has been shown that neural networks are capable of recognising these states. Pfeifer and Bongard suggest that the recognition of attractor states like this represents the earliest emergence of symbolic structures; that cognitive symbol manipulation arises out of recognising what your own body is doing. Here, they suggest, lies the solution to the symbol grounding problem, and to the problem of intentionality itself.
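
The idea of recognising which attractor your own dynamics have fallen into can be caricatured in a few lines (again my own cartoon, nothing to do with Puppy or with any real neural network): a bistable system settles into one of two stable states wherever it starts, and a trivial classifier can then label which one:

```python
# A bistable system dx/dt = x - x^3 has two attractors, x = +1 and x = -1.
# Wherever it starts (other than exactly zero), it settles into one of them,
# and labelling which is a first, very crude step towards a 'symbol' for the state.
def settle(x0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)      # simple Euler integration
    return x

for x0 in (-2.0, -0.1, 0.3, 1.7):
    final = settle(x0)
    label = "attractor A (+1)" if final > 0 else "attractor B (-1)"
    print(f"start {x0:+.1f} -> settles near {final:+.2f}: {label}")
```

Whether labelling a state in this way really deserves to be called symbol manipulation is, of course, just the question I come back to below.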

If that’s true, then a brain in a jar would have serious problems: moreover, simulating a brain in isolation is very likely to be a complete waste of time because without a suitable body and an environment to move around in, symbolic cognition will just never get going.

Is it true? The authors actually offer relatively little in the way of positive evidence or argument: they tell a plausible story about how cognition might arise, but don’t provide any strong reasons to think that it is the only possible story. I’m not altogether sure that their story goes as far as they think, either. Is the ability to respond to one’s body falling into certain attractor states a sign of symbol manipulation, any more than being able to respond to a flash of light or a blow on the knee? I suspect that symbols require a higher level of abstraction, and the real crux of the story is in how that is achieved – something which looks likely, prima facie, to be internal to the brain.

But I think if I were running a large-scale brain simulation project, these ideas might cause me some concern.

Loopy

Douglas Hofstadter. With I am a Strange Loop, Douglas Hofstadter returns – loops back? – to some of the concerns he addressed in his hugely successful book Gödel, Escher, Bach, a book which engaged and inspired readers around the world. Despite the popularity of the earlier book, Hofstadter feels the essential message was sometimes lost; the new book started out as a distilled restatement of that message, shorn of playful digressions, dialogues, and other distractions. It didn’t quite turn out that way, so the new book is much more than a synopsis of the old. However, the focus remains on the mystery of the self.

Is it a mystery? In a way I wondered why Hofstadter thought there was any problem about it. Talking about myself is just a handy way of picking out a particular human animal, isn’t it? In fact it doesn’t have to be human. We might say of a dog that “He wants to get in the basket himself”; for that matter we might say of a teacup “It just fell off the shelf by itself”. Why should talk of I, myself, me, evoke any more philosophical mystery than talk of she, herself, her?

I can think of three bona fide mysteries attached to the conscious self: agency (or free will if you like); phenomenal experience (or qualia); and intentionality (or meaning). Hofstadter is unimpressed by any of these. Qualia and free will get a quick debunking in a late chapter: Hofstadter shares his old friend Dennett’s straight scepticism about qualia, and as for free will, what would it even mean? Intentionality gets a more respectful, but more glancing treatment: Hofstadter proposes the analogy of the careenium, which is to be imagined as a kind of vast, frictionless billiard table on which hordes of tiny magnetic balls, known as simms, roll around. Occasionally the edge of the careenium may be hit by objects in the exterior world, imparting new impulses to the simms and possibly causing them to agglomerate into big rolling masses, or simmballs (the attentive reader will notice that a strong focus on the core message has by no means excluded a fondness for artful puns and wordplay). Evidently impulses from the outside world spark off neural symbols, whose interaction gives rise to our thoughts.

It would not be fair to criticise the sketchiness of this account, because it isn’t what Hofstadter is on about: it isn’t even the main point of the careenium metaphor. He acknowledges, too, that some will think talk of symbols in the brain leads to the assumption of an homunculus to read them, and to other difficulties, and he briefly indicates a response. The point really is that for Hofstadter all this is largely beside the point.

So what is the real point, and why does Hofstadter find the self worthy of attention? For him it is all about greatness of soul; friendship and the sharing, the sympathetic entering into, of other consciousnesses. This is where the loops come in.

Hofstadter quotes several examples of self-referential loops, re-creating, for example, experiments with video feedback which he first carried out many years ago. More substantially, he gives an account of how Kurt Gödel managed to get self-reference back into logical systems like the one set out in Russell and Whitehead’s Principia Mathematica. PM, as Hofstadter calls it, was a symbolic language like a logical algebra, which the authors used to show how arithmetic could be built up out of a few simple logical operations. One of its distinguishing features was that it included special rules which were meant to exclude self-reference, because of the paradoxes which readily arise from sentences or formulae that talk about themselves (as in ‘This sentence is false.’). By arithmetising the notation and applying some clever manoeuvres, Gödel was able to reintroduce self-reference into the world of PM and similar systems (a disaster so far as the authors’ aspirations were concerned) and incidentally went on to prove the existence in any such system of true but unprovable assertions.
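
The arithmetising trick itself is easy to illustrate in miniature (what follows is an illustration of the general idea only, not Gödel’s actual 1931 coding): give every symbol a code number, then pack a whole formula into a single integer by using one prime per position, so that statements about formulae become statements about numbers:

```python
# Toy Goedel numbering: symbol codes become the exponents of successive primes.
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5, '+': 6}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]     # enough for formulae up to ten symbols

def godel_number(formula):
    n = 1
    for p, symbol in zip(PRIMES, formula):
        n *= p ** SYMBOLS[symbol]
    return n

def decode(n):
    names = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in PRIMES:
        e = 0
        while n % p == 0:        # recover each exponent by trial division
            n //= p
            e += 1
        if e == 0:
            break
        out.append(names[e])
    return ''.join(out)

g = godel_number('S(0)=S(0)')    # "the successor of zero equals the successor of zero"
print(g, '->', decode(g))        # one number encodes, and recovers, the whole formula
```

Once formulae are numbers, a formula can, with enough ingenuity, end up talking about its own number – which is just the kind of self-reference Russell and Whitehead had tried to legislate away.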

This is one of the most interesting sections of the book, though not the easiest to read. It’s certainly enlivened, however, by the grudge Hofstadter seems to have against Russell: I found myself wondering whether Grammaw Hofstadter could have been the recipient of unwelcome attentions from the aristocratic logician at some stage. I’ve read many accounts which suggest that Bertrand Russell might have been, say, a touch self-centred: but it’s a novelty to read the suggestion that he might have been a bit dumb in certain respects. In this account, Gödel is the young Turk who explodes the enlivening force of self-reference within Russell’s gloomy castle: no mention here of how Russell himself had previously done something similar to the structure erected by Frege, so that Russell also has a claim to be considered a champion of paradox.

People are clearly examples of self-referential systems, but is self-reference an essential part of consciousness or merely a natural concomitant? It is plausible that self-reference might help explain why our thoughts, for example, seem to pop out of nowhere: when we try to locate the source, we get trapped into looking down a regress like the infinite corridor which appears in the video feedback. Hofstadter’s loops, moreover, are no ordinary loops – they are strange. A strange loop violates some hierarchical order, perhaps with containers being contained, or steps from a higher level leading up to lower ones. It is our symbolic ability which gives us the ability to create strange loops: to symbolise our own symbolic system, and so on. However, we are unable to perceive the lower-level neural operations which constitute our thoughts, and we therefore suffer the impression of mysteriously efficacious, self-generating thoughts and decisions, which we cannot reconcile with the sober scientific account of the physical events which really carry the force of causality.

Hofstadter spends some time on this issue of different levels of interpretation, seeking to persuade us that the existence of complete physical or neural stories does not stop accounts of causality in terms of higher-level entities (thoughts, intentions, and so on) from being salient and reasonable. He argues, intriguingly, that there is an analogy here with what Gödel did: by reinterpreting PM at a higher level, he was able to draw conclusions and in a sense set limits to what could be done within PM. In the same way we can legitimately say that our intentions were pushing the neurons around, not just that the neurons were pushing the intentions. This idea of downward causality across levels of interpretation seems metaphysically interesting, though I think it is quite redundant: the relation between entities at different levels of explanation is identity, not causation (though I suppose you can see identity as the basic case of causation if you wish).

Now things get more difficult. Hofstadter argues that bits of our own loops get echoed in those of other people, and that therefore we exist in them as well as in our own brains. If two video cameras and screens are looping and enter each other’s view, then the loop of camera A will be running, in miniature, in that of camera B, and vice versa. Our engagement with other people thus helps to create and sustain us, and in fact it means that death itself is not as abrupt as it would otherwise be, the afterglow of our own strange loops continuing to spin in the minds of others and thereby sustaining our continued existence to some limited extent. This theorising is attached to an account of Hofstadter’s reaction to the sudden, and tragically early, death of his wife.

Hofstadter does not believe that his views about death have been fundamentally influenced by this awful experience: he says that his conclusions were largely drawn before his wife’s sudden illness. All the same, the context makes it more difficult to say that, in fact, the reasoning here is wholly unconvincing. It’s just not true that the image of a feedback loop in another feedback loop remains a viable feedback loop in itself.

Hofstadter offers some arguments designed to shake our belief in the simple equation of body with person (he is happy to use the word ‘soul’, though without its usual connotations), including the one – rather tired in my view – of the magic scanners which can either relocate or duplicate us perfectly. Where a person is duplicated like this, he says, it makes no sense to insist that one of the two bodies must be the real person. “…to believe in such an indivisible, indissoluble “I” is to believe in nonphysical dualism”. Not necessarily: it might instead be to assert the most brutal kind of physical monism: we are physical objects, and our existence depends utterly on our physical continuity.

I’m afraid I therefore end up personally having a good deal of sympathy with many of the incidental things Hofstadter says while remaining unconvinced by the points he regards as most important. Whether or not you accept his case, though, Hofstadter has a remarkable gift for engaging and lively exposition, more of which will always be welcome.

Colour blind colours

Colourblind Synaesthesia. A team of researchers (Milán, Hochel, González, Tornay, McKenney, Díaz Caviedes, Mata Martín, Rodríguez Artacho, Domínguez García and Vila) have a paper in the JCS setting out their experiments with an unusually interesting subject. This person, referred to as ‘R’, is colour blind for certain colours: but he also experiences a spectrum of synaesthetic colours in which he can make the full range of clear distinctions. The team believes its research shows that qualia can be accessible and useful to science.

Synaesthesia, the occurrence of sensations (often sensations of colour) in association with unrelated stimuli, is an interesting but treacherous subject. Many people feel that the days of the week, or the numbers from one to ten, seem to have their own appropriate colours, and sceptics may feel that the whole phenomenon of synaesthesia is largely a case of people allowing their poetic feelings to run away with them. But there is a good deal of evidence to show that vivid synaesthesia is a real phenomenon. One of the earliest and most striking of reported cases is the classic one recounted by Luria in The Mind of a Mnemonist. His subject (referred to as ‘S’) formed exceptionally strong associations, which had the benefit of giving him a ‘photographic’ memory, but was also a considerable handicap as unwanted associations intruded into his life, making it impossible, for example, to buy an ice cream when something in the vendor’s voice called up a vivid impression of red hot coals.

R’s synaesthesia is not on this troublesome scale: he experiences colour sensations in connection with pictures and people, but he experiences the colour as internal, rather than seeing it ‘out there’ as some synaesthetes do. For him, there is a kind of code, with each synaesthetic colour having certain emotional connotations: in the case of people, for example, red means attractive, green means ill or dirty, purple means upbeat, yellow means envious or aggressive, and brown means old or uninteresting (the reader may be able to guess at R’s own approximate age). A portion of the team’s research was devoted to exploring these emotional connotations, and they claim, unconvincingly I think, that their results show the possibility of an inverted spectrum of emotions (I think they merely show that R’s emotional reactions are in some respects atypical: the sky for him evokes red, which makes it exciting, whereas it is more commonly seen as restful: but that doesn’t amount to an inverted emotional spectrum).

The exploration of synaesthesia in colour blind subjects is not new – Ramachandran and Hubbard have produced many interesting findings, including a subject who referred to colours he could experience synaesthetically, but never with his eyes, as ‘Martian’ colours. This naturally raises the question of whether the Martian colours were ones which people with normal vision are used to, or something genuinely stranger. How would we know?

The exploration of qualia recounted in the JCS piece rests on an ingenious use of the Stroop effect. This effect occurs where, for example, the subject is shown a series of colour words, and asked to say, not what colour they name, but what the colour of the font is. If you’ve tried this, you’ll know that it is more difficult to do quickly and accurately than you might suppose: much harder, in any case, than when font and word indicate the same colour. Now if you were completely colour blind, you would of course be immune to Stroop interference. The team devised experiments which showed that R did indeed fail to show Stroop effects in certain cases where other subjects suffered them, very plausibly because of his limited colour vision. However, when he was shown pictures which evoked dissonant synaesthetic colours, Stroop interference did occur.
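
To make the logic of the design concrete, here is a bare-bones sketch of how Stroop-style interference might be scored (my own illustration, not the procedure Milán and colleagues actually used): each trial pairs a colour word with a font colour, and ‘interference’ is simply the reaction-time penalty on incongruent trials:

```python
import random
import statistics
random.seed(0)

COLOURS = ['red', 'green', 'blue', 'yellow']

def make_trials(n):
    """Random word/font pairings; a trial is congruent when the two match."""
    trials = []
    for _ in range(n):
        word, font = random.choice(COLOURS), random.choice(COLOURS)
        trials.append({'word': word, 'font': font, 'congruent': word == font})
    return trials

def interference(trials, reaction_time):
    congruent = [reaction_time(t) for t in trials if t['congruent']]
    incongruent = [reaction_time(t) for t in trials if not t['congruent']]
    return statistics.mean(incongruent) - statistics.mean(congruent)

# A pretend subject slowed by roughly 80 ms whenever word and font colour clash;
# a subject who could not distinguish the relevant colours at all would show no penalty.
def pretend_subject(trial):
    base = random.gauss(600, 40)                  # baseline response, milliseconds
    return base + (0 if trial['congruent'] else 80)

print(f"Stroop interference: {interference(make_trials(400), pretend_subject):.0f} ms")
```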

This seems to show that R’s inner colour experience – his qualia? – cover the normal range, distinct from the range of colours he can actually distinguish with his eyes. Moreover, in a separate series of experiments, the team got him to match his synaesthetic colours with real ones, confirming that in his case at least they were not Martian in character. Does all this show that qualia are ‘a useful scientific concept’?

Interestingly, the paper quotes (and slightly misdescribes) one of the ‘intuition pumps’ used by Daniel Dennett. Two coffee tasters (Chase and Sanborn) have ceased to enjoy Maxwell House, but seemingly for different reasons. To one, it tastes the same as ever, but he no longer likes that taste: the other still likes that taste, but the coffee, although it has not changed chemically or in any other objective respect, no longer tastes that way to him. Dennett’s case is that this distinction, between the qualia and the taster’s reaction, is not ultimately sustainable. One of the taster’s wives ultimately straightens him out on the point: so he’s still having the same taste experience, but now doesn’t like it? But doesn’t the fact that he doesn’t like it mean, in itself, that the experience is different? There’s just no point in talking about qualia as distinct from our reactive dispositions.

What would Dennett say about R? I think he would say the research is an interesting exploration of those reactive dispositions, but that nothing here requires us to talk of qualia at all. R’s brain reacts to certain stimuli in ways which other people’s brains react to certain inputs from their eyes, though R himself lacks those inputs. But just because synaesthetic colour doesn’t come from the eyes, we mustn’t conclude that it has the inner, subjective quality of qualia. Indeed, by ingeniously naturalising synaesthetic colour, and showing it to be accessible to science, the team has arguably shown that it can’t be identified with qualia.

The team concludes that their research shows the coffee taster thought experiment actually makes no sense: the quale and the reaction ‘seem to be rigidly connected and cannot change independently’. Dennett might not be unhappy with that conclusion: why not take one more step, he might say, and draw the conclusion that talk of qualia is really only talk of reactions…?

Down with Physics

Picture of Robert Lanza. Robert Lanza, well-known in the area of cloning and stem cells, has fired off a broadside in another direction. Never mind the physicists, he says, with their long-awaited Theories of Everything and their bizarre multi-dimensional intangible substrates: in fact the fundamental science is biology. Space and time are not even real except inasmuch as we perceive them; and perception is a product of consciousness, a biological mystery the physicists can never hope to penetrate.

It’s easy to sympathise with Lanza’s irritation over the triumphalism indulged in by some physicists – while physics itself seems in some ways to be getting ever more deeply into difficulty. In terms of clear progress and perhaps even methodological purity, biology seems to have a far better story to tell in recent years. But can biology really be more fundamental than physics?

Lanza’s key theme is that reality depends on perception; perception on consciousness; and consciousness on biology. With the first step, we’re back with Bishop Berkeley, whose controversial view that ‘to be is to be perceived’ Lanza almost seems to take for granted. A substantial, centuries-long debate has already taken place over this, which I cannot hope to do justice to here; but to take just one contrary argument: if all reality depends on perception, there’s an unsolved problem about how things get started at all. There I am, for the sake of argument, hanging in the metaphysical void. Nothing exists until I perceive it; but how do I start perceiving something which doesn’t exist? Even my own thoughts must be there before I can become aware of them; yet they can’t exist until I have perceived them. So I can’t even think? Lanza acknowledges that some have seen his philosophy as leading inevitably to solipsism: but it seems it might lead to utter nullity. Lanza says that the illusions suffered by schizophrenic patients are as real to them as the ordinary world is to us: but the possibility of error is not sufficient to demonstrate the impossibility of truth (though Lanza is not the first person to have given up too easily on objective reality). In places Lanza actually seems rather equivocal about his Berkeleyanism: he offers the analogy of a CD player: until it works on the relevant tracks, the music doesn’t exist: and in the same way, there’s no reality until our minds have operated on… what? It ought to be the underlying realities of physics, but they are what Lanza seems to want to deny.

Though no doubt it is true that consciousness is biological, that cannot altogether be taken for granted either. Among others Lanza cites Descartes, Kant, and Leibniz as well as Berkeley in support of the primacy of consciousness: but none of them would have accepted that it was a matter of biology. When Hume daringly had one of his characters declare that the processes of consciousness in the brain were not fundamentally different from the processes of decay in a cauliflower, he took care to distance himself from a view he knew would be regarded as an insult to the human spirit, far beyond the pale of civilised discourse. Moreover, Lanza’s own views, curiously enough, place a barrier between him and the biology he wishes to celebrate. Our knowledge of biology, after all, comes to us in much the same kind of way as our knowledge of physics; through our senses – our perceptions. If time and space, and other concepts of physics, are really illusions, then surely so are cells and organisms and brains. Lanza says experience is something generated ‘inside your head’, but what head? The only knowledge we have of heads comes to us through, guess what, experience. The truth here seems to be that biology simply cannot take the role Lanza wants to assign to it, and in seeking to ‘get below’ physics, he ends up resorting to metaphysics, the only subject (with the possible exception of maths) which really does operate at an even more fundamental level.

Lanza puts forward a couple of other arguments to support his case. He appeals (curiously enough) to physics itself in the shape of quantum theory, which he suggests has eliminated the idea of a reality independent of perception. Like Berkeleyanism, the correct interpretation of quantum physics is a large subject, but my impression is that it would be at the radical end of the spectrum to suppose that it did away with the idea of an objective, independent reality altogether.

He also mentions the argument that many features of the universe seem to have been set up with great precision to allow the eventual possibility of life. If gravity or the strong nuclear force were slightly different from what they are, the world would never have been a habitable place. I’m not exactly certain about how Lanza means this to fit with his overall view, but it seems he must be suggesting that the world actually began, in some non-chronological sense, with human perception, which then extrapolated backwards the necessary conditions for its arising in a presumed physical world. If so, I’d like more explanation about how that would work and why it would necessarily give rise to these exquisitely precise physical constants: but the underlying anthropic argument seems weak to me.

First of all, the reasoning smacks of Warty Bliggens, the toad:

he explained that when the cosmos
was created
that toadstool was especially
planned for his personal
shelter from sun and rain
thought out and prepared
for him

It’s not surprising that the set-up of the Universe favours the existence of human beings because if it didn’t, we shouldn’t be here to worry about it (but perhaps something else would).

It may be, in fact I suspect it must be, that there are, as yet unknown to us, deep metaphysical reasons why the cosmos is the way it is and could not have been otherwise: in which case there’s no real scope for surprise about the way it turned out. But if the basic laws and constants are in some way arbitrary, as Lanza’s argument supposes, we can’t really claim to know what the range of possible universes, or the range of possible conscious entities, really is. It may be that tinkering slightly with a few of the current constants produces a world in which human beings cannot occur; but why stop there? If we vary the basic rules more fundamentally, it might well be that there are countless possible universes utterly unlike ours, containing innumerable multitudes of unimaginable thinking beings. In that case, it is again unsurprising that we should chance to occur in a possible world which happens to suit us.

So although it’s interesting to entertain the idea, I don’t think biology, for all its merits, can be enthroned as the most fundamental of sciences.

Thanks to Karen for telling me about Lanza’s theory and providing the link.

What, no kill-bots?

Kill-bot. You may have read a month or two ago about the rather scary robotic sentries which have been created: it seems they identify anything that moves and shoot it. Although it has a number of interesting implications, that technology does not seem an especially exciting piece of cybernetics. The BICA project (Biologically Inspired Cognitive Architectures) set up by DARPA (the people who, to all intents and purposes, brought us the Internet) is a very different kettle of fish.

The aim set out for the project was to achieve artificial cognition with a human level of flexibility and power, by bringing together the best of computational approaches and recent progress in neurobiology. Ultimately, of course, the application would be military, with robots and intelligent machines supporting or superseding human soldiers. In the first phase, a number of different sub-projects explored a range of angles. To my eyes there is a good deal of interesting stuff in the reports from this stage: if I had been sponsoring the project, I should have been afraid that each team would want to go on riding its own favourite hobby-horse to the exclusion of the project’s declared aims, but that does not seem to have happened.

In the second phase, the teams were to proceed to implementation, and the resulting machines were to compete against each other in a Cognitive Decathlon. Unfortunately, it seems there will be no second phase. No-one appears to know exactly why, but the project will go no further.

It could well be that the cancellation is the result of budget shifts within DARPA that have little or nothing to do with the project’s perceived worth. Another possibility is that the sponsors became uneasy with the basic idea of granting lethal hardware a mind of its own: the aim was to achieve the kind of cognition that allows the machine to cope with unexpected deviations from plan, and make sensible new decisions on the fly; but that necessarily involves the ability to go out of control spontaneously. It could also be that someone realised how difficult moving from design to implementation was going to be. It has always been easy to run up a good-looking high-level architecture for cognition, with the real problems having a tendency to get tidied up into a series of black boxes. This might have been one project where it was a mistake to start with the overall design. The plasticity of the human brain, and the existence of other brain layouts in creatures such as squid, suggest that the overall layout may not matter all that much, or at least that a range of different designs would all be perfectly viable if you could get the underlying mechanisms right.

There is another basic methodological issue here, though. When you start a project, you need to know what you’re trying to build and what it’s supposed to do: but no-one can really give a clear answer to those questions so far as human cognition is concerned. The BICA project was likened by some to the Apollo moon landings: but although the moon trips were hugely challenging, it was always clear what needed to be delivered, and in broad terms, how.

But what is human cognition actually for? We can say fairly clearly what some of the sub-systems do: analyse input from the eyes, for example, or ensure that the sentences we utter hang together properly. But high-level cognition itself?

From an evolutionary perspective, cognition clearly helps us survive: but that could equally be said of almost every organ and function of a human being, so it doesn’t help us define the distinctive function of thought. Following the line adopted by DARPA we could plausibly say that cognition frees us from the grasp of our instincts, helping us to deal much more effectively with novel situations, and exploit opportunities which would otherwise be neglected. But that doesn’t really pin it down, either: the fact that thoughtful behaviour is different from instinctive, pre-programmed behaviour doesn’t distinguish it from random behaviour or inertia, and pointing out that it’s often more successful behaviour just seems to beg the question.

It seems to be important that human-level cognition allows us to address situations which have not in fact occurred; we can identify the dangerous consequences of a possible course of action without trying it out, and enable ‘our hypotheses to die in our stead’. Perhaps we could describe cognition as another sense, the sense of the possible: our eyes allow us to consider what is around us, but our thoughts allow us to consider what would or might be. It’s surely more than that, though, since our imagination allows us to consider the impossible and the fantastic just as readily as the possible. As a definition, moreover, it’s still not much use to a designer, not least because the very concept of possibility is highly problematic.

Perhaps after all we were getting closer to the truth with the purely negative point that thoughtful behaviour is not instinctive. When evolution endowed us with high-level cognition, she took an unprecedented gamble; that cutting us loose to some degree from self-interested behaviour would in the end and overall lead to better self-interested behaviour. The gamble, so far, appears to have paid off; but just as the kill-bots could choose alternative victims, or perhaps become pacifists, human beings can (and do) kill themselves or choose not to reproduce. Perhaps the distinctive quality of cognition is its free, gratuitous character: its point is that it is pointless. That doesn’t seem to be much help to an engineer either.

Anyway, I think I can wait a bit longer for the kill-bots; but it seems a shame that the project didn’t go on a bit further, and perhaps illuminate these issues.

What is this thing called love?

Masks of Comedy and Tragedy. Edge has excerpted the first chapter of Marvin Minsky’s book on emotions, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. I clearly need to read the whole book, but I found the excerpt characteristically thought-provoking. As you might have expected, the general approach builds on the Society of Mind: emotions, it seems, allow us to activate the right set of resources (their nature deliberately kept vague) from the diffuse cloud available to us. So emotions really serve to enhance our performance: in young or unsophisticated organisms they may be a bit rough and ready, but in adult human beings they are under a degree of rational control and are subject to a higher level of continuity and moderation.

At first sight, explaining the emotions by identifying the useful jobs they do for us seems a promising line of investigation. Since we are the products of evolution, it seems a good hypothesis to suppose that the emotional states we have developed must have some positive survival value. The evidence, moreover, seems to support the idea to some degree: anger, for example, corresponds with physiological states which help get us ready for fighting. Love between mates presumably helps to establish a secure basis for the production and care of offspring. The survival value of fear is obvious.

However, on closer examination things are not so clear as they might be. What could the survival value of grief be? It seems to be entirely negative. Its physiological manifestations range from the damaging (loss of concentration and determination) to the surreal (excessive water flowing from the tear-ducts down the face). Darwin himself apparently found the crying of tears ‘a puzzler’ with no practical advantages either to modern humans or to any imaginable ancestor, or any intelligible relationship to any productive function – what has rinsing out your eyes got to do with the death of a mate or child? If it comes to that, even anger is not an unalloyed benefit: a man in the grip of rage is not necessarily in the best state to win an argument, and it’s surely even debatable whether he’s more likely to win a fight than someone who remains rational and judicious enough to employ sensible tactics.

One of the thoughts the extract provoked in me, albeit at a tangent to what Minsky is saying, concerned another possible problem with evolutionary arguments. The physiological story about an increased pulse rate and the rest of it is one thing, but does all that have to be accompanied by feeling angry? Can’t we perhaps imagine going through all the right physiological changes to equip us for fighting, fleeing, or whatever other activity seems salient, without having any particular feelings about it?

This sounds like qualia. If emotions are feelings which are detachable from physical events, are they, in themselves, qualia? I’m not quite sure what the orthodox view of this is: I’ve read discussions which take emotions to be qualia, or accompanied by them, but the canonical examples of qualia – seeing red and the rest of it – are purely sensory. The comparison, at any rate, is interesting. In the case of sensory qualia there are three elements involved: an object in the external physical world, the mechanical process of registration by the senses, and the ineffable experience of the thing in our minds, where the actual redness or sounding or smelliness occurs. In the case of the emotions, there isn’t really any external counterpart (although you may be angry or in love with someone, you perceive the anger or love as your own, not as one of the other person’s qualities): the only objective correlate of an emotional quale is our own physiological state.

Are emotional zombies really possible? It’s widely though not universally believed that we could behave exactly the way we do – perhaps be completely indistinguishable from our normal selves – and yet lack sensory qualia altogether. An emotional zombie, along similar lines, would have to be a person whose breathing and pulse quickened, whose face blushed and voice turned hoarse, and who was objectively aware of these physiological manifestations, but actually felt no emotion whatever. I think this is still conceivable, but it seems a little stranger and harder to accept than the sensory case. I think in the case of an emotional zombie I should feel inclined to hypothesise a kind of split personality, supposing that the emotions were indeed being felt somewhere or in some sense, but that there was also a kind of emotionless passenger lodged in the same brain. I don’t feel similarly tempted, with sensory zombies, to suppose that real subjective redness must be going on in a separate zone of consciousness somewhere in the mind.

The same difference in plausibility is visible from a different angle. In the case of sensory qualia, we can worry about whether our colour vision might one day be switched, so that what previously looked blue now looks yellow, and vice versa. I suppose we can entertain the idea that Smith, when he goes red in the face, shouts and bangs the table, is feeling what we would call love; but it seems more difficult to think that our own emotions could somehow be switched in a similar way. The phenomenal experience of emotions just seems to have stronger ties to the relevant behaviour than phenomenal experience of colours, say, has to the relevant sensory operations.

It might be that this has something to do with the absence of an external correlate for emotions, which leaves us feeling more certainty about them. We know our senses can mislead us about the external world, so we tend to distrust them slightly: in the case of emotions, there’s nothing external to be wrong about, and we therefore don’t see how we could really be wrong about our own emotions. Perhaps, not entirely logically, this accounts for a lesser willingness to believe in emotional zombies.

Or, just possibly, emotional qualia really are tied in some deeper way to volition. This is not so much a hypothesis as a gap where a hypothesis might be, since I should need a plausible account of volition before the idea could really take shape. But one thing in favour of this line of investigation is that it holds out some hope of explaining what the good of phenomenal experience really is, something lacking from most accounts. If we could come up with a good answer to that, our evolutionary arguments might gain real traction at last.

Too thin? Too rich?

Disappearing foot. Just before you read this sentence, were you consciously aware of your left foot? Eric Schwitzgebel set out to resolve the question in the latest edition of the JCS.

In normal circumstances, we are bombarded by impressions from all directions. Our senses are constantly telling us about the sights, sounds and smells around us, and also about where our feet are, how they feel in their shoes, how hungry we currently feel, whether our sore calf muscle is any better at the moment; and what spurious reasoning some piece of text on the internet is trying to spin for us. But most of the time, most of this information is ignored. In some sense, it’s always there, but only the small subset of things which are currently receiving our attention are, as it were, brightly lit.

There’s little doubt about this basic scenario. Notoriously, when we drive along a familiar route, the details drop into the background of our mind and we start to think about something else. When we arrive at our destination, we may not remember anything much about the journey: but clearly we could see the road and hear the engine at all relevant times, or we probably shouldn’t have been able to finish the journey. On the other hand, suppose as we were driving along, the sound of a baby crying had unexpectedly drifted over from the back seat: would we have failed to notice that feature of the background?

So we have two (at least two) levels of awareness going on. Schwitzgebel poses the question: which do we regard as conscious? On the “thin” view, we’re only really conscious of the things we’re explicitly thinking about. No doubt the other stuff is in some sense available to consciousness, and no doubt bits of it can pop up into consciousness when they trigger some unconscious alarm; but it’s not actually in our consciousness. How else, the thinnists might ask, are we going to make the distinction between the two different levels? The rich view is that everything should be included: I may not be thinking about my foot at all times, but to suggest that I only know where it is subconsciously, or unconsciously, seems ridiculous.

Schwitzgebel does not think either side has particularly strong arguments. Both are inclined to provide examples, or assert their case, and expect the conclusion to seem obvious. Searle has argued that we couldn’t switch attention unless we were conscious of the thing we were switching our attention to: Mack and Rock have done experiments showing that while paying close attention to one thing we may fail to notice other things: but neither of these lines of discussion really seems to provide what you would call a knock-down case.

Accordingly, with many reservations, Schwitzgebel set up an experiment of his own. The subjects wore a beeper which went off at a random period up to an hour after being set: they then recorded what they were conscious of immediately beforehand (it’s important, of course, to keep the delay minimal, otherwise the issue gets entangled with problems of memory). The subjects were divided into groups and asked to focus on tactile, visual and total sensory experience.
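
The sampling procedure itself is simple enough to sketch in code (the timings and wording here are my own placeholders, not Schwitzgebel’s exact protocol):

```python
import random
import time

def run_sampling_session(samples=3, max_wait_seconds=3600):
    """Beep at a random moment within each hour and ask for an immediate report."""
    for i in range(samples):
        time.sleep(random.uniform(0, max_wait_seconds))   # the wearable beeper's random delay
        print('\a*** BEEP ***')
        report = input(f"Sample {i + 1}: what were you conscious of just before the beep? ")
        print(f"Recorded: {report!r}")

# run_sampling_session()   # uncomment to try it; expect waits of up to an hour per sample
```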

The results supported neither the thin nor the rich position. Perhaps the most interesting finding is the degree of surprise evoked in the subjects. In a departure from normal experimental method, Schwitzgebel used as subjects philosophy postgrads who could reasonably be expected to have some established prejudices in the field: he also spent time explaining the experiment and talking over the issues, and recorded whether each subject was a thinnist or richist at the start. Although this involved some risk of skewing the results, it allowed the discovery that thinnists actually often found themselves having rich experience, and vice versa.

Where does that leave us? It seems almost as though the dilemma is merely reinforced: the research points towards some compromise, but it’s hard to see where we can find room for one. The results did seem to reinforce the existing general agreement that there really are two distinct levels or aspects of consciousness at work. Wouldn’t one solution, then, be to give both neutral labels (con-1 and con-2?) and leave it at that? That may be what one of Schwitzgebel’s subjects, who apparently dismissed the whole thing as ‘linguistic’, had in mind. But it’s not a very comfortable position to dismiss the concept of consciousness in favour of two hazy new ones. Schwitzgebel, rightly, I think, considers that the difference between thinnism and richism is real and significant.

My best guess for a neatish answer is that we’re simply dealing with pure first order consciousness and the same thing combined with second order consciousness. In other words, the dim constant awareness of everything being reported by our senses, really is conscious, but it’s a region of consciousness we’re not conscious of being conscious of. By contrast, we’re not only conscious of the things at the forefront of our minds, we’re also aware of being conscious of them. (It might well be that second-order consciousness is what animals largely or wholly lack – I wonder if thinnists also tend to be sceptics about animal consciousness?)

Alas, that’s not really a compromise: it seems to make me a kind of richist.