I’ve been meaning to mention the paper in Issue 7 of the current volume of the JCS, which had about the most exciting abstract I can remember seeing. The paper is “Why Axiomatic Models of Being Conscious?”. Abstracts of academic papers don’t tend to be all that thrillingly written, but this one, by Igor Aleksander and Helen Morton, proposes to break consciousness down into five components and offers a ‘kernel architecture’ to put them back together again. It offers ‘an appropriate way of doing science on a first-person phenomenon’.
The tone was quite matter-of-fact, of course, but merely analysing consciousness to this extent seems remarkable to me. The mere phrase ‘axioms of consciousness’ has the same kind of exotic ring as ‘the philosopher’s stone’ in my mind: never mind proposing an overall architecture. Is this a breakthrough at last? Even if it’s not right, it must be interesting.
Yes, my eyebrows practically flew off the top of my forehead at some of the implied claims in that abstract. But not altogether in a good way. Perhaps we really are in the same realm of myth and confusion as the philosopher’s stone. Any excitement was undercut by the certainty that the paper would prove to have missed the point or otherwise failed to deliver; we’ve been here so many times before. Moreover, you know, even those axioms are not exactly a new discovery – Aleksander, with a different collaborator, first floated them in 2003.
OK, so maybe I missed them at the time, but surely we ought to look at them with an open mind rather than assuming failure is a certainty? The five are stated in the first person, in accordance with the introspective basis of the paper.
- Presence: I feel that I am an entity in the world that is outside of me.
- Imagination: I can recall previous sensory experience as a more or less degraded version of that experience. Driven by language, I can imagine experiences I never had.
- Attention: I am selectively conscious of the world outside of me and can select sensory events I wish to imagine.
- Volition: I can imagine the results of taking actions and select an action I wish to take.
- Emotion: I evaluate events and the expected results of actions according to criteria usually called emotions.
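The five axioms read rather like an interface specification for the proposed kernel architecture. Purely as an illustration of that reading (the class and method names below are my own invention, not anything from the paper), the decomposition might be sketched as:

```python
from abc import ABC, abstractmethod

class ConsciousKernel(ABC):
    """Hypothetical sketch of the five-axiom decomposition as an
    interface; names are illustrative, not taken from the paper."""

    @abstractmethod
    def presence(self, world_state):
        """Axiom 1: represent the self as an entity in an outside world."""

    @abstractmethod
    def imagine(self, cue):
        """Axiom 2: recall or recombine (degraded) sensory experience."""

    @abstractmethod
    def attend(self, stimuli):
        """Axiom 3: select which sensory events to be conscious of."""

    @abstractmethod
    def will(self, options):
        """Axiom 4: imagine the results of actions and select one."""

    @abstractmethod
    def evaluate(self, event):
        """Axiom 5: score events by emotion-like criteria."""

class ToyKernel(ConsciousKernel):
    """Minimal concrete stand-in showing how the five parts might interact."""
    def presence(self, world_state):
        return {"self": "here", "world": world_state}
    def imagine(self, cue):
        return f"degraded memory of {cue}"
    def attend(self, stimuli):
        # Crude 'salience': the longest stimulus wins attention.
        return max(stimuli, key=len)
    def will(self, options):
        # Volition leans on emotion: pick the option that evaluates highest.
        return max(options, key=self.evaluate)
    def evaluate(self, event):
        # Toy 'emotional' criterion: longer descriptions score higher.
        return len(event)

kernel = ToyKernel()
print(kernel.attend(["buzz", "a sudden loud crash"]))  # 'a sudden loud crash'
print(kernel.will(["nod", "run away"]))                # 'run away'
```

Note how, even in a toy like this, volition ends up calling the emotion axiom to rank options; that interdependence is exactly what the discussion below picks at.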
That seems an interesting and pretty comprehensive list, though clearly it’s impossible to adopt a bold approach like this without raising a lot of different issues.
You can say that again. Just look at number 5, for example. Apparently emotions are criteria? My criteria were so strong I just burst into tears? The sight of my true love’s face filled my heart with an inexpressibly deep criterion? And their role is to help evaluate events and the result of actions? I mean, apart from the fact that emotions generally interfere with our evaluation of events and actions, that just isn’t the essence of emotion at all. I can sit here listening to Bach and be swept along by a whole range of profound emotions which I can hardly even put a name to – I feel exalted, energised, but that doesn’t really cover it – without any connection to events or possible actions whatever.
If that isn’t enough, Aleksander and Morton claim to be developing an introspective analysis, rather than a functional one. But emotions, it turns out, are basically there to condition our actions. So that’s an introspective definition?
Not the point. We’re not trying to capture the ineffable innerness of things here, we’re trying to set up a scientific framework; and from the vantage point of that framework – guess what? It turns out the ineffable innerness is entirely negligible and adds nothing to our scientific understanding.
As I said in the first place, there are two parts to the enterprise: the analysis, and then the construction of the architecture. The analysis starts from an introspective view, but it would be absurd to think an architecture could have no regard to function. If that pollutes the non-functionalist purity of the approach from a philosophical point of view, who cares? We’re not interested in which scholastic labels to apply to the theory, we’re interested in whether it’s true.
Well, you say that, but Aleksander and Morton seem to want to draw some philosophical conclusions – and so do you. They reckon their analysis shows that there is no real ‘hard problem’ of consciousness. I’m not convinced. For one thing, their attack seems to be quite tightly tied to the Chalmersian version of the hard problem. If subjective states can’t be rigorously related to physical states, they say, it opens the way to zombies, people who are physically like us but have no inner life. That would be Chalmers’ view, maybe, and I grant that it has attained something close to canonical status. But it’s quite possible to disbelieve in the possibility of zombies and still find the relation between the physical and the subjective profoundly problematic.
Much more fundamental than that, though, their whole approach seems to me yet another instance of a phenomenon we might call ‘scientist’s slide’. Virtually all the terms in this field can be given two values. We can talk about a robot’s ‘actions’, meaning just the way it moves, or we can talk about ‘actions’, meaning the freely-willed deliberate behaviour of a conscious agent. We can talk about our PC ‘thinking’ in an inoffensive sense that just means computational processing, or we can talk about ‘thinking’ in the subjective, conscious sense that no-one would attribute to an ordinary computer on their desk. To put it more technically, the same words in ordinary language can often be used to refer either to access consciousness, a-consciousness, the ‘easy problem’ kind, or to phenomenal consciousness, p-consciousness, the ‘hard problem’ kind.
Now over and over again scientists in this area have fallen into the trap of announcing that their theory explains ‘real’ p-consciousness, but sliding into explaining a-consciousness without even noticing. Not that the explanation of a-consciousness is trivial or uninteresting: but it ain’t the hard problem. I suspect it happens in part because people with a strong scientific background have to overcome a lot of ingrained empiricist mental habits just to grasp the idea of p-consciousness at all: they can do it, but when they get involved in developing a theory and their attention is divided, it just slips away from them again. Not meaning to be rude about scientists: it’s the same with philosophers when they just about manage to get their heads round quantum physics, but in discussion it transmutes into magic pixie dust.
No, no: you’re mistaking a deliberate assertion for a confusion. It’s not that Aleksander and Morton don’t grasp the concept of p-consciousness: they understand it perfectly, but they challenge its utility.
Let me challenge you on this. Take a case where we can’t shelter behind philosophical obfuscation, where we have to make a practical decision. Suppose we talk about the morality of killing or hurting animals. Now I believe you would say that we should not harm animals because they feel pain in somewhat the same way we do. But how do we know? Philosophically you can’t have any certainty that other humans feel pain, let alone animals.
In practice, I submit, you base your decision on knowledge of the nervous system of the creature in question. We know human beings have brains and nerves just like ours, so we attribute similar feelings to them. Mammals and other creatures with large brains are also assumed to have at least some feelings. By the time we get down to ants, we don’t really care: ants are capable of very complex behaviour, maybe, but they don’t have very big brains, so we don’t worry about their feelings. Plants are living things, but have no nervous system at all, and so we care no more about them than about inanimate objects.
Now I don’t think you can deny any of that – so if we stick to practical reasoning, what are we going to look at when we decide the criteria for ‘proper’ consciousness? It has to be the architecture of the brain and its processes. That’s where the paper is aiming: rational criteria for deciding such issues as whether an animal is conscious or whether it has higher order thought. If we can get this tied down properly to the relevant neurology, it may, you know, be possible to bypass the philosophy altogether.