Picture: Kernel Architecture.

Picture: Bitbucket. I’ve been meaning to mention the paper in Issue 7 of the current volume of the JCS, which had about the most exciting abstract I can remember seeing. The paper is “Why Axiomatic Models of Being Conscious?”. Abstracts of academic papers don’t tend to be all that thrillingly written, but this one, by Igor Aleksander and Helen Morton, proposes to break consciousness down into five components and offers a ‘kernel architecture’ to put them back together again. It offers ‘an appropriate way of doing science on a first-person phenomenon’.

The tone was quite matter-of-fact, of course, but merely analysing consciousness to this extent seems remarkable to me. The mere phrase ‘axioms of consciousness’ has the same kind of exotic ring as ‘the philosopher’s stone’ in my mind: never mind proposing an overall architecture. Is this a breakthrough at last? Even if it’s not right, it must be interesting.

Picture: Blandula. Yes, my eyebrows practically flew off the top of my forehead at some of the implied claims in that abstract. But not altogether in a good way. Perhaps we really are in the same realm of myth and confusion as the philosopher’s stone. Any excitement was undercut by the certainty that the paper would prove to have missed the point or otherwise failed to deliver; we’ve been here so many times before. Moreover, you know, even those axioms are not exactly a new discovery – Aleksander, with a different collaborator, first floated them in 2003.

Picture: Bitbucket. OK, so maybe I missed them at the time, but surely we ought to look at them with an open mind rather than assuming failure is a certainty? The five are stated in the first person, in accordance with the introspective basis of the paper.

  1. Presence: I feel that I am an entity in the world that is outside of me.
  2. Imagination: I can recall previous sensory experience as a more or less degraded version of that experience. Driven by language, I can imagine experiences I never had.
  3. Attention: I am selectively conscious of the world outside of me and can select sensory events I wish to imagine.
  4. Volition: I can imagine the results of taking actions and select an action I wish to take.
  5. Emotion: I evaluate events and the expected results of actions according to criteria usually called emotions.

That seems an interesting and pretty comprehensive list, though clearly it’s impossible to adopt a bold approach like this without raising a lot of different issues.

Picture: Blandula. You can say that again. Just look at number 5, for example. Apparently emotions are criteria? My criteria were so strong I just burst into tears? The sight of my true love’s face filled my heart with an inexpressibly deep criterion? And their role is to help evaluate events and the result of actions? I mean, apart from the fact that emotions generally interfere with our evaluation of events and actions, that just isn’t the essence of emotion at all. I can sit here listening to Bach and be swept along by a whole range of profound emotions which I can hardly even put a name to – I feel exalted, energised, but that doesn’t really cover it – without any connection to events or possible actions whatever.

If that isn’t enough, Aleksander and Morton claim to be developing an introspective analysis, rather than a functional one. But emotions, it turns out, are basically there to condition our actions. So that’s an introspective definition?

Picture: Bitbucket. Not the point. We’re not trying to capture the ineffable innerness of things here, we’re trying to set up a scientific framework; and from the vantage point of that framework – guess what? It turns out the ineffable innerness is entirely negligible and adds nothing to our scientific understanding.

I said in the first place there are two parts to the enterprise: the analysis and then the construction of the architecture. The analysis starts from an introspective view, but it would be absurd to think an architecture could have no regard to function. If that pollutes the non-functionalist purity of the approach from a philosophical point of view, who cares? We’re not interested in which scholastic labels to apply to the theory, we’re interested in whether it’s true.

Picture: Blandula. Well, you say that, but Aleksander and Morton seem to want to draw some philosophical conclusions – and so do you. They reckon their analysis shows that there is no real ‘hard problem’ of consciousness. I’m not convinced. For one thing, their attack seems to be quite tightly tied to the Chalmersian version of the hard problem. If subjective states can’t be rigorously related to physical states, they say, it opens the way to zombies, people who are physically like us but have no inner life. That would be Chalmers’ view, maybe, and I grant that it has attained something close to canonical status. But it’s quite possible to disbelieve in the possibility of zombies and still find the relation between the physical and the subjective profoundly problematic.

Much more fundamental than that, though, their whole approach seems to me yet another instance of a phenomenon we might call ‘scientist’s slide’. Virtually all the terms in this field can be given two values. We can talk about a robot’s ‘actions’, meaning just the way it moves, or we can talk about ‘actions’, meaning the freely-willed deliberate behaviour of a conscious agent. We can talk about our PC ‘thinking’ in an inoffensive sense that just means computational processing, or we can talk about ‘thinking’ in the subjective, conscious sense that no-one would attribute to an ordinary computer on their desk. To put it more technically, the same words in ordinary language can often be used to refer either to access consciousness, a-consciousness, the ‘easy problem’ kind, or to phenomenal consciousness, p-consciousness, the ‘hard problem’ kind.

Now over and over again scientists in this area have fallen into the trap of announcing that their theory explains ‘real’ p-consciousness, but sliding into explaining a-consciousness without even noticing. Not that the explanation of a-consciousness is trivial or uninteresting: but it ain’t the hard problem. I suspect it happens in part because people with a strong scientific background have to overcome a lot of ingrained empiricist mental habits just to grasp the idea of p-consciousness at all: they can do it, but when they get involved in developing a theory and their attention is divided, it just slips away from them again. Not meaning to be rude about scientists: it’s the same with philosophers when they just about manage to get their heads round quantum physics, but in discussion it transmutes into magic pixie dust.

Picture: Bitbucket. No, no: you’re mistaking a deliberate assertion for a confusion. It’s not that Aleksander and Morton don’t grasp the concept of p-consciousness: they understand it perfectly, but they challenge its utility.

Let me challenge you on this. Take a case where we can’t shelter behind philosophical obfuscation, where we have to make a practical decision. Suppose we talk about the morality of killing or hurting animals. Now I believe you would say that we should not harm animals because they feel pain in somewhat the same way we do. But how do we know? Philosophically you can’t have any certainty that other humans feel pain, let alone animals.

In practice, I submit, you base your decision on knowledge of the nervous system of the creature in question. We know human beings have brains and nerves just like ours, so we attribute similar feelings to them. Mammals and other creatures with large brains are also assumed to have at least some feelings. By the time we get down to ants, we don’t really care: ants are capable of very complex behaviour, maybe, but they don’t have very big brains, so we don’t worry about their feelings. Plants are living things, but have no nervous system at all, and so we care no more about them than about inanimate objects.

Now I don’t think you can deny any of that – so if we stick to practical reasoning, what are we going to look at when we decide the criteria for ‘proper’ consciousness? It has to be the architecture of the brain and its processes. That’s where the paper is aiming: rational criteria for deciding such issues as whether an animal is conscious or whether it has higher order thought. If we can get this tied down properly to the relevant neurology, it may, you know, be possible to bypass the philosophy altogether.

8 Comments

  1. Luigi Semenzato says:

    Howdy, I have a couple of specific comments and some general comments.

    “1. Presence: I feel that I am an entity in the world that is outside of me.”

    What does this mean? “Feel”, “entity”, “world outside of me” are extremely vague terms. Not that I disagree—in fact I share these feelings, but that’s just about all I can say. The sentence has at most poetic value, no scientific value. (My high-school literature professor used to say that “poetry is truth”, and I won’t disagree—but it’s still not scientific truth.)

    “2. Imagination: I can recall previous sensory experience as a more or less degraded version of that experience. Driven by language, I can imagine experiences I never had.”

    Why “driven by language”? I can imagine experiences I never had without any recourse to language. For instance, I can imagine doing a backward loop off a wave on a windsurf board by putting together the feelings of a simple jump (which I have done) and a looping rollercoaster. Unless “language” here means something else.

    Now for the general comments.

    I have been intrigued by this stuff for a long time (and who would not be?) but never looked at any “studies” on the topic, if you exclude high-school classes in the history of philosophy (from Liceo Classico in Italy). I was pleased to find this site fairly quickly. The author/maintainer of this site is doing a great job. But I am *amazed* at the amount of nonsense in the field, even by authors at respectable institutions.

    Take Searle’s Chinese translator paradox. As Pauli used to say: “This is not right. This is not even wrong!” What exactly is the point? There is no paradox about a guy who translates without understanding what he is doing. In fact, why limit him to the translation task? With a few more rules, he could simulate a brain (unless there is something non-physical about brains, which remains to be proven and seems unlikely). If the simulation is good enough, the brain would appear to be slow but conscious—but clearly it would not be the same consciousness as that of the guy who is carrying out the simulation.

    In fact, it’s even worse than that. We can talk to each other about consciousness as much as we want, and this will lead most of us to assume that we all have one. But the quality of the feeling of one’s “self” is something that cannot be shared. I can only know that I have this feeling. I can’t say this about anybody else. As far as I know, the rest of the world could be just elaborate machinery.

    There is a long way to go and no good foundation yet (that I see). But of course it’s important to go on. We need to understand the mechanisms that produce these feelings and thoughts in order to improve the quality of our lives and to deal with poorly understood moral issues. The understanding will help free us of religions and other superstitions and help us cope with mortality and other existential issues. So I thank those who work on this subject and wish them luck.

    Luigi

  2. Jack Josephy says:

    Hi, I’m a final-year philosophy and cognitive science student. I just started a blog on philosophy of mind and cognitive science. I’d be up for exchanging ideas, and I’m trying to get as many readers as possible to my blog.

    [Thanks, Jack: I’ll have a look at that – Peter]

  3. Addofio says:

    A small comment on Luigi’s comment:

    Axioms by their very nature are vague and rely on intuitive understanding. One can always point to specific terms in any axiom in any axiomatic system and note that the meaning of that term is not clearly defined. I offer as paradigmatic Euclid’s system–which starts with undefined terms and axioms based on those undefined terms. If one does not already have an intuitive sense of what a “line” is, or what “straight” means, it would be hard to gain any traction from those axioms. But we do already have a strong intuitive sense of what we mean by those terms, and the axioms consequently fueled a great deal of good mathematics. Which, over a very long haul culturally speaking, eventually resulted in much clearer axioms and ways of talking about concepts such as “straight”.

    So how should a set of axioms be evaluated? Smarter minds than mine have tackled this question for mathematical systems, but I submit the following criteria: a good set of axioms for a system will induce a strong sense in most people that they do understand what the axiom is saying and agree with it, even if they can’t define it very clearly; the set of axioms will seem to cover all the bases (which I’m not sure this one does); and a coherent, self-consistent system of thought can be built up from the set of axioms. Eventually the system of thought may progress to the point that the axioms themselves can be improved, or lead us into entirely new realms of thought in which one or more of the axioms no longer applies, but such an outcome would be far down the road.

    Since the set of axioms offered by Aleksander and Morton is intended as a basis for a scientific, and therefore empirical, system, rather than a mathematical system, presumably other criteria would also apply. But my basic point is that the vagueness of terms such as “feel”, “entity”, “world outside me” does not invalidate the axioms, because any set of axioms will contain vague (ill-defined, undefined/undefinable) terms.

  4. Luigi Semenzato says:

    Addofio, thanks for pointing out that axioms in a mathematical system also contain undefinable terms. But it seems to me the axioms of Euclidean geometry have qualities (simplicity, elegance, a high level of agreement on their intuitive meaning) that are missing here. You claim these axioms are a basis for a scientific/empirical system, rather than a mathematical one. What does that mean? Are there other examples (successful, preferably) of such axioms?

    Thanks
    Luigi

  5. Addofio says:

    When I made the point about axioms for a scientific system, I wasn’t making any positive assertion–I was inserting a waffle/disclaimer based on my relative ignorance :-) . That is, I don’t know what role axioms would play for a scientific theory–I don’t know enough science, nor philosophy of science. However, I could venture the following:

    In a mathematical system, a set of axioms, undefined terms, etc. are used as the starting point for a logical system, and that system is explored deductively, with the ultimate criterion of acceptability being finding no logical contradictions.

    In a scientific system, one would be exploring implications not only by working out the logic of the system, but also by generating testable hypotheses and testing them. Two fundamental criteria must be met as the system develops: logical consistency, and empirical compatibility with reality, with, arguably, the latter being the more significant.

    In both, the generation of interesting results may be the important thing, especially at first, not the perfection of the starting set of axioms and terms. For instance, calculus went along just fine for rather a long time on a somewhat foggy basis; the basic ideas were only cleaned up and made precise once the subject was well established as useful. If people had dismissed it at the beginning because “infinitesimal” was ill-defined–which it was–we would have lost one of the most important mathematical developments of the modern world.

    So, for A & M’s set of axioms, perhaps the best question is, does it get us anywhere? Does it give us any new insights or ideas, generate any testable hypotheses? I certainly don’t know–seems a bit early to say.

    Which is not to say that the axioms should be exempt from criticism–just that if everyone waits for the perfect set before beginning to explore the implications of a system, we’ll never get started. And I agree that their axioms can use some work just on the face of it to achieve better agreement on intuitive meaning. If I were creating a comparable set of axioms, I’d probably use language closer to everyday language and less “scientific” in order to draw more on our intuition–but then, I’m sure no respectable journal would publish that!

  6. Christophe Menant says:

    How many axioms for consciousness?

    Consciousness is a complex subject, and many types of consciousness can be proposed (1) (2). Since an axiom is a statement that cannot be deduced from other statements, an axiomatic approach to consciousness amounts to identifying the foundational types of consciousness.
    I. Aleksander and H. Morton have proposed five axioms of consciousness. We would like to address their status as axioms.
    First axiom: “I feel that I am an entity in the world that is outside of me”. This sentence introduces phenomenal consciousness (PC) and self-consciousness (SC): PC through the “I feel” that expresses the subjective component of felt experience, SC through the subject considering herself as an entity existing in the outside world, which means possessing some concept of the self. To complete this coverage of SC, the following could be added: “the ability to use this concept in thinking about oneself” (1). On this basis the first axiom could be extended to “I feel that I am an entity in the world that is outside of me, and I can think about that entity”. PC and SC cannot be explained by anything else. The first axiom is a true axiom.
    Second axiom: “I can recall previous sensory experience as a more or less degraded version of that experience. Driven by language, I can imagine experiences I never had”. The sentence covers three subjects: conscious recall of past experience, conscious imagination of new experiences, and language. The first two items concern the action of conscious control on memory and simulation, which are well-understood data-processing functions. Conscious control is a performance made possible by consciousness; it can be deduced from the first axiom. Language is indeed a key performance of humans, and some philosophers have made the existence of consciousness conditional on the possession of language, but the validity of this position has not been demonstrated. We feel it is quite natural to consider that language came up during evolution in synergy with consciousness (3). So, on the hypothesis that language is a natural product of evolution, the second axiom can be deduced from the first one.
    Third axiom: “I am selectively conscious of the world outside of me and can select sensory events I wish to imagine”. This axiom is about selective conscious action and imagination. These points have been covered by conscious control and imagination when looking at the second axiom. Like the second axiom, the third one can be deduced from the first one.
    Fourth axiom: “I can imagine the results of taking actions and select an action I wish to take”. This axiom is about the action of conscious control on an optimization function (selection of the best action among the ones imagined). Conscious control has already been deduced from the first axiom and optimization is a data processing function. This axiom also can be deduced from the first one.
    Fifth axiom: “I evaluate events and the expected results of actions according to criteria usually called emotions”. Evaluation of events or of expected results of actions is about observation, imagination and interpretation. These functions are independent of consciousness and PC + SC introduced by axiom 1 can make them conscious if needed. Emotion belongs to living organisms. Human emotions have some specificities coming from consciousness, like the feeling of being in an emotional state. But the root of emotions is in living organisms, not in humans. So the fifth axiom can also be deduced from the first one.
    Where does this leave us?
    The five axioms do not all look like true axioms. The first is a true axiom, as it introduces SC and PC. The others combine SC and PC with various data-processing functions and, on the hypothesis that language is a natural product of evolution, can be deduced from the first; they do not look like true axioms. We would argue that the first axiom, extended to “I feel that I am an entity in the world that is outside of me, and I can think about that entity”, is enough to cover the four others.
    This brings us back to the two types of consciousness that are at the core of the problem of consciousness: phenomenal consciousness (the hard problem) and self-consciousness (which is not an easy problem).
    We feel that these two types of consciousness are to be addressed through the evolutionary process that created them (3).

    (1) http://cogprints.org/231/
    (2) http://cogprints.org/3808/
    (3) http://cogprints.org/4957/

  7. Arcos Plage says:

    Does Pandeism solve the hard problem of consciousness? See Intriguing Metaphysical Parallels between the Consciousness Debate and Pandeism for a discussion.

  8. Peter says:

    Thanks for that, Arcos – sorry your comment was entangled in my anti-virus software for a few days.
