Insect thoughts

Insects are conscious: in fact, they were the first conscious entities. At least, Barron and Klein think so. The gist of the argument, which draws on the theories of Bjorn Merker, is that subjective consciousness arises from certain brain systems that create a model of the organism in the world. The authors suggest that the key part of the vertebrate brain for these purposes is the midbrain; insects do not, in fact, have a direct structural analogue, but the authors argue that they have other structures that evidently generate the same kind of unified model; it should therefore be presumed that they have consciousness.

Of course, it’s usually the cortex that gets credit for the ‘higher’ forms of cognition, and it does seem to be responsible for a lot of the fancier stuff. Barron and Klein, however, argue that damage to the midbrain tends to be fatal to consciousness, while damage to the cortex can leave it impaired in content but essentially intact. They propose that the midbrain integrates two different sets of inputs; external sensory ones make their way down via the colliculus while internal messages about the state of the organism come up via the hypothalamus; nuclei in the middle bring them together in a model of the world around the organism which guides its behaviour. It’s that centralised model that produces subjective consciousness. Organisms that respond directly to stimuli in a decentralised way may still produce complex behaviour but they lack consciousness, as do those that centralise the processing but lack the required model.

Traditionally it has often been assumed that the insect nervous system is decentralised; but Barron and Klein say this view is outdated and they present evidence that although the structures are different, the central complex of the insect system integrates external and internal data, forming a model which is used to control behaviour in very much the same kind of process seen in vertebrates. This seems convincing enough to me; interestingly the recruitment of insects means that the nature of the argument changes into something more abstract and functional.

Does it work, though? Why would a model with this kind of functional property give rise to consciousness – and what kind of consciousness are we talking about? The authors make it clear that they are not concerned with reflective consciousness or any variety of higher-order consciousness, where we know that we know and are aware of our awareness. They say what they’re after is basic subjective consciousness and they speak of there being ‘something it is like’, the phrase used by Nagel which has come to define qualia, the subjective items of experience. However, Barron and Klein cannot be describing qualia-style consciousness. To see why, consider two of the thought-experiments defining qualia. Chalmers’s zombie twin is physically exactly like Chalmers, yet lacks qualia. Mary the colour scientist knows all the science about colour vision there could ever be, but she doesn’t know qualia. It follows rather strongly that no anatomical evidence can ever show whether or not any creature has qualia. If possession of a human brain doesn’t clinch the case for the zombie, broadly similar structures in other organisms can hardly do so; if science doesn’t tell Mary about qualia it can’t tell us either.

It seems possible that Barron and Klein are actually hunting a non-qualic kind of subjective consciousness, which would be a perfectly respectable project; but the fact that their consciousness arises out of a model which helps determine behaviour suggests to me that they are really in pursuit of what Ned Block characterised as access consciousness; the sort that actually gets decisions made rather than the sort that gives rise to ineffable feels.

It does make sense that a model might be essential to that; by setting up a model, the brain has in effect created a world of its own, which sounds rather like what consciousness does.
Is it enough, though? Suppose we talk about robots for a moment: if we had a machine that created a basic model of the world and used that model to govern its progress through the world, would we say it was conscious? I rather doubt it; such robots are not unknown, and some are relatively simple. A robot of this kind might do no more than scan the positions of some blocks and calculate a path between them; perhaps we should call that rudimentary consciousness, but the suggestion doesn’t seem persuasive.
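To see how modest such a machine can be, here is a minimal sketch (entirely hypothetical: the grid world, names, and use of breadth-first search are my own illustration, not anything Barron and Klein describe) of a ‘robot’ that holds an internal map of block positions and plans a path through it:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a tiny grid 'world model'.

    grid: list of strings; '#' marks a block, '.' free space.
    start, goal: (row, col) tuples.
    Returns a shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable in the model.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}   # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the path by walking back through predecessors.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#'
                    and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

# The robot's entire 'model of the world': two blocks in a 3x4 room.
world = ["....",
         ".##.",
         "...."]
path = plan_path(world, (0, 0), (2, 3))
```

The point is that the whole ‘world model’ here is a dozen characters of grid, and the ‘behaviour guided by the model’ is a textbook graph search; nothing about it tempts us to ascribe consciousness.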

Briefly, I suspect there is a missing ingredient. It may well be true that a unified model of the world is necessary for consciousness, but I doubt that it’s sufficient. My guess is that one or both of the following is also necessary: first, the right kind of complexity in the processing of the model; second, the right kind of relations between the model and the world – in particular, I’d suggest there has to be intentionality. Barron and Klein might contend that the kind of model they have in mind delivers that, or that another system can do so, but I think there are some important further things to be clarified before I welcome insects into the family of the conscious.

32 thoughts on “Insect thoughts”

  1. Thanks Peter – you say:

    “It may well be true that a unified model of the world is necessary for consciousness, but I doubt that it’s sufficient. My guess is that one or both of the following is also necessary: first, the right kind of complexity in the processing of the model…”

    Agreed. The question is what sorts of processing would entail the existence of qualitative states (qualia) for the system? The explanation has to connect the features of qualia (e.g., their non-decomposability, basicness, privacy, correspondence to regularities in the body and environment) to the functional requirements and recursive limitations of representation carried out in maintaining the system’s world model. If such an explanation is developed, then we’ll be able to safely assume that any system instantiating the necessary representational architecture is conscious. Perhaps insects will fill the bill, depending on their neural configuration.

    “…second, the right kind of relations between the model and the world – in particular, I’d suggest there has to be intentionality.”

    I’m not sure about this, since although it’s undoubtedly the case that the world model has to be about the world (including the body) in enough detail to be behaviorally effective, we know from dreaming that there need be no online, moment-to-moment transaction between the model and the world for conscious states to exist. They exist strictly as a function of the model’s functional-representational architecture being activated, both when the system is offline (asleep and dreaming) and when it’s online (interacting in real time with the world itself).

  2. “Organisms that respond directly to stimuli in a decentralised way may still produce complex behaviour but they lack consciousness, as do those that centralise the processing but lack the required model.”

    https://youtu.be/OyrqGK6_PXo

  3. I’m not sure, Peter, that insects can be looked at as somehow lacking intentionality.
    There is a growing trend to attribute intentionality to animals. Searle, Varela and others have considered that option. Biological intentionality, evolutionary intentionality and bio-intentionality have already been part of various approaches and publications (more on this at http://philpapers.org/rec/MENBAM-2)

  4. If subjectivity is necessary for what we call “consciousness”, then an insect must have the particular kind of brain mechanism that gives subjectivity before we can say it is conscious. As far as I know, there is only one kind of such a mechanism, and there is no evidence that insects have it. See “A foundation for the scientific study of consciousness” on my Research Gate page.

  5. An interesting post, Peter. Have you considered the ‘waggle dance’ of bees? This informs other worker bees in the hive about the direction and distance of a source of food. Doesn’t the dance have semantic content, or intentionality?

    The processes of gathering the spatial information whilst foraging, and then communicating that information to others upon return, suggests that there is ‘something it is like’ to be a bee involved in this communication.

  6. Perhaps it is the usage of intentionality that needs more consideration.
    The concept of intentionality was born at a time when mental states and consciousness were regarded as mostly human performances (Brentano 1838-1917; Darwin 1809-1882).
    The ‘aboutness’ of mental states was a concept invented for human minds. And the two main philosophical schools (Phenomenology and Analytical Philosophy) have not really considered evolutionary theory as an interesting tool (more on that in Cunningham, S. (1996) ‘Philosophy and the Darwinian Legacy’).
    Apart from some exceptions (like Searle), many philosophers kept on implicitly disregarding the possibility of animal intentionality.
    But the good news is that things are changing (see Asma S. T. (2014) ‘Teleology Rises from the Grave: Biological Intentionality’).
    My take is that intentionality (closely related to meaning) is to be used for any agent, even for artificial ones where the intentionality/aboutness/meaning is derived from the designer. Such an approach also brings in criteria for distinguishing different types of agents.

  7. How about slime mold consciousness? ->

    A single-celled organism capable of learning

    https://www.sciencedaily.com/releases/2016/04/160427081533.htm

    “For the first time, scientists have demonstrated that an organism devoid of a nervous system is capable of learning. Biologists have succeeded in showing that a single-celled organism, the protist, is capable of a type of learning called habituation. This discovery throws light on the origins of learning ability during evolution, even before the appearance of a nervous system and brain. It may also raise questions as to the learning capacities of other extremely simple organisms such as viruses and bacteria”
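    For what it’s worth, habituation – the type of learning demonstrated there – is formally very simple: a response that declines with repeated stimulation and recovers with rest. A toy sketch (with made-up parameters, not fitted to any organism) shows how little machinery suffices:

```python
class Habituator:
    """Toy model of habituation: the response to a repeated stimulus
    decays with each exposure and recovers during rest.
    Parameter values are illustrative only."""

    def __init__(self, decay=0.5, recovery=0.1):
        self.sensitivity = 1.0    # full response initially
        self.decay = decay        # fraction of sensitivity lost per stimulus
        self.recovery = recovery  # sensitivity regained per rest step

    def stimulate(self):
        """Respond to a stimulus, then habituate a little."""
        response = self.sensitivity
        self.sensitivity *= (1 - self.decay)
        return response

    def rest(self):
        """Recover some sensitivity, capped at the initial level."""
        self.sensitivity = min(1.0, self.sensitivity + self.recovery)

h = Habituator()
responses = [h.stimulate() for _ in range(4)]  # dwindling responses
for _ in range(10):
    h.rest()                                   # sensitivity recovers
```

    That a single cell can implement something like this is biologically striking, but the mechanism itself carries no obvious implication of consciousness.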

  8. A larva of the moth _Sabulodes aegrotata_ in captivity, just after a molt. The larva’s thorax bears three pairs of true legs with tiny claws at the ends. Normally, these legs are used, along with the prolegs extending from the abdomen, to grasp the host plant that the larva lives on (in both senses: diet and physical support). After a larva molts, it usually eats its shed exoskeleton, except for the head capsule, which comes off separately. This larva has just used its first pair of true legs to pull off its old head capsule, which it will discard.

    http://s641.photobucket.com/user/OhLook/media/HeadMolt113c_zpscvjcu49b.jpg.html

    The behavior seems pretty conscious to me!

  9. Complex behavior without subjectivity is not conscious behavior. Consider the reflexive program in action when a spider builds a web. Machines can be designed to do extremely complex actions, but we would not consider them conscious entities.

  10. Peter, how would a non-qualic kind of consciousness qualify as being subjective? It seems to me that “Access” consciousness without its “P” counterpart amounts to cognition, rather than consciousness; e.g. predictive processing theories (Friston, Hinton, and many others) do not claim to be theories of consciousness.
    Conversely, the stipulated connection between complexity (dynamics) and consciousness can (and I believe should) be questioned, and indeed Integrated Information Theory (Tononi; in its latest, dynamics-incorporating version) is being criticized for positing unsupported claims regarding phenomenality.
    Merker, in fact, believes content impoverished consciousness can manifest in subjects with severely restricted world modeling capacities, e.g. decorticate mammals.

    Yet even if one accepts that the notion of representational complexity is pertinent to the subject, the notion of centralized processing should be scrutinized:
    Merker proposed the SC, a brainstem structure, as a candidate for the integrative seat, or “locus”, of consciousness (and I believe Damasio expressed similar if not equivalent views, and so have many others). Yet what could matter more is the overall dynamical repertoire of the “network” as its activity is expressed over time.
    After all, a human midbrain is larger than a whole insect, and why should it matter whether the distance between units firing simultaneously is 200 microns or 5 cm?
    It may be the case that the richness of expressive patterns, and measures of coherence (relative to particular loci?), are what we should be assessing, rather than connectivity or localisation per se.

  11. Non-qualia but subjective? It’s a very good question, Ron, and it’s true that access consciousness is by definition meant to be the other kind, so I have to agree that something looks a bit wrong.
    I do think you could be a qualia sceptic and still believe in some form of subjectivity; but that’s not what is going on here. It could be that I haven’t understood the conceptual set-up properly but to be honest I think it’s more that Barron and Klein were fishing for one sort of consciousness but hooked a different one.

  12. In the retinoid theory of consciousness qualia are what perception provides, and subjectivity/consciousness is a necessary precondition for perception. So it does seem that subjectivity can exist without qualia.

  13. 12. Arnold Trehub says:

    Complex behavior without subjectivity is not conscious behavior. Consider the reflexive program in action when a spider builds a web. Machines can be designed to do extremely complex actions, but we would not consider them conscious entities.

    Well, when the “behaving” entity is a machine, we know the answer: it’s not conscious; credit for its apparent intelligence must go to its designer. For arthropods, the judgment is harder to make. They do many things that may or may not require subjectivity. Web building in spiders is partly programmed in by the history of the particular species, but the spider has to choose a location where, for instance, branches are a suitable distance apart. Jumping spiders (Salticidae) stalk and hunt prey. They have excellent vision, which would be useless if they were unaware of objects in their visual field. I don’t assume lack of subjectivity as the default.

  14. Cognicious: “They have excellent vision, which would be useless if they were unaware of objects in their visual field.”

    Detection and recognition of objects can be useful but it does not imply awareness of objects. Machine translation of text involves such recognition but certainly does not suggest that the machine is aware of what it is “seeing”.

  15. Dr. Trehub, what sort of evidence would you require as suggesting (never mind proving, for the moment) that animals that differ greatly from us are aware? Machines are automatons, but they aren’t necessarily the best models for understanding animals, as animals aren’t machines and didn’t come about in the way machines did. I’m inclined to think that consciousness is likely to have evolved more than once, as vision and flight did, because it’s so useful to survival.

  16. Cognicious,

    This will depend on your working definition of awareness/consciousness. In retinoid theory the foundation of what it is like to be conscious is like being at the perspectival origin of a volumetric surround. This defines subjectivity and requires particular kinds of neuronal mechanisms in the brain. So, in addition to complex behavior, I would look for evidence that a creature has a brain sufficiently large and complex to accommodate such mechanisms. Incidentally, it is my guess that all mammals are conscious, as well as birds and some Cephalopoda.

  17. In retinoid theory the foundation of what it is like to be conscious is like being at the perspectival origin of a volumetric surround. This defines subjectivity and requires particular kinds of neuronal mechanisms in the brain.

    Retinoid theory is very vision-based. For humans, of course, vision is the dominant sense, at least when dealing with things in the world beyond one’s own body. (For one’s body, tactile and kinesthetic stimuli run the show.) I want to raise the possibility that animals with “alien” lifestyles and sensory apparatus are aware, but their inner lives differ from ours. They wouldn’t have to be great seers to earn admittance to the consciousness elite. Isn’t that approximately what Nagel said about bats?

  18. Cognicious,

    Retinoid theory works for all sensory systems. It is not vision-based, but it is based on the assumption that any conscious creature must have an internal representation of the space in which it lives from its own unique perspectival origin. Any kind of sensory input can provide subjective content to this fundamental spatiotemporal plenum.

  19. I agree that yet again there is confusion between phenomena and representation – i.e. integration of information ’causes’ consciousness, clearly not a realistic use of the word ’cause’. It’s not impossible that it’s a parallel process (at some level) but it’s pretty speculative, let’s face it. But at least they are looking at insects and not delving straight into speculation about human brains. Insects are good topics of study too – you can chop them up without worrying too much (that’s because it’s spectacularly unlikely they have much in the way of consciousness...)

    If two systems are identical and one causes consciousness – and one doesn’t – then they are not identical, are they? Zombies don’t exist. They don’t exist in real life and they don’t exist on paper – given the paper variety must adhere to the norms of scientific sense. In the world of thought experiments the clue is in the word ‘identical’. Assuming that consciousness has a material cause, the same physical causes produce the same consequences. Ergo, two physically identical persons will both have the same mental faculties. And the same height. And the same teeth.

    All the time that has been wasted on these zombies... it’s zomboid

    J

  20. John:

    “If two systems are identical and one causes consciousness – and one doesn’t – then they are not identical, are they? Zombies don’t exist.”

    The zombie thought experiment is meant to show that we can conceive of experience being absent in the presence of its physical correlates and associated functions. We can conceive of this only because we don’t yet have a good theory of why the existence of consciousness is entailed by those functions. Once the theory is in hand, we’ll see that perfect duplicates of conscious creatures can’t be zombies: they will be conscious as a matter of some sort of psycho-physical, functional-representational necessity.

    “integration of information ’causes’ consciousness, clearly not a realistic use of the word ’cause’.”

    Agreed. Consciousness can’t be caused in any standard sense, for instance as a physical effect of a system carrying out certain operations such as information integration. Otherwise it would be detectable as that effect, which it isn’t. The entailment from the system’s operations to it being conscious has to be non-causal, I’d suggest, see “Options for explaining consciousness” at http://www.naturalism.org/philosophy/consciousness/the-appearance-of-reality#toc-2-options-for-explaining-consciousness-T24LKMq-

  21. Tom

    Just because we can conceive of zombies doesn’t mean to say that we should, or that it has one ounce of application. Thought experiments have to be tied – however flimsily – to reality. And it is not conceivable – on that basis – that two identical systems could produce consciousness in one case and not the other. That’s because there is nothing in this world that would suggest that two identical systems have different properties. In reality it’s tautological – the very meaning of describing two real-world independent systems as identical means there is no way of distinguishing them other than by, for instance, geographic location.

    So unless we are to conclude that consciousness appears as if by magic, two identical real-world systems cannot be conceived of as having different properties, including the consciousness they generate.

    There is, as far as I’m concerned, not the slightest problem in saying that matter causes consciousness. There is no ambiguity about it: it’s obviously true; there is nothing ‘un-normal’ about it because there is nothing ‘normal’ about the universe in the first place.

    The only perspectives that have a problem with it are those dogmatized by physics. Physics is a man-made practice: it is not the truth. It can’t predict consciousness. So what? Is the only alternative to think that thermometers have emotions? That we should have symposiums about zombies? Give me a break.

    Jbd

  22. John Davey, #24: Thought experiments have to be tied – however flimsily – to reality.

    The zombie thought experiment nevertheless has value, just as a reductio ad absurdum does. Something useful can be inferred from the observation that the result of this experiment, for everyone who thinks clearly, is a failure to replicate.

  23. Richard

    Just because most brain activity – as measured volumetrically – isn’t conscious, doesn’t make it ‘zomboid’.

    Human beings are 60% water. Does that make them puddles? Does it mean that the other 40% isn’t doing something interesting? That the water in the 60% isn’t related to all the other chemicals in the 40%?

    Jbd

  24. Cognicious

    The zombie argument is partly ‘prior’ reductio ad absurdum – and partly physics propaganda.

    The idea that two identical systems could result in different outcomes is obviously a false basis upon which to premise an experiment. Absurd, you might say.

    The claim that you can’t assume a person’s consciousness from the presence of a brain is pure physics propaganda. Of course you can assume that something possessing a brain can be conscious. Of course the presence of a brain is enough. Just because physics can’t leap from syntax to semantics doesn’t mean we can’t say the emperor is naked.

    J

  25. John, re. com.27: The majority of people on this site, as with many others, are what Peter Hacker would refer to as ‘brainsians’. Having just returned from TSC 2016 in Tucson, I would say there were a lot of people there who also think it is all about the brain as a computer. Add to this various ethereal beliefs and different versions of what consciousness is, and you have impossible discussions with incoherent comments, as yours and others often show.

  26. Richard

    I’m sorry if you don’t understand the comment.

    A zombie is 100% unconscious (on paper, at least). It’s the 100% that matters. Just because a certain proportion of thinking in humans is unconscious does not characterise it as ‘zomboid’. It’s like describing somebody as 30% pregnant, or 10% dead.

    There are thousands of people in Tucson who believe that God made the earth in 7 days. I’m not interested in the weight of opinion, rather in rational sense. And computationalism is irrational nonsense, garnered by a mass of public money and ‘experts’ into an unstoppable force, like phrenology or homeopathy. Propaganda defines the debate – which in a sense I find more interesting than the debate itself. But it’s destructive.

    J

  27. I would say that subjective P-consciousness is simply awareness of a creature’s immediate environment and the ability to navigate that environment and interact with it in a purposeful and flexible way. That means amoebas are conscious, and that subjective consciousness was hijacked later by brainy animals which then injected higher-level thoughts into those already existing subjective worlds.

    see: http://philpapers.org/rec/SLESA
