The Philosophy of Delirium

Is there any philosophy of delirium? I remember asserting breezily in the past that there was philosophy of everything – including the actual philosophy of everything and the philosophy of philosophy. But when asked recently, I couldn’t come up with anything specifically on delirium, which in a way is surprising, given that it is an interesting mental state.

Hume, I gather, described two diseases of philosophy, characterised by either despair or unrealistic optimism in the face of the special difficulties a philosopher faces. The negative over-reaction he characterised as melancholy, the positive as delirium, in its euphoric sense. But that is not what we are after.

Historically I think that if delirium came up in discussion at all, it was bracketed with other delusional states, hallucinations and errors. Those, of course, have an abundant literature going back many centuries. The possibility of error in our perceptions has been responsible for the persistent (but surely erroneous) view that we never perceive reality, only sense-data, or only our idea of reality, or only a cognitive model of reality. The search for certainty in the face of the constant possibility of error motivated Descartes and arguably most of epistemology.

Clinically, delirium is an organically caused state of confusion. Philosophically, I suggest we should seize on another feature, namely that it can involve derangement of both perception and cognition. Let’s use the special power of fiat used by philosophers to create new races of zombies, generate second earths, and enslave the population of China, and say that philosophical delirium is defined exactly as that particular conjunction of derangements. So we can then define three distinct kinds of mental disturbance. First, delusion, where our thinking mind is working fine but has bizarre perceptions presented to it. Second, madness, where our perceptions are fine, but our mental responses make no sense. Third, delirium, in which distorted perceptions meet with distorted cognition.

The question, then, is: can delirium, so defined, actually be distinguished from delusion and madness? Suppose we have a subject who persistently tries to eat their hat. One reading is that the subject perceives the Homburg as a hamburger. A second reading is that they perceive the hat correctly, but think it is appropriate to eat hats. The delirious reading might be that they see the hat as a shoe and believe shoes are to be eaten. For any possible set of behaviours, it seems, suitable readings can achieve consistency with any of the three possible states.

That’s from a third-person point of view, of course, but surely the subject knows which state applies? They can’t reliably tell us, because their utterances are open to multiple interpretations too, but inwardly they know, don’t they? Well, no. The deluded person thinks the world really is bizarre; the mad one is presumably unaware of the madness, and the delirious subject is barred from knowing the true position on both counts. Does it, then, make any sense to uphold the existence of any real distinction? Might we not better say that the three possibilities are really no more than rival diagnostic strategies, which may or may not work better in different cases, but have no absolute validity?

Can we perhaps fall back on consistency? Someone with delusions may see a convincing oasis out in the desert, but if a moment later it becomes a mountain, rational faculties will allow them to notice that something is amiss, and hypothesise that their sensory inputs are unreliable. However, a subject of Cartesian calibre would have to consider the possibility that they are actually just mistaken in their beliefs about their own experiences; in fact it always seemed to be a mountain. So once again the distinctions fall away.

Delusion and madness are all very well in their way, but delirium has a unique appeal in that it could be invisible. Suppose my perceptions are all subject to a consistent but complex form of distortion, but my responses have an exquisitely apposite complementary twist, which means that the two sets of errors cancel out and my actual behaviour, and everything that I say, come out pretty much like those of some tediously sane and normal character. I am as delirious as can be, but you’d never know. Would I know? My mental states are so addled and my grip on reality so contorted, it hardly seems I could know anything; but if you question me about what I’m thinking, my responses all sound perfectly fine, just like those of my twin who doesn’t have invisible delirium.
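To see in toy form how the two sets of errors could cancel, here is a minimal sketch in Python (the labels and mappings are invented purely for illustration, in the spirit of the hat example above):

    # A toy model of 'invisible delirium': a consistent distortion of perception
    # paired with a complementary twist in the response policy, so that outward
    # behaviour is indistinguishable from the normal case.
    NORMAL_POLICY = {"hat": "wear it", "hamburger": "eat it", "shoe": "put it on"}

    DISTORT = {"hat": "shoe", "hamburger": "hat", "shoe": "hamburger"}  # perception goes wrong
    CORRECT = {v: k for k, v in DISTORT.items()}                        # cognition twists it back

    def normal_agent(percept):
        return NORMAL_POLICY[percept]

    def invisibly_delirious_agent(percept):
        distorted = DISTORT[percept]          # what I (mis)perceive
        compensated = CORRECT[distorted]      # the exquisitely apposite correction
        return NORMAL_POLICY[compensated]     # behaviour comes out normal

    # Outwardly, the two agents are indistinguishable.
    assert all(normal_agent(p) == invisibly_delirious_agent(p) for p in NORMAL_POLICY)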

We might be tempted to say that invisible delirium is no delirium; my thoughts are determined by the functioning of my cognitive processes, and since those end up working fine, it makes no sense to believe in some inner place where things go all wrong for a while.

But what if I get super invisible delirium? In this wonderful syndrome, my inputs and outputs are mangled in complementary ways again, but by great good fortune the garbled version actually works faster and better than normal. Far from seeming confused, I now seem to understand stuff better and more deeply than before. After all, isn’t reaching this kind of state why people spend time meditating and doing drugs?

But perhaps I am falling prey to the euphoric condition diagnosed by Hume…

29 thoughts on “The Philosophy of Delirium”

  1. Ah, you’ve got me to a Tee here: “Suppose my perceptions are all subject to a consistent but complex form of distortion; but my responses have an exquisitely apposite complementary twist, which means that the two sets of errors cancel out and my actual behaviour and everything that I say, come out pretty much like those of some tediously sane and normal character. I am as delirious as can be, but you’d never know. … But what if I get super invisible delirium? In this wonderful syndrome, my inputs and outputs are mangled in complementary ways again, but by great good fortune the garbled version actually works faster and better than normal. Far from seeming confused, I now seem to understand stuff better and more deeply than before.”

  2. I agree that there can certainly be errors of perception. But I find it most illogical that you contrast the views that a) we somehow perceive reality (directly) and b) we only perceive sense-data, or a few other choices. It is my understanding that “perceive” means to receive and interpret sense data. How could we ever perceive reality directly, as opposed to perceiving sense data or something else our brains do?

  3. No, I think to perceive is merely to become aware. If you bake sense-data into the definition, you’re begging the question.

    It may be that we receive sense-data as part of perception (I have some reservations about ‘data’ being a seductively misleading word in a neurological context). But we surely don’t perceive those sense-data. If perception requires intermediating sense-data, and we perceive the sense-data, we need more sense-data to intermediate that perception and so on. Whoosh, infinite regress.

  4. How does one distinguish erroneous from error-free perception? Is it possible in principle to do so without accepting that error-free perception perceives the actual world?

  5. Re #3, OK, I see where you’re coming from. But to me, the word “perceive” is transitive. Perceiving means to perceive something. And I suppose the idea of intransitive perception goes a long way toward explaining how you could imagine zombies. And what consciousness really is. To me, consciousness is directly tied to the subject matter being sensed. When nothing is being sensed, I am not conscious (asleep, dead, or whatever).

  6. If you could be “aware” without perception, what are you aware of? How do you know you are “aware” if there is no sense data?

  7. I try to let my imagination run wild, and here’s where I get to. Suppose you are an apparently comatose patient. You show no outward signs of awareness, but breathing and heartbeat give away the fact that you are alive. So what do you sense? My first thought is that there must be bodily senses. But maybe not. Maybe those are also shut off. Then what? My next thought is that the brain is notoriously inventive. If nothing is coming in, it will make up a story. And that must be the origin of visions, seeing a bright light, and so forth.

  8. OK, so I ask, could there be a moment before the visions kick in? What would be the state? What does that mean? I suppose it would be like a computer running a “DO nothing” loop. But would that be consciousness? I say NO. Consciousness is like the program having something to do. If there is nothing to do, there would be no consciousness. For me, it is the running of that program which creates consciousness. The “DO nothing” loop is synonymous with unconsciousness.

  9. But, you could say, the program IS running, even if it be in a “DO nothing” loop. And I say, yes, the heart is also beating, but that does not mean consciousness.

  10. Michael – don’t we accept that error-free perception perceives the actual world?

    Lloyd – no, that’s not what I meant. Change my first sentence so it says: ‘to perceive X is merely to become aware of X’.

    Whereas for some reason I think you believe that ‘to perceive X is to receive sense-data about X’.

  11. Peter, I am very much interested in nailing down your concept of perception, kicking the tires, as it were.

    We can start with ‘to perceive X is merely to become aware of X’. Now my personal position is that all processes, including mental and/or conscious processes, can be described by the formula Input -> agent -> Output. So putting your statement (‘to perceive X is merely to become aware of X’) into that form we get something like Input (sensory data of X) -> agent -> Output (state of being aware of X).
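    For concreteness, here is a minimal sketch of that schema in code (the names and types are invented and purely illustrative):

        # Input (sensory data of X) -> agent -> Output (state of being aware of X)
        from dataclasses import dataclass

        @dataclass
        class Awareness:
            of: str                              # the X the agent has become aware of

        def agent(sensory_data_of_x: str) -> Awareness:
            # stand-in for whatever processing the agent actually does
            return Awareness(of=sensory_data_of_x)

        state = agent("red ball")                # a state of being aware of "red ball"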

    Does that seem reasonable to you?

    *

  12. To #10 (and maybe #11): What would it mean to be aware of something, other than to receive sensory data about it? OK. Maybe somebody told you about it. But I would include that as having received sensory data. You could perhaps imagine it. But that would suppose that you had received sensory data some time in the past. And that raises an entirely separate issue: what is the relation between memory and consciousness?

  13. Before proceeding, chaps, can I just enter this reservation? The fact that I chose to diss the sense-data theory of perception in passing doesn’t mean I’m launching my own fully-annotated theory of perception.

    James, no, I decline the ride. I can’t see why you would adopt that arbitrary schema and your formula doesn’t represent what I said correctly. Also you’ve actually written in ‘sensory data’ which openly begs the question against me!

    Lloyd: awareness and receiving sense-data cannot be identical, because for one thing we know that sense-data can be received without generating awareness. It’s more to do with certain internal states that they may or may not give rise to.

    More generally, I’d say something like this. Awareness is a mental state with a relation to an object such that the object is mentally available to influence my thoughts and actions. I have absolutely no need to mention sense-data.

  14. OK #13. I’m puzzled about your assertion that sense-data “can be received” without generating awareness. I would agree that you could no doubt detect EEG signals from various sensory areas during some sort of comatose state, but that could hardly be called “receiving sensory data”. How do you get to a mental state in relation to an object without sensory perception of the object? OK, maybe memory. But as I said earlier, that’s yet another ballpark.

  15. On the other hand … If paragraph 1 of #13 is a backtrack, I’m perfectly willing to let it drop at that. In fact, I don’t understand any of this, either.

  16. I think I took too strong a tone back in #2. I would love for this to be a forum where we can explore each other’s understandings of what consciousness is all about.

  17. …that could hardly be called “receiving sensory data”

    Why not (seriously, why do you think not – I think you’re on the edge of seeing the point)? But in any case we’re certainly not aware of all the sensory data we receive, unless you’re going to just define ‘receive’ as ‘be aware of’.

    …How do you get to a mental state… OK, maybe memory…

    Or imagination, inference, all kinds of thought. But that’s a different question. You weren’t asserting that the receipt of sensory data caused awareness, you were saying it was awareness. Have you changed your mind?

    I certainly haven’t; I can’t imagine why you think #13 might be a backtrack.

    I would love for this to be a forum where we can explore each other’s understandings…

    So what have we been doing, chopping liver already?

  18. OK, liver chopping aside: enough said, let’s get on with it.

    I propose a (visual) sensory scenario:
    Time A: A photon hits the retina (nothing else has happened).
    Time B: A rod or cone cell fires (sense-data received? Arguably yes; I would argue no). And already we’re at a point of one-to-many and many-to-one in photons vs. neural firings.
    Time C: Nerve impulses cross the retina to the optic nerve (sense-data, if “received”, still not meaningful).
    Times I,J,K: A neuron somewhere on the top of the thalamus fires, and foveal tracking is done (aware? Maybe in some blindsight cases; generally, no).
    Times R,S,T: Late occipital lobe image analysis; color has gone a separate way, up into the parietal lobe (aware? Still, generally, no).
    Times W,X,Y: Now it gets interesting. Various sensory events are being integrated. A low-level kind of time adjustment has happened; Libet-style time shifts are still not done. There is an interesting case involving a neural cluster called the claustrum (the name means “hidden thing”). It seems to connect a lot of sensory stuff together. An early 2004 paper has a good, but old, description (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1569501/). A newer 2017 paper asks what it’s all about and recounts an episode of a patient with an electrode in the claustrum: when the probe is activated, the patient instantly becomes comatose; upon turning off the probe, the patient resumes talking as if nothing had happened (https://www.psychologytoday.com/blog/consciousness-self-organization-and-neuroscience/201702/what-the-heck-is-claustrum). Christof Koch gives a talk at a Feb 2017 NIH meeting (https://videocast.nih.gov/summary.asp?live=21673&bhcp=139); at time 2:11:27 he shows a 3D slide of the projection of a single claustrum neuron. It goes, literally, everywhere. (Aware? Yes, pretty sure.)
    Time Z: Assuming the event passed all kinds of hurdles, mostly attention issues, we are fairly clearly aware of it. Unless it got shoved aside anywhere down the line.

    At what point in the scenario would you put: Event occurs, Sense-data received, Awareness, Consciousness?
    For me, awareness and consciousness are the same thing. Obviously not the same as the first event. Where do you put “sense-data reception”?
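    To make the question concrete, here is the same scenario in schematic form (just a restatement of the tentative verdicts above, nothing new):

        # The stages above with my tentative "aware?" verdicts; the open question
        # is where "sense-data received" and "consciousness" should be placed.
        AWARE = [
            ("A: photon hits the retina",                   "no"),
            ("B: rod or cone cell fires",                   "no"),
            ("C: impulses cross retina to the optic nerve", "no"),
            ("I,J,K: thalamus fires, foveal tracking",      "generally no (some blindsight cases?)"),
            ("R,S,T: occipital / parietal analysis",        "still, generally, no"),
            ("W,X,Y: integration (claustrum and friends)",  "yes, pretty sure"),
            ("Z: past the attention hurdles",               "fairly clearly yes"),
        ]
        for stage, verdict in AWARE:
            print(f"{stage:48} aware? {verdict}")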

  19. Peter #13, fair enough. My schema is not arbitrary, but the explanation is long, so I’ll bide my time for now and see how you answer Lloyd’s question.

    BTW, Lloyd, I think you need more time points after Z. Specifically, I think you need signals going from the neocortex back to the thalamus and basal ganglia.

    *

  20. James: You are surely right about that. I make no claim that processing stops. I’m just trying to see when consciousness might first arise. My time “stamps” should have reflected that.

  21. Lloyd, you may not have noticed, but my hypothesis is that consciousness, as most people think of it, aka the autobiographical self, happens with the signals coming from the neocortex to the thalamus, etc.

    So again, my hypothesis: lots of processing in the neocortex, but what “we are aware of” is what comes back from the neocortex, more or less.

    *

  22. Re James #19: Upon further reflection, I wonder just how correct you were to point that out. My goal of “first consciousness” may well not occur until the later stages of integration that could happen as a result of the additional “thalamic round trips” you describe. My first thought was that consciousness is a kind of “first impression” of the sensory input. But the integration of additional information made possible by the thalamic relay circuits could certainly help fill out the picture.

  23. Lloyd, of course there’s a long science story about vision. But that’s about the ‘how’. We’re talking about the ‘what’. What we see is what we’re looking at out there, not any of the things in your story. That doesn’t imply spooky direct knowledge, merely that we can bracket everything in your account together and call it perception. The fact that the causal story has stages doesn’t invalidate the result any more than my having to pass through Peterborough on the way means my destination isn’t Edinburgh. That’s really all I’m saying.

  24. Yes, I get all of that. But I do not agree that you can really talk about the ‘what’ without taking all of the ‘how’ into account. Perhaps that’s my primary issue with philosophy, the tendency to ignore a lot of the ‘how’. OK. Consciousness somehow ‘arises’, but not out of thin air. I am not in the slightest sense a Cartesian. Consciousness is produced by our brains as we go about living. I don’t believe it’s mysterious. To the contrary, I believe it can be directly explained by examining the mechanisms. Agreed that we cannot yet do that, but that is no excuse for hand waving. We must explore and dig. I believe that we will find that the very process of computing all of the operations which we call perception will produce the sensations of consciousness in the entity. Once we understand (and recreate) those operations, consciousness will result.

  25. James #21: I don’t think I completely agree that consciousness requires neocortical actions. I do not go as far as Peter Martin’s hope of explaining the consciousness of a thermostat, but I do believe that any entity with enough processing to examine and respond to the environment … will experience some form of consciousness. I had to put the dots in there because I do not know how to state exactly what it is I believe. As I first said it, I realized that it would apply to the thermostat. I think there has to be some capability to map the environment. Many have argued that there must also be some kind of representation of a “self”. I tend to think that a “self” arises directly from the interaction. But there can be no doubt that our extensive knowledge of self is a major factor in the kind of consciousness that we, as humans, experience.

  26. James, I have no doubt that the thalamo-cortical loops contribute to the rich form of consciousness that we humans experience. Let me say that another way. I believe there are many levels of consciousness. That our human variety includes a bunch of stuff relating to thought and all the other stuff our brains do. But at a lower level, I believe simpler forms of consciousness exist. For example, an ant, having been over a trail, has some sort of memory of that trail. That memory must be available to the ant in some form of a “view” of the trail. What that view looks like to the ant, we can only speculate. But I have no doubt that the ant “sees” something.

  27. Lloyd, I absolutely agree with you about the nature of consciousness, and I think I can fill in the gaps a bit. What’s missing in your ellipsis (…) is a role for symbolic semantic information. (That will take a long explanation, as I noted above.)

    So I also agree that ants have a degree of consciousness. But I also think consciousness can be modular (not sure “modular” is the right word). What I mean by that is simpler conscious agents can combine to form more complex conscious agents. Thus, an ant colony has a more complex consciousness.

    When I was discussing the neocortical/thalamus interface I was referencing what Damasio calls the autobiographical self, which is presumably the largest and most complex agent in the brain. He also recognizes a protoself (identified with the brain stem, I think), as well as a core self. It is this autobiographical “self” that most people are referring to when they talk about their consciousness. My hypothesis is that the physical mechanism of this autobiographical self is performed by certain subcortical elements (thalamus, claustrum, basal ganglia, etc.). The umwelt of this autobiographical self is mostly the neocortex.

    Now again, I think you are correct about determining the mechanisms of this consciousness, but this will not get you past your disconnect with Peter, because he is looking from the functional perspective. The functional perspective is completely disconnected from the physical description. As Putnam pointed out, functionalism is not incompatible with dualism. So for the experience of seeing a red ball, you can describe all of the physical things that happen, but somewhere in that description there will be a squirt of neurotransmitter that the autobiographical self sees as “red ball”. From the functional perspective the meaning of that squirt simply is “red ball”. I’m hypothesizing that that squirt is the one that comes from a cortical column in the neocortex and lands on the thalamus.

    *

  28. I always thought Damasio was one of the better writers on the subjects of who and what we are. But it has been several years since I read anything of his. My views have changed somewhat since.

    I pretty much disagree about the ant colony because the only available memory would be that of the individuals. I cannot see those individual memories as being able to support any sort of a “global” consciousness.

  29. Back in #18, time K, I said foveal tracking is done. That was wrong. Foveal tracking is an ongoing process of building a picture in a memory. The particular kind of memory needed for that fades very quickly, but lasts long enough for the fovea to be moved around as directed by attention mechanisms. As quickly as one image is used, another forms. Later processes, at least in the occipital and almost certainly elsewhere (yes, thalamo-cortical), build up the image as seen by the organism.
