Here’s a thought…

Where do thoughts come from? Alva Noë provides a nice commentary here on an interesting paper by Melissa Ellamil et al. The paper reports on research into the origin of spontaneous thoughts.

The research used subjects trained in Mahasi Vipassana mindfulness techniques. They were asked to report the occurrence of thoughts during sessions when they were either left alone or provided with verbal stimuli. As well as reporting the occurrence of a thought, they were asked to categorise it as image, narrative, emotion or bodily sensation (seems a little restrictive to me – I can imagine having two at once or a thought that doesn’t fit any of the categories). At the same time brain activity was measured by fMRI scan.

Overall the study found many regions implicated in the generation of spontaneous thought; the researchers point to the hippocampus as a region of particular interest, but there were plenty of other areas involved. A common view is that when our attention is not actively engaged with tasks or challenges in the external world the brain operates the Default Mode Network (DMN), a set of neuronal areas which appear to produce detached thought (we touched on this a while ago). The new research complicates this picture somewhat, or at least suggests that the DMN is not the unique source of spontaneous thoughts. Even when we’re disengaged from real events we may be engaged with the outside world via memory or in other ways.

Noë’s short commentary rightly points to the problem involved in using specially trained subjects. Normal subjects find it difficult to report their thoughts accurately; the Vipassana techniques provide practice in being aware of what’s going on in the mind, and this is meant to enhance the accuracy of the results. However, as Noë says, there’s no objective way to be sure that these reports are really more accurate. The trained subjects feel more confidence in their reports, but there’s no way to confirm that the confidence is justified. In fact we could go further and suggest that the special training they have undertaken may even make their experience particularly unrepresentative of most minds; it might be systematically changing their experience. These problems echo the methodological ones faced by early psychologists such as Wundt and Titchener with trained subjects. I suppose Ellamil et al might retort that mindfulness is unlikely to have changed the fundamental neural architecture of the brain and that their choice of subject most likely just provided greater consistency.

Where do ‘spontaneous’ thoughts come from? First we should be clear what we mean by a spontaneous thought. There are several kinds of thought we would probably want to exclude. Sometimes our thoughts are consciously directed; if for example we have set ourselves to solve a problem we may choose to follow a particular strategy or procedure. There are lots of different ways to do this, which I won’t attempt to explore in detail: we might hold different aspects of the problem in mind in sequence; if we’re making a plan we might work through imagined events; or we might even follow a formal procedure of some kind. We could argue that even in these cases what we usually control is the focus of attention, rather than the actual generation of thoughts, but it seems clear enough that this kind of thinking is not ‘spontaneous’ in the expected sense. It is interesting to note in passing that this ability to control our own thoughts implies an ability to divide our minds into controller and executor, or at least to quickly alternate those roles.

Also to be excluded are thoughts provoked directly by outside events. A match is struck in a dark theatre; everyone’s eyes saccade involuntarily to the point of light. Less automatically a whole variety of events can take hold of our attention and send our thoughts in a new direction. As well as purely external events, the sources in such cases might include interventions from non-mental parts of our own bodies; a pain in the foot, an empty stomach.

Third, we should exclude thoughts that are part of a coherent ongoing chain of conscious cogitation. These ‘normal’ thoughts are not being directed like our problem-solving efforts, but they follow a thread of relevance; by some connection one follows on from the next.

What we’re after then is thoughts that appear unbidden, unprompted, and with no perceivable connection with the thoughts that recently preceded them. Where do they come from? It could be that mere random neuronal noise sometimes generates new thoughts, but to me it seems unlikely to be a major contributor: such thoughts would be likely to resemble random nonsense, and most of our spontaneous thoughts seem to make a little more sense than that.

We noticed above that when directing our thoughts we seem to be able to split ourselves into controller and controlled. As well as passing control up to a super-controller we sometimes pass it down, for example to the part of our mind that gets on with the details of driving along a route while the surface of our mind is engaged with other things. Clearly some part of our mind goes on thinking about which turnings to take; is it possible that one or more parts of our mind similarly go on thinking about other topics, but then at some trigger moment insert a significant thought back into the main conscious stream? A ‘silent’ thinking part of us like this might be a permanent feature, a regular sub- or unconscious mind; or it might be that we occasionally drop threads of thought that descend out of the light of attention for a while but continue unheard before popping back up and terminating. We might perhaps have several such threads ruminating away in the background; ordinary conscious thought often seems rather multi-threaded. Perhaps we keep dreaming while awake and just don’t know it?

There’s a basic problem here in that our knowledge of these processes, and hence all our reports, rely on memory. We cannot report instantaneously; if we think a thought was spontaneous it’s because we don’t remember any relevant antecedents; but how can we exclude the possibility that we merely forgot them? I think this problem radically undermines our certainty about spontaneous thoughts. Things get worse when we remember the possibility that instead of two separate thought processes, we have one that alternates roles. Maybe when driving we do give conscious attention to all our decisions; but our mind switches back and forth between that and other matters that are more memorable; after the journey we find we have instantly forgotten all the boring stuff about navigating the route and are surprised that we seem to have done it thoughtlessly. Why should it not be the same with other thoughts? Perhaps we have a nagging worry about X which we keep spending a few moments’ thought on between episodes of more structured and memorable thought about something else; then everything but our final alarming conclusion about X gets forgotten and the conclusion seems to have popped out of nowhere.

We can’t, in short, be sure that we ever have any spontaneous thoughts; moreover, we can’t be sure that there are any subconscious thoughts. We can never tell the difference, from the inside, between a thought presented by our subconscious, and one we worked up entirely in intermittent and instantly-forgotten conscious mode. Perhaps whole areas of our thought never get connected to memory at all.

That does suggest that using fMRI was a good idea; if the problem is insoluble in first-person terms maybe we have to address it on a third-person basis. It’s likely that we would pick up some neuronal indications of switching if thought really alternated the way I’ve suggested. Likely, but not guaranteed; after all, a novel manages to switch back and forth between topics and points of view without moving to different pages. One thing is definitely clear: when Noë pointed out that this is more difficult than it may appear, he was absolutely right.

Hard Problem not Easy

Antti Revonsuo has a two-headed paper in the latest JCS; at least it seems two-headed to me – he argues for two conclusions that seem to be only loosely related; both are to do with the Hard Problem, the question of how to explain the subjective aspect of experience.

The first is a view about possible solutions to the Hard Problem, and how it is situated strategically. Revonsuo concludes, basically, that the problem really is hard, which obviously comes as no great surprise in itself. His case is that the question of consciousness is properly a question for cognitive neuroscience, and that, equally, cognitive neuroscience has already committed itself to owning the problem; but at present no path from neural mechanisms up to conscious experience seems at all viable. A good deal of work has been done on the neural correlates of consciousness, but even if they could be fully straightened out it remains largely unclear how they are to furnish any kind of explanation of subjective experience.

The gist of that is probably right, but some of the details seem open to challenge. First, it’s not at all clear to me that consciousness is owned by cognitive neuroscience; rather, the usual view is that it’s an intensely inter-disciplinary problem; indeed, that may well be part of the reason it’s so difficult to get anywhere. Second, it’s not that clear how strongly committed cognitive neuroscience is to the Hard Problem. Consciousness, fair enough; consciousness is indeed irretrievably one of the areas addressed by cognitive neuroscience. But consciousness is a many-splendoured thing, and I think cognitive neuroscientists still have the option of ignoring or being sceptical about some of the fancier varieties, especially certain conceptions of the phenomenal experience which is the subject of the Hard Problem. It seems reasonable enough that you might study consciousness in the Easy Problem sense – the state of being conscious rather than unconscious, we might say – without being committed to a belief in ineffable qualia, let alone to providing a neurological explanation of them.

The second conclusion is about extended consciousness; theories that suggest conscious states are not simply states of the brain, but are partly made up of elements beyond our skull and our skin. These theories too, it seems, are not going to give us a quick answer in Revonsuo’s opinion – or perhaps any answer. Revonsuo invokes the counter example of dreams. During dreams, we appear to be having conscious experiences; yet the difference between a dream state and an unconscious state may be confined to the brain; in every other respect our physical situation may be identical. This looks like strong evidence that consciousness is attributable to brain states alone.

Once, Revonsuo acknowledges, it was possible to doubt whether dreams were really experiences; it could have been that they were false memories generated only at the moment of awakening; but he holds that research over recent years has eliminated this possibility, establishing that dreams happen over time, more or less as they seem to.

The use of dreams in this context is not a new tactic, and Revonsuo quotes Alva Noë’s counter-argument, which consists of three claims intended to undermine the relevance of dreams: first, dream experiences are less rich and stable than normal conscious experiences; second, dream seeing is not real seeing; and third, all dream experiences depend on prior real experiences. Revonsuo more or less gives a flat denial of the first, suggesting that the evidence is thin to non-existent: Noë just hasn’t cited enough of it. He thinks the second presupposes that experiences without external content are not real experiences, which is question-begging: just because I’m seeing a dreamed object, does that mean I’m not really seeing? On the third point he has two counter-arguments. Even if all dreams recall earlier waking experiences, they are still live experiences in themselves, not just empty recall – but in any case, the claim isn’t true; people who are congenitally paraplegic have dreams of walking, for example.

I think Revonsuo is basically right, but I’m not sure he has absolutely vanquished the extended mind. For his dream argument to be a real clincher, the brain state of dreaming of seeing a sheep and the brain state of actually seeing a sheep have to be completely identical, or rather, potentially identical. This is quite a strong claim to make, and whatever the state of the academic evidence, I’m not sure how well it stands up to introspective examination. We know that we often take dreams to be real when we are having them, and in fact do not always or even generally realise that a dream is a dream: but looking back on it, isn’t there a difference of quality between dream states and waking states? I’m strongly tempted to think that while seeing a sheep is just seeing a sheep, the corresponding dream is about seeing a sheep, a little like seeing a film, one level higher in abstraction. But perhaps that’s just my dreams?

Headless Consciousness

There have been a number of attempts recently to externalise consciousness, or at least extend it beyond the skull. In Out of Our Heads, Alva Noë launches a very broad-based attack on the idea that it’s all about the brain, drawing in a wide range of interesting research – mostly relatively well-known stuff, but expounded in a style that is clear and very readable. Unfortunately I don’t find the arguments at all convincing; I’m not unsympathetic to extended-mind ideas, but Noë’s clear and thorough treatment tended if anything to remind me of reasons why the assumption that consciousness happens in the brain looks so attractive.

I’m happy to go along with Noë on some points: in his first chapter he launches a bit of a side-swipe at scanning technology, fMRI and PET, pugnaciously asking whether it is ‘the new phrenology?’ and deriding its limitations; this seems a useful corrective to me. But in chapter two we are brought up short by the assertion that bacteria are agents. They have interests, and pursue them, says Noë; they’re not just bags of chemicals responding to the presence of sugar. Within their limits we ought to accord them some sort of agency.

To me, proper agency requires an awareness of what acts one is performing, and the idea that bacteria could have it at any level seems absurd. How did we get here? Noë’s case seems to be that the problem of other minds is effectively insoluble on rational empirical grounds; we can never have really solid reasons for believing anyone else, or any other entity, is conscious; yet we find ourselves unable to entertain seriously the idea that our fellow-humans might be zombies. This, he thinks, is because we have a kind of built-in engagement, almost a kind of moral commitment. He wants to extend this to life fairly widely, and of course if brainless bacteria can have agency, it tends to show that the brain is a bit over-rated. I think he’s unnecessarily pessimistic about the evidence for other conscious minds; as a matter of fact, books like his are pretty spectacular evidence: how and why would human beings produce such volumes, examining the inner workings of consciousness in minute detail, if they didn’t have it? But bacteria have yet to produce such evidence in their own favour.

Noë rests a fair amount of weight on experiments which show the remarkable plasticity of the brain: notably he quotes experiments on ferrets by Mriganka Sur. New-born ferrets had their brains rewired in such a way that the eyes fed into the auditory, rather than the visual, cortex; yet they grew up able to use their eyes perfectly well. This shows, Noë suggests, that no particular part of the brain is required for vision. That might be so, but it does not in itself show that no brain at all will do the job, and obviously it won’t: if the ferrets’ optic nerves had been linked with their teeth, or left dangling unattached, they would surely have been unambiguously blind. The belief that consciousness is sustained by the brain does not commit us to the view that only one specific set of neurons can do the job. Noë explains, quoting an experiment with a rubber hand, how our sense of our selves and where we are can be moved around in a remarkably vivid way. For him, this shows that where the brain is doesn’t matter; but for others it seems equally likely to suggest that what the brain thinks is crucial and where our real hand is doesn’t matter at all.

Noë wants to claim that the frequently quoted thought-experiment of a brain in a vat, extracted from the body but still living and thinking, is impossible in principle (we know it’s impossible in practice, at least currently). He suggests that even if we did manage to sustain a brain artificially, the supporting vat, providing it with oxygenated blood and all the other complex kinds of support it would need, would actually amount to a new body. This nifty bit of redefinition is meant to show that the idea of a brain without a body will not fly. But the real point here is surely missed.  OK, let’s accept that the vat is a new body: that still means we can swap bodies while maintaining an individual consciousness. But if we keep the body and swap brains… it just seems impossible to believe that the consciousness wouldn’t go with the brain.

This perception seems to me to be the unshiftable bedrock of the discussion. Noë expounds effectively the case for regarding tools and even parts of the environment as parts of our mental apparatus; and he brings in Putnam’s argument that ‘meaning isn’t in the head’. But these arguments only serve to expand our conception of the brain-based mind, not undermine it.

My sympathy for Noë’s case returned to some degree when he discussed language. He notes that for Chomsky and others language seemed a miraculous accomplishment because they misconstrued it as an exercise in the formal decoding of a vast array of symbols. In fact, language is an activity rather than a purely intellectual exercise, and develops in a context, pragmatically.  I’d go along with that to a great extent, but while Noë sees it as proving that our decoding brain isn’t the crux of the matter after all, it seems to me it proves that decoding isn’t really what our brains are doing when they process (a word Noë would object to) language.

I sympathised even more with Noë when he attacked the idea that reality is, essentially, an illusion. If this were the case, the brain would be the all-powerful arbiter of reality (although it might seem that if the world is an illusion, the brain must be one too, and we should be dealing with a mind whose actual nature need not be pinkish biological glop). But he seemed to be back on weak ground when he concluded by taking on dreams. Dreams, after all, seem like the perfect evidence that the brain can produce conscious experience without calling on the senses or the body.  Noë argues that dreams are more limited than we think, that not all waking experiences can be reproduced in dreams, which are always shifting and inconstant. This might be true, but so what? If the brain can produce conscious experiences on its own – any conscious experiences – that seems to show that, with all caveats duly entered, the brain is still where it really happens.

It’s a well-written book, and for someone new to consciousness it would provide many excellent short sketches of thought-provoking experiments and arguments. But I’m staying in my head.