Ned Block has produced a meaty discussion of Philosophical Issues About Consciousness for The Encyclopedia of Cognitive Science.

There are special difficulties about writing encyclopedia entries on these topics because of the lack of consensus. There is substantial disagreement, not only about the answers, but about what the questions are, and even about how to frame and approach the subject of consciousness at all. It is still possible to soldier on responsibly, like the heroic Stanford Encyclopedia of Philosophy, doing your level best to be comprehensive and balanced. Authors may find themselves describing and critiquing many complex points of view that neither they nor the reader can take seriously for a moment; sometimes possible points of view (relying on fine and esoteric distinctions of a subtlety difficult even for professionals to grasp) that in point of fact no-one, living or dead, has ever espoused. This can get tedious. The other approach, to my mind, is epitomised by the Oxford Companion to the Mind, edited by Richard Gregory, whose policy seemed to be to gather as much interesting stuff as possible and worry about how it hung together later, if at all. If you tried to use the resulting volume as a work of reference you would usually come up with nothing, or with a quirky, stimulating take instead of the mainstream summary you really wanted; however, it was a cracking read, full of fascinating passages and endlessly browsable.

Luckily for us, Block’s piece seems to lean towards the second approach; he is mainly telling us what he thinks is true, rather than recounting everything anyone has said, or might have said. You might think, therefore, that he would start off with the useful and much-quoted distinction he himself introduced into the subject: between phenomenal, or p-consciousness, and access, or a-consciousness. Here instead he proposes two basic forms of consciousness: phenomenality and reflexivity. Phenomenality, the feel or subjective aspect of consciousness, is evidently fundamental; reflexivity is reflection on phenomenal experience. While the first seems to be possible without the second – we can have subjective experience without thinking about it, as we might suppose dogs or other animals do – reflexivity seems on this account to require phenomenality. It doesn’t seem that we could have a conscious creature with no sensory apparatus, one that simply sits quietly and – what? Invents set theory, perhaps, or metaphysics (why not?).

Anyway, the Hard Problem according to Block is how to explain a conscious state (especially phenomenality) in terms of neurology. In fact, he says, no-one has offered even a highly speculative answer, and there is some reason to think no satisfactory answer can be given.  He thinks there are broadly four naturalistic ways you can go: eliminativism; philosophical reductionism (or deflationism); phenomenal realism (or inflationism); or  dualistic naturalism.  The third option is the one Block favours. 

He describes inflationism as the belief that consciousness cannot be philosophically reduced. So while a deflationist expects to reduce consciousness to a redundant term with no distinct and useful meaning, an inflationist thinks the concept can’t be done away with. However, an inflationist may well believe that scientific reduction of consciousness is possible. So, for example, science has reduced heat to molecular kinetic energy; but this is an empirical matter; the concept of heat is not abolished. (I’m a bit uncomfortable with this example but you see what he’s getting at). Inflationists might also, like McGinn, think that although empirical reduction is possible, it’s beyond our mental capacities; or they might think it’s altogether impossible, like Searle (is that right or does he think we just haven’t got the reduction yet?).

Block mentions some leading deflationist views such as higher-order theories and representationism, but inflationists will think that all such theories leave out the thing itself, actual phenomenal experience. How would an empirical reduction help? So what if experience Q is neural state X? We’re not looking for an explanation of that identity – there are no explanations of identities – but rather an explanation of how something like Q could be something like X, an explanation that removes the sense of puzzlement. And there, we’re back at square one; nobody has any idea.

 So what do we do? Block thinks there is a way forward if we distinguish carefully between a property and the concept of a property. Different concepts can identify the same property, and this provides a neat analysis of the classic thought experiment of Mary the colour scientist. Mary knows everything science could ever tell her about colour; when she sees red for the first time does she know a new fact – what red is like? No; on this analysis she gains a new concept of a property she was already familiar with through other, scientific concepts. Thus we can exchange a dualism of properties for a dualism of concepts. That may be less troubling – a proliferation of concepts doesn’t seem so problematic – but I’m not sure it’s altogether trouble-free; for one thing it requires phenomenal concepts which seem themselves to need some demystifying explanation. In general though, I like what I take to be Block’s overall outlook; that reductions can be too greedy and that the world actually retains a certain unavoidable conceptual, perhaps ontological, complexity.
Moving off on a different tack, he notes recent successes in identifying neural correlates of experience. There is a problem, however: while we can say that a certain experience corresponds with a certain pattern of neuronal activity, that pattern (so far as we can tell) can recur without the conscious experience. What’s the missing ingredient? As a matter of fact I think it could be almost anything, given the limited knowledge we have of neurological detail: however, Block sees two families of possible explanation. Maybe it’s something like intensity or synchrony; or maybe it’s access (aha!): the way the activity is connected up with other bits of the brain that do memory or decision-making – let’s say with the global mental workspace, without necessarily committing to that being a distinct thing.
But these types of explanation embody different theoretical approaches; physicalism and functionalism respectively. The danger is that these may be theories of different kinds of consciousness. Physicalism may be after phenomenal consciousness, the inward experience, whereas functionalism has access consciousness, the sort that is about such things as regulating behaviour, in its sights. It might therefore be that researchers are sometimes talking past each other. Access consciousness is not reflexivity, by the way, although reflexivity might be seen as a special kind of access. Block counts phenomenality, reflexivity, and access as three distinct concepts.
Of course, either kind of explanation – physicalist or functionalist – implies that there’s something more going on than just plain neural correlates, so in a sense whichever way you go the real drama is still offstage. My instincts tell me that Block is doing things backwards; he should have started with access consciousness and worked towards the phenomenal. But as I say it is a meaty entry for an encyclopaedia, one I haven’t nearly done justice to; see what you make of it.

 


Can we, one day, understand how the neurology of the brain leads to conscious minds, or will that remain impossible?

Round here we mostly discuss the mind from a top-down, philosophical perspective; but there is another way, which is to begin by understanding the nuts and bolts and then gradually work up to more complex processes. This Scientific American piece gives a quick view of how research at the neuronal level is coming along (quite well, but with vastly more to do).

Is this ever going to tell us about consciousness, though? A point often quoted by pessimists is that we have had the complete ‘wiring diagram’ of the roundworm Caenorhabditis elegans for years (Caenorhabditis has only just over 300 neurons and they have all been mapped) but still cannot properly explain how it works. Apparently researchers have largely given up on this puzzle for now. Perhaps Caenorhabditis is just too simple; its nervous system might be quirky or use elegant but opaque tricks that make it particularly difficult to fathom. Instead researchers are using fruit fly larvae and other creatures with nervous systems that are simple enough to deal with, but large enough to suggest that they probably work in a generic way, one that is broadly standard for all nervous systems up to and including the human. With modern research techniques this kind of approach is yielding some actual progress.

How optimistic can we be, though? We can never understand the brain by knowing the simultaneous states of all its neurons, so the hope of eventual understanding rests on the neurology of the brain being legible at some level. We hope there will turn out to be functions that get repeated, that form building blocks of some intelligible structure; that we will be able to deduce rules or a kind of grammar which will let us see how things work on a slightly higher level of description.

This kind of structure is built into machines and programs; they are designed to be legible by human beings and lend themselves to reverse engineering. But the brain was not designed and is under no obligation to construct itself according to regular plans and principles. Our hope that it won’t turn out to be a permanently incomprehensible tangle rests on several possibilities.

First, the brain might just turn out to be legible. The computer metaphor encourages us to think that the brain must encode its information in regular ways (though the lack of anything strongly analogous to software is arguably a fly in the ointment). Perhaps we’ll just get lucky. When the structure of DNA was discovered, it really seemed as if we’d had a stroke of luck of this kind: what amounted to a long string of four repeated characters which, given certain conditions, could be read as coding for many different proteins; it looked as if we had a really clear, legible system of very general significance. It still does to a degree, but my impression is that the glad confident morning is over, and now the more we learn about genetics the more complex and messy it gets. But even if we take it that genetics is a perfect example of legibility, there’s no particular reason to think that the connectome will be as tractable as the genome.

The second reason to be cheerful is that legibility might flow naturally from function. That is, after all, pretty much what happens with organs other than the brain. The heart is not mysterious, because it has a clear function and its structure is very legible in engineering terms in the light of that function. The brain is a good deal more complex than that, but on the other hand we already know of neurons and groups of neurons that do intelligibly carry out functions in our sensory or muscular systems.

There are big problems when it comes to the higher cognitive functions though. First, we don’t already understand consciousness the way we understand pumps and levers. When it comes to the behaviour of fruit fly larvae, even, we can relate inputs and outputs to neural activity in a sensible way. For conscious thought it may be difficult to tell which neurons are doing it without already knowing what it is they’re doing. It helps a lot that people can tell us about conscious experience, though when it comes to subjective, qualitative experience we have to remember that Zombie Twin tells us about his experiences too, though he doesn’t have any. (Then again, since he’s the perfect counterpart of a non-zombie, how much does it matter?)

Second, conscious processing is clearly non-generic in a way that nothing else in our bodies appears to be. Muscle fibres contract, and one does it much like another. Our lungs oxygenate our blood, and there’s no important difference between bronchi. Even our gut behaves pretty generically; it copes magnificently with a bizarre variety of inputs, but it reduces them all to the same array of nutrients and waste.

The conscious mind is not like that. It does not secrete litres of undifferentiated thought, producing much the same stuff every day and whatever we feed it with. On the contrary, its products are minutely specific – and that is the whole point. The chances of our being able to identify a standard thought module, the way we can identify standard functions elsewhere, are correspondingly slight as a result.

Still, one last reason to be cheerful: one thing the human brain is exceptionally good at is intuiting patterns from observations; far better than it has any right to be. It’s not for nothing that ‘seeing’ is literally the verb for vision and metaphorically the verb for understanding. So exhibiting patterns of neural activity might just be the way to trigger that unexpected insight that opens the problem out…

I finally got round to seeing Split, the M. Night Shyamalan film (spoilers follow) about a problematic case of split personality, and while it’s quite a gripping film with a bravura central performance from James McAvoy, I couldn’t help feeling that in various other ways it was somewhere between unhelpful and irresponsible. Briefly, in the film we’re given a character suffering from Dissociative Identity Disorder (DID), the condition formerly known as ‘Multiple Personality Disorder’. The working arrangement reached by his ‘alters’, the different personalities inhabiting an unfortunate character called Kevin Wendell Crumb, has been disturbed by two of the darker alters (there are 23); Kevin kidnaps three girls and it gradually becomes clear that the ‘Beast’, a further (24th) alter, is going to eat them.

DID has a chequered and still somewhat controversial history. I discussed it at moderate length here (Oh dear, tempus fugit) about eleven years ago. One of the things about it is that its incidence is strongly affected by cultural factors. It’s very much higher in some countries than others, and the appearance of popular films or books about it seems to have a major impact, increasing the number of diagnoses in subsequent years. This phenomenon apparently goes right back to Jekyll and Hyde, an early fictional version which remains powerful in Anglophone culture. In fact Split itself draws on two notable features of Jekyll and Hyde: the ideas that some alters are likely to be wicked, and that they may differ in appearance and even size from the original. The number of cases in the US rose dramatically after the TV series Sybil, based on a real case, first aired (though subsequently doubts about the real-world diagnosis have emerged). It’s also probable that the popular view has been influenced by the persistent misunderstanding that schizophrenia means having a ‘split personality’ (it doesn’t, although it’s not unknown for DID patients to have schizophrenia too, and some ‘Schneiderian’ symptoms – voices, inserted thoughts – may confusingly arise from either condition).

One view is that while DID is undeniably a real mental condition, it is largely or wholly iatrogenic: caused by the doctors. On this view therapists trying to draw out alters for the best of reasons may simply be encouraging patients to confabulate them, or indeed the whole problem. If so, the cultural background may be very important in preparing the minds of patients (and indeed the minds of therapists: let’s be honest, psychologists watch stupid films too). So the first charge against Split is that it is likely to cause another spike in the number of DID cases.

Is that a bad thing, though? One argument is that cultural factors don’t cause the dissociative problems, they merely lead to more of them being properly diagnosed. One mainstream modern view sees DID as a response to childhood trauma; the sufferer generates a separate persona to deal with the intolerable pain. And often enough it works; we might see DID less as a mental problem and more as a strategy, often successful, for dealing with certain mental problems. There’s actually no need to reintegrate the alters, any more than you would try to homogenise any other personality; all you need to do is reach a satisfactory working arrangement. If that’s the case then making knowledge of DID more widely available might actually be a good thing.

That might be an arguable position, though we’d have to take some account of the potential for disruptive and amnesiac episodes that may come along with DID. However, Split can hardly be seen as making a valuable contribution to awareness because of the way it draws on Jekyll and Hyde tropes. First, there’s the renewed suggestion that alters usually include terrifically evil personalities. The central character in Split is apparently going to become a super-villain in a sequel. This will be a ‘grounded’ super; one whose powers are not attributable to the semi-magic effects of radiation or film-style mutation, but ‘realistically’ to DID. Putting aside the super powers, I don’t know of any evidence that people with DID have a worse criminal record than anyone else; if anything I’d guess that coping with their own problems leaves them no time or capacity for  embarking on crime sprees. But portraying them as inherently bad inevitably stigmatises existing patients and deters future diagnoses in ways that are surely offensive and unhelpful. It might even cause some patients to think that their alters have to behave badly in order to validate their diagnosis.

Of course, Hollywood almost invariably portrays mental problems as hidden superpowers. Autism makes you a mathematical genius; OCD means you’re really tidy and well-organised. But the suggestion that DID probably makes you a wall-climbing murderer is an especially negative one.  Zombies, those harmless victims of bizarre Caribbean brainwashing, possibly got a similarly negative treatment when they were transformed by Romero into brain-munching corpse monsters; but luckily I think that diagnosis is rare.

The other thing about Split is that it takes some of the wilder claims about the physical impact of DID and exaggerates them to absurdity. The psychologist in the film, Dr. Karen Fletcher, merely asserts that the switch between alters can change people’s body chemistry: fine, getting into an emotional state changes that. But it emerges that Kevin’s eyesight, size and strength all change with his alters: one of them even needs insulin injections while the others don’t (a miracle that the one who needs them ever managed to manifest consistently enough to get the medication prescribed). In his final monster incarnation he becomes bigger, more muscled, able to climb walls like a fly, and invulnerable to being shot in the chest at close range (we really don’t want patients believing in that one, do we?). Remarkable in the circumstances that his one female alter didn’t develop a bulging bosom.

Anyway, you may have noticed that Hollywood isn’t the only context in which zombies have been used for other purposes and dubious stories about personal identity told. In philosophy our problems with traditional agency and responsibility have led to widespread acceptance of attenuated forms of personhood; multiple draft people, various self-referential illusions, and epiphenomenal confabulations. These sceptical views of common-sense selfhood are often discussed in a relatively positive light, as yielding a kind of Buddhist insight, or bringing a welcome relief from moral liability; but I don’t think it’s too fanciful to fear that they might also create a climate that fosters a sense of powerlessness and depersonalisation. I’d be the last person to say that philosophers should self-censor, still less that they should avoid hypotheses that look true or interesting but are depressing. Nor am I suffering from the delusion that the public at large, or even academic psychologists, are waiting eagerly to hear what the philosophers think. But perhaps there’s room for slightly more awareness that these are not purely academic issues?

Marcin Milkowski has produced a short survey of arguments against computationalism; his aim is in fact to show that they all fail and that computationalism is likely to be both true and non-trivial. The treatment of each argument is very brief; the paper could easily be expanded into a substantial book. But it’s handy to have a comprehensive list, and he does seem to have done a decent job of covering the ground.

There are a couple of weaknesses in his strategy. One is just the point that defeating arguments against your position does not in itself establish that your position is correct. But in addition he may need to do more than he thinks. He says computationalism is the belief ‘that the brain is a kind of information-processing mechanism, and that information-processing is necessary for cognition’. But I think some would accept that as true in at least some senses while denying that information-processing is sufficient for consciousness, or characteristic of consciousness. There’s really no denying that the brain does computation, at least for some readings of ‘computation’ or some of the time. Indeed, computation is an invention of the human mind and arguably does not exist without it. But that does not make it essential. We can register numbers with our fingers, but while that does, in a sense, make a hand a digital calculator, digital calculation isn’t really what hands do, and if we’re looking for the secret of manipulation we need to look elsewhere.

Be that as it may, a review of the arguments is well worthwhile. The first objection is that computationalism is essentially a metaphor; Milkowski rules this out by definition, specifying that his claim is about literal computation. The second objection is that nothing in the brain seems to match the computational distinction between software and hardware. Rather than look for equivalents, Milkowski takes this one on the nose, arguing that we can have computation without distinguishing software and hardware. To my mind that concedes quite a large gulf separating brain activity from what we normally think of as computation.

No concessions are needed to dismiss the idea that computers merely crunch numbers; on any reasonable interpretation they do other things too by means of numbers, so this doesn’t rule out their being able to do cognition. More sophisticated is the argument that computers are strictly speaking abstract entities. I suppose we could put the case by saying that real computers have computerhood in the light of their resemblance to Turing machines, but Turing machines can only be approximated in reality because they have infinite tape and move between strictly discrete states, etc. Milkowski is impatient with this objection – real brains could be like real computers, which obviously exist – but reserves the question of whether computer symbols mean anything. Swatting aside the objection that computers are not biological, it’s this interesting point about meaning that Milkowski tackles next.

He approaches the issue via the Chinese Room thought experiment and the ‘symbol grounding problem’. Symbols mean things because we interpret them that way, but computers only deal with formal, syntactic properties of data; how do we bridge the gap? Milkowski does not abandon hope that someone will naturalise meaning effectively, and mentions the theories of Millikan and Dretske. But in  the meantime, he seems to feel we can accommodate some extra function to deal with meaning without having to give up the idea that cognition is essentially computational. That seems too big a concession to me, but if Milkowski set out his thinking in more depth it might perhaps be more appealing than it seems on brief acquaintance. Milkowski dismisses as a red herring Robert Epstein’s argument from the inability of the human mind to remember what’s on a dollar bill accurately (the way a computational mind would).

The next objection, derived from Gibson and Chemero, apparently says that people do not process information, they merely pick it up. This is not an argument I’m familiar with, so I might be doing it an injustice, but Milkowski’s rejection seems sensible; only on some special reading of ‘processing’ would it seem likely that people don’t process information.

Now we come to the argument that consciousness is not computational; that computation is just the wrong sort of process to produce consciousness. Milkowski traces it back to Leibniz’s famous mill argument; the moving parts of a machine can never produce anything like experience. Perhaps we could put in the same camp Brentano’s incredulity and modern mysterianism; Milkowski mentions Searle’s assertion that consciousness can only arise from biological properties, not yet understood. Milkowski complains that if accepted, this sort of objection seems to bar the way to any reductive explanation (some of his opponents would readily bite that bullet).

Next up is an objection that computer models ignore time; this seems again to refer to the discrete states of Turing machines, and Milkowski dismisses it similarly. Next comes the objection that brains are not digital. There is in fact quite a lot that could be said on either side here, but Milkowski merely argues that a computer need not be digital. This is true, but it’s another concession; his vision of brain computationalism now seems to be of analogue computers with no software; I don’t think that’s how most people read the claim that ‘the brain is a computer’. I think Milkowski is more in tune with most computationalists in his attitude to arguments of the form ‘computers will never be able to x’ where x has been things like playing chess. Historically these arguments have not fared well.

Can only people see the truth? This is how Milkowski describes the formal argument of Roger Penrose that only human beings can transcend the limitations which every formal system must have, seeing the truth of premises that cannot be proved within the system. Milkowski invokes arguments about whether this transcendent understanding can be non-contradictory and certain, but at this level of brevity the arguments can really only be gestured at.

The objection Milkowski gives most credit to is the claimed impossibility of formalising common sense. It is at least very difficult, he concedes, but we seem to be getting somewhere. The objection from common sense is a particular case of a more general one which I think is strong; formal processes like computation are not able to deal with the indefinite realms that reality presents. It isn’t just the Frame Problem; computation also fails with the indefinite ambiguity of meaning (the same problem identified for translation by Quine); whereas human communication actually exploits the polyvalence of meaning through the pragmatics of normal discourse, rich in Gricean implicatures.

Finally Milkowski deals with two radical arguments. The first says that everything is a computer; that would make computationalism true, but trivial. Well, says Milkowski, there’s computation and computation; the radical claim would make even normal computers trivial, which we surely don’t want. The other radical case is that nothing is really a computer, or rather that whether anything is a computer is simply a matter of interpretation. Again this seems too destructive and too sweeping. If it’s all a matter of interpretation, says Milkowski, why update your computer? Just interpret your trusty Windows Vista machine as actually running Windows 10 – or macOS, why not?

 

 

If there’s one thing philosophers of mind like more than an argument, it’s a rattling good yarn. Obviously we think of Mary the Colour Scientist, Zombie Twin (and Zimboes, Zomboids, Zoombinis…), the Chinese Room (and the Chinese Nation), Brain in a Vat, Swamp-Man, Chip-Head, Twin Earth and Schmorses… even papers whose content doesn’t include narratives at this celebrated level often feature thought-experiments that are strange and piquant. Obviously philosophy in general goes in for that kind of thing too – just think of the trolley problems that have been around forever but became inexplicably popular in the last year or so (I was probably force-fed too many at an impressionable age, and now I can’t face them – it’s like broccoli, really): but I don’t think there’s another field that loves a story quite like the Mind guys.

I’ve often alluded to the way novelists have been attacking the problems of minds by other means ever since the James Boys (Henry and William) set up their pincer movement on the stream of consciousness; and how serious novelists have from time to time turned their hand to exploring the theme of consciousness with clear reference to academic philosophy, sometimes even turning aside to debunk a thought experiment here and there. We remember philosophically considerable works of genuine science fiction such as Scott Bakker’s Neuropath. We haven’t forgotten how Ian McEwan and Sebastian Faulks in their different ways made important contributions to the field of Bogus but Totally Convincing Psychology with De Clérambault’s Syndrome and Glockner’s Isthmus, nor David Lodge’s book ‘Consciousness and the Novel’ and his novel Thinks. And philosophers have not been averse to writing the odd story, from Dan Lloyd’s novel Radiant Cool up to short stories by many other academics including Dennett and Eric Schwitzgebel.

So I was pleased to hear (via a tweet from Eric himself) of the inception of an unexpected new project in the form of the Journal of Science Fiction and Philosophy. The Journal ‘aims to foster the appreciation of science fiction as a medium for philosophical reflection’.   Does that work? Don’t science fiction and philosophy have significantly different objectives? I think it would be hard to argue that all science fiction is of philosophical interest (other than to the extent that everything is of philosophical interest). Some space opera and a disappointing amount of time travel narrative really just consists of adventure stories for which the SF premise is mere background. Some science fiction (less than one might expect) is actually about speculative science. But there is quite a lot that could almost as well be called Phifi as Scifi, stories where the alleged science is thinly or unconvincingly sketched, and simply plays the role of enabler for an examination of social, ethical, or metaphysical premises. You could argue that Asimov’s celebrated robot short stories fit into this category; we have no idea how positronic brains are supposed to work, it’s the ethical dilemmas that drive the stories.

There is, then, a bit of an overlap; but surely SF and philosophy differ radically in their aims? Fiction aims only to entertain; the ideas can be rubbish so long as they enable the monsters or, slightly better, boggle the mind, can’t they? Philosophy uses stories only as part of making a definite case for the truth of particular positions, part of an overall investigative effort directed, however indirect the route, at the real world? There’s some truth in that, but the line of demarcation is not sharp. For one thing, successful philosophers write entertainingly; I do not think either Dennett or Searle would have achieved recognition for their arguments so easily if they hadn’t been presented in prose clear enough for non-academic readers to  understand, and well-crafted enough to make them enjoy the experience.  Moreover, philosophy doesn’t have to present the truth; it can ask questions or just try to do some of that  mind boggling. Myself when I come to read a philosophical paper I do not expect to find the truth (I gave up that kind of optimism along with the broccoli): my hopes are amply fulfilled if what I read is interesting. Equally, while fiction may indeed consist of amusing lies, novelists are not indifferent to the truth, and often want to advance a hypothesis, or at least, have us entertain one.

I really think some gifted novelist should take the themes of the famous thought-experiments and attempt to turn them into a coherent story. Meantime, there is every prospect that the new journal represents not dumbing down but wising up, and I for one welcome our new peer-reviewers.

Does the unconscious exist? David B. Feldman asks, and says no. He points out that the unconscious and its influence are a cornerstone of Freudian and other theories, where it is cited as the explanation for our deeper motivations and our sometimes puzzling behaviour. It may send messages through dreams and other hints, but we have no direct access to it and cannot read its thoughts, even though they may heavily influence our personality.

Freud’s status as an authority is perhaps not what it once was, but the unconscious is widely accepted as a given, pretty much part of our everyday folk-psychology understanding of our own minds. I think if you asked, a majority of people would say they had had direct experience of their own unconscious knowledge or beliefs affecting the way they behaved.  Many psychological experiments have demonstrated ‘priming’ effects, where the subject’s choices are affected by things they have been told or shown previously (although some of these may be affected by the reproducibility problems that have beset psychological research recently, I don’t think the phenomenon of priming in general can be dismissed). Nor is it a purely academic matter. Unconscious bias is generally held to be a serious problem, responsible for the perpetuation of various kinds of discrimination by people who at a conscious level are fair-minded and well-meaning.

Feldman, however, suggests that the unconscious is neither scientifically testable nor logically sound.  It may well be true that psychoanalytic explanations are scientifically slippery; mistaken predictions about a given subject can always be attributed to a further hidden motivation or complex, so that while one interpretation can be proved false, the psychoanalytic model overall cannot be.  However, more generally there is good scientific evidence for unconscious influences on our behaviour as I’ve mentioned, so perhaps it depends on what kind of unconscious we’re talking about.  On the logical front, Feldman suggests that the unconscious is an ‘homunculus’; an example of the kind of explanation that attributes some mental functions to ‘a little man in your head’, a mental module that is just assumed to be able to do whatever a whole brain can do. He quite rightly says that homuncular theories merely defer explanation in a way which is most often useless and unjustified.

But is he right? On one hand people like Dennett, as we’ve discussed in the past, have said that homuncular arguments may be alright with certain provisos; on the other hand, is it clear that the unconscious really is an homuncular entity? The key question, I think, is whether the unconscious is an entity that is just assumed to do all the sorts of things a complete mind would do. If we stick to Freud, Feldman’s charges may have substance; the unconscious seems to have desires and motivations, emotions, and plans; it understands what is going on in our lives pretty well and can make intelligently targeted interventions and encode messages in complex ways. In a lot of ways it is like a complete person – or rather, like three people: id, ego, and superego. A Freudian might argue over that; however, in the final analysis it’s not the decisive issue because we’re not bound to stick to a Freudian or psychoanalytic reading of the unconscious anyway. Again, it depends what kind of unconscious we’re proposing. We could go for a much simpler version which does some basic things for us but at a level far below that of a real homunculus. Perhaps we could even speak loosely of an unconscious if it were no more than the combined effect of many separate mental features?

In fact, Feldman accepts all this. He is quite comfortable with our doing things unconsciously; he merely denies the existence of the unconscious as a distinct coherent thinking entity. He uses the example of driving along a familiar route; we perform perfectly, but afterwards cannot remember doing the steering or changing gear at any stage. Myself I think this is actually a matter of memory, not inattention while driving – if we were stopped at any point in the journey I don’t think we would have to snap out of some trance-like state; it’s just that we don’t remember. But in general Feldman’s position seems entirely sensible.

There is actually something a little odd in the way we talk about unconsciousness. Virtually everything is unconscious, after all. We don’t remark on the fact that muscles or the gut do their job without being conscious; it’s the unique presence of consciousness in mental activity that is worthy of mention. So why do we even talk about unconscious functions, let alone an unconscious?

Neurocritic asks a great question here, neatly provoking that which he would have defined – thought. What is thought, and what are individual thoughts? He quotes reports that we have an estimated 70,000 thoughts a day and justly asks how on earth anyone knows. How can you count thoughts?

Well, we like a challenge round here, so what is a thought? I’m going to lay into this one without showing all my working (this is after all a blog post, not a treatise), but I hope to make sense intermittently. I will start by laying it down axiomatically that a thought is about or of something. In philosophical language, it has intentionality. I include perceptions as thoughts, though more often when we mention thoughts we have in mind thoughts about distant, remembered or possible things rather than ones that are currently present to our senses. We may also have in mind thoughts about perceptions or thoughts about other thoughts – in the jargon, higher-order thoughts.

Now I believe we can say three things about a thought at different levels of description. At an intuitive level, it has content. At a psychological level it is an act of recognition; recognition of the thing that forms the content. And at a neural level it is a pattern of activity reliably correlated with the perception, recollection, or consideration of the thing that forms the content; recognition is exactly this chiming of neural patterns with things (What exactly do I mean by ‘chiming of neural patterns’? No time for that now, move along please!). Note that while a given pattern of neural activity always correlates with one thought about one thing, there will be many other patterns of neural activity that correlate with slightly different thoughts about that same thing – that thing in different contexts or from different aspects. A thought is not uniquely identifiable by the thing it is about (we could develop a theory of broader content which would uniquely identify each thought, but that would have weird consequences so let’s not). Note also that these ‘things’ I speak of may be imaginary or abstract entities as well as concrete physical objects: there are a lot of problems connected with that which I will ignore here.

So what is one thought? It’s pretty clear intuitively that a thought may be part of a sequence which itself would also normally be regarded as a thought. If I think about going to make a cup of tea I may be thinking of putting the kettle on, warming the pot, measuring out the tea, and so on; I’ve had several thoughts in one way but in another the sequence only amounts to a thought about making tea. I may also think about complex things; when I think of the teapot I think of handle, spout, and so on. These cases are different in some respects, though in my view they use the same mechanism of linking objects of thought by recognising an over-arching entity that includes them. This linkage by moving up and down between recognition of larger and smaller entities is in my view what binds a train of thought together. Sitting here I perceive a small sensation of thirst, which I recognise as a typical initial stage of the larger idea of having a drink. One recognisable part of having a drink may be making the tea, part of which in turn involves the recognisable actions of standing up, going to the kitchen… and so on. However, great care must be taken here to distinguish between the things a thought contains and the things it implies. If we allow implication then every thought about a cup of tea implies an indefinitely expanding set of background ideas and every thought has infinite content.
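
Purely as a toy illustration of my own (nothing in the post suggests any particular formalism), here is a sketch of a thought as a recognised entity that links smaller recognised elements – the things it contains, as opposed to the things it merely implies:

from dataclasses import dataclass, field

@dataclass
class Thought:
    content: str
    parts: list["Thought"] = field(default_factory=list)

    def elements(self):
        """Yield everything the thought contains, walking down through its parts."""
        yield self.content
        for part in self.parts:
            yield from part.elements()

making_tea = Thought("make a cup of tea", [
    Thought("put the kettle on"),
    Thought("warm the pot"),
    Thought("measure out the tea"),
])

# One thought in one way; several smaller thoughts in another.
print(list(making_tea.elements()))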

Nevertheless, the fact that sequences can be amalgamated suggests that there is no largest possible thought. We can go on adding more elements. There’s a strong analogy here with the formation of sentences when speaking or writing. A thought or a sentence tends to run to a natural conclusion after a while, but this seems to arise partly because we run out of mental steam, and partly because short thoughts and short sentences are more manageable and can together do anything that longer ones can do. In principle a sentence could go on indefinitely, and so could a thought. Indeed, since the thread of relevance is weakened but not (we hope) lost at each junction between sentences or thoughts, we can perhaps regard whole passages of prose as embodying a single complex thought. The Decline and Fall of the Roman Empire is arguably a single massively complicated thought that emerged from Gibbon’s brain over an unusually extended period, having first sprung to mind as he ‘sat musing amidst the ruins of the Capitol, while the barefoot friars were singing vespers in the Temple of Jupiter’.

Parenthetically I throw in the speculation that grammatical sentence structure loosely mirrors the structure of thought; perhaps particular real world grammars emerge from the regular bashing together of people’s individual mental thought structures, with all the variable compromise and conventionalisation that that would involve.

Is there a smallest possible thought? If we can go on putting thoughts together indefinitely, like more and more complex molecules, is there a level at which we get down to thoughts like atoms, incapable of further division without destruction?

As we enter this territory, we walk among the largely forgotten ruins of some grand projects of the past. People as clever as Leibniz once thought we might manage to define a set of semantic primitives, basic elements out of which all thoughts must be built. The idea, intuitively, was roughly that we could take the dictionary and define each word in terms of simpler ones; then define the words in the definitions in ones that were simpler still, until we boiled everything down to a handful of basics which we sort of expected to be words encapsulating elementary concepts of physics, ethics, maths, and so on.

Of course, it didn’t work. It turns out that the process of definition is not analytical but expository. At the bottom level our primitives turn out to contain concepts from higher layers; the universe by transcendence and slippery lamination eludes comprehensive categorisation. As Borges said:

It is clear that there is no classification of the Universe that is not arbitrary and full of conjectures. The reason for this is very simple: we do not know what kind of thing the universe is. We can go further; we suspect that there is no universe in the organic, unifying sense of that ambitious word.

That doesn’t mean there is no smallest thought in some less ambitious sense. There may not be primitives, but to resurrect the analogy with language, there might be words.  If, as I believe, thoughts correlate with patterns of neural activity, it follows that although complex thoughts may arise from patterns that evolve over minutes or even years (like the unimaginably complex sequence of neural firing that generated Gibbon’s masterpiece), we could in principle look at a snapshot and have our instantaneous smallest thought.

It still isn’t necessarily the case that we could count atomic thoughts. It would depend whether the brain snaps smartly between one meaningful pattern and another, as indeed language does between words, or smooshes one pattern gradually into another. (One small qualification to that is that although written and mental words seem nicely separated,  in spoken language the sound tends to be very smooshy.) My guess is that it’s more like the former than the latter (it doesn’t feel as if thinking about tea morphs gradually into thinking about boiling water, more like a snappy shift from one to the other), but it is hard to be sure that that is always the case. In principle it’s a matter that could be illuminated or resolved by empirical research, though that would require a remarkable level of detailed observation. At any rate no-one has counted thoughts this way yet and perhaps they never will.


This is completely off-topic, and a much more substantial piece than my usual posts. However, this discussion at Aeon prompted me to put forward some thoughts on similar issues which I wrote a while ago. I hope this is interesting, but in any case normal service will resume in a couple of days…


Debt problems beset the modern world,  from unpayable mortgages and the banking crises they precipitate, through lives eroded by unmanageable loans, the sovereign debt problems that have threatened the stability of Europe, to the vast interest repayments that quietly cancel out much of the aid given to some developing countries. Debt is arguably the modern economic problem. It is the millstone round our necks; yet we are not, it seems, to blame those who put it there. Debtors are not seen as the victims of a poorly designed, one-sided deal, but as the architects of their own prison. It is widely accepted that they face an absolute moral duty to pay up, irrespective of capacity or consequences. The debtor now bears all the blame, although once it would have been the lenders who were shamed.
That would have been so because usury was once accounted a sin, and while the word now implies extortionate terms, in those days it simply meant the lending of money at interest – any rate of interest.  The moral intuition behind that judgement is clear – if you gave some money, you were entitled to the same amount back: no less, no more. Payment and repayment should balance: if you demanded interest, you were taking something for nothing. The lenders did no work, added no new goods to the world, and suffered no inconvenience while the gold was out of their counting house. Indeed, while someone else held their money  they were freed from the nagging fear of theft and the cost and inconvenience of guarding their gold. Why should they profit?

From a twenty-first century perspective that undeniably seems naive. Interest is such a basic part of the economic technology underlying the modern world that to give it up appears mad: we might as well contemplate doing without electricity. The very word ‘usury’ has a fustian, antiquarian sound, with some problematic associations lurking in the background. An exploded concept, then, an archaic word; a sin we’re well rid of?

Yet our problems with debt surely suggest that there is an element of truth lurking in the older consensus after all; that there is a need for a strong concept of improper lending.  Isn’t there after all something wrong with a view that blames only one party to a lending plan that has gone disastrously off track? Shouldn’t the old sin now be raised from its uneasy sleep: shouldn’t usury, suitably defined, be anathematised once more, as it was in earlier times?

Where is consciousness? It’s out there, apparently, not in here. There has been an interesting dialogue series going on between Riccardo Manzotti and Tim Parks in the NYRB (thanks to Tom Clark for drawing my attention to it). The separate articles are not particularly helpfully laid out or linked to each other; the series is:

http://www.nybooks.com/daily/2016/11/21/challenge-of-defining-consciousness/
http://www.nybooks.com/daily/2016/12/08/color-of-consciousness/
http://www.nybooks.com/daily/2016/12/30/consciousness-does-information-smell/
http://www.nybooks.com/daily/2017/01/26/consciousness-the-ice-cream-problem/
http://www.nybooks.com/daily/2017/02/22/consciousness-am-i-the-apple/
http://www.nybooks.com/daily/2017/03/16/consciousness-mind-in-the-whirlwind/
http://www.nybooks.com/daily/2017/04/20/consciousness-dreaming-outside-our-heads/
http://www.nybooks.com/daily/2017/05/11/consciousness-the-body-and-us/
http://www.nybooks.com/daily/2017/06/17/consciousness-whos-at-the-wheel/

We discussed Manzotti’s views back in 2006, when with Honderich and Tonneau he represented a new wave of externalism. His version seemed to me perhaps the clearest and most attractive back then (though I think he’s mistaken). He continues to put some good arguments.

In the first part, Manzotti says consciousness is awareness, experience. It is somewhat mysterious – we mustn’t take for granted any view about a movie playing in our head or the like – and it doesn’t feature in the scientific account. All the events and processes described by science could, it seems, go on without conscious experience occurring.

He is scathing, however, about the view that consciousness is therefore special (surely something that science doesn’t account for can reasonably be seen as special?), and he suggests the word “mental” is a kind of conceptual dustbin for anything we can’t accommodate otherwise. He and Parks describe the majority of views as internalist, dedicated to the view that one way or another neural activity just is consciousness. Many neural correlates of consciousness have been spotted, says Manzotti, but correlates ain’t the thing itself.

In the second part he tackles colour, one of the strongest cards in the internalist hand. It looks to us as if things just have colour as a simple property, but in fact the science of colour tells us it’s very far from being that simple. For one thing how we perceive a colour depends strongly on what other colours are adjacent; Manzotti demonstrates this with a graphic where areas with the same RGB values appear either blue or green. Examples like this make it very tempting to conclude that colour is constructed in the brain, but Manzotti boldly suggests that if science and ordinary understanding are at odds, so much the worse for science. Maybe we ought to accept that those colours really are different, and be damned to RGB values.
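
For what it’s worth, here is a minimal sketch of my own (not Manzotti’s actual graphic; it assumes the Pillow imaging library, and the particular colours are arbitrary) of how that kind of demonstration can be generated: two patches with identical RGB values set on different surrounds, which viewers typically report as differently coloured.

from PIL import Image, ImageDraw

# Build a 400x200 image split into two contrasting surrounds.
img = Image.new("RGB", (400, 200))
draw = ImageDraw.Draw(img)
draw.rectangle([0, 0, 199, 199], fill=(255, 150, 0))    # warm orange field
draw.rectangle([200, 0, 399, 199], fill=(0, 80, 200))   # cool blue field

# Two central patches with exactly the same RGB triple.
patch = (128, 128, 128)
draw.rectangle([75, 75, 124, 124], fill=patch)
draw.rectangle([275, 75, 324, 124], fill=patch)

# The saved image shows identical pixel values that nevertheless look different in context.
img.save("contrast_demo.png")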

The third dialogue attacks the metaphor of a computer often applied to the brain, and rejects talk of information processing. Information is not a physical thing, says Manzotti, and to speak of it as though it were a visible fluid passing through the brain risks dualism – something Tononi, with his theory of integrated information, accepts: he agrees that his ideas about information having two aspects point that way.

So what’s a better answer? Manzotti traces externalist ideas back to Aristotle, but focuses on the more recent ideas of affordances and enactivism. An affordance is roughly a possibility offered to us by an object; a hammer offers us the possibility of hitting nails. This idea of bashing things does not need to be represented in the head, because it is out there in the form of the hammer. Enactivism develops a more general idea of perception as action, but runs into difficulties in some cases such as that of dreams, where we seem to have experience without action; or consider that licking a strawberry or a chocolate ice cream is the same action but yields very different experiences.

To set out his own view, Manzotti introduces the ‘metaphysical switchboard’: one switch toggles whether subject and object are separate, the other whether the subject is physical or not. If they’re separate, and we choose to make the subject non-physical, we get something like Cartesian dualism, with all the problems that entails. If we select ‘physical’ then we get the view of modern science; and that too seems to be failing. If subject and object are neither separate nor physical, we get Berkeleyan idealism; my perceptions actually constitute reality. The only option that works is to say that subject and object are identical, but physical; so when I see an apple, my experience of it is identical with the apple itself. Parks, rightly I think, says that most people will find this bonkers at first sight. But after all, the apple is the only thing that has apple-like qualities! There’s no appliness in my brain or in my actions.

This raises many problems. My experience of the apple changes according to conditions, yet the apple itself doesn’t change. Oh no? says Manzotti, why not? You’re just clinging to the subject/object distinction; let it go and there’s no problem. OK, but if my experience of the apple is identical with the apple, and so is yours, then our experiences must be identical. In fact, since subject and object are the same, we must also be identical!

The answer here is curious. Manzotti points out that the physical quality of velocity is relative to other things; you may be travelling at one speed relative to me but a different one compared to that train going by. In fact, he says, all physical qualities are relative, so the apple is an apple experience relative to one animal (me) and at the same time relative to another in a different way. I don’t think this ingenious manoeuvre ultimately works; it seems Manzotti is introducing an intermediate entity of the kind he was trying to dispel; we now have an apple-experience relative to me which is different from the one relative to you. What binds these and makes them experiences of the same apple? If we say nothing, we fall back into idealism; if it’s the real physical apple, then we’re more or less back with the traditional framework, just differently labelled.

What about dreams and hallucinations? Manzotti holds that they are always made up out of real things we have previously experienced. Hey, he says, if we just invent things and colour is made in the head, how come we never dream new colours? He argues that there is always an interval between cause and effect when we experience things; given that, why shouldn’t real things from long ago be the causes of dreams?

And the self, that other element in the traditional picture? It’s made up of all the experiences, all the things experienced, that are relative to us; all physical, if a little scattered and dare I say metaphysically unusual; a massive conjunction bound together by… nothing in particular? Of course the body is central, and for certain feelings, or for when we’re in a dark, silent room, it may be especially salient. But it’s not the whole thing, and still less is the brain.

In the latest dialogue, Manzotti and Parks consider free will. For Manzotti, having said that you are the sum of your experiences, it is straightforward to say that your decisions are made by the subset of those experiences that are causally active; nothing that contradicts determinist physics, but a reasonable sense in which we can say your act belonged to you. To me this is a relatively appealing outlook.

Overall? Well, I like the way externalism seeks to get rid of all the problems with mediation that lead many people to think we never experience the world, only our own impressions of it. Manzotti’s version is particularly coherent and intelligible. I’m not sure his clever relativity finally works though. I agree that experience isn’t strictly in the brain, but I don’t think it’s in the apple either; to talk about its physical location is just a mistake. The processes that give rise to experience certainly have a location, but in itself it just doesn’t have that kind of property.

Another strange sidelight on free will. Among the most-discussed findings in the field is Libet’s celebrated research, which found that Readiness Potentials (RPs) in the brain showed when a decision to move had been made, significantly before the subject was aware of having decided. Libet himself thought this was problematic for free will, but that we could still have ‘Free Won’t’ – we could still change our minds after the RP had appeared and veto the planned movement.

A new paper (discussed here by Jerry Coyne) follows up on this, and seems to show that while we do have something like this veto facility, there is a time limit on that too, and beyond a certain point the planned move will be made regardless.

The actual experiment was in three phases. Subjects were given a light and a pedal and set up with equipment to detect RPs in their brain. They were told to press the pedal at a time of their choosing when the light was green, but not when it had turned red. The first run merely trained the equipment to detect RPs, with the light turning red randomly. In the second phase, the light turned red when an RP was detected, so that the subjects were in effect being asked to veto their own decision to press. In the third phase, they were told that their decisions were being predicted and they were asked to try to be unpredictable.

Detection of RPs actually took longer in some instances than others. It turned out that where the RP was picked up early enough, subjects could exercise the veto; but once the move was 200ms or less away, it was impossible to stop.
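
Just to make the reported logic concrete, here is a toy sketch (my own, not the researchers’ code; the only number taken from the report is the roughly 200ms point of no return) of how the veto window works:

import random

POINT_OF_NO_RETURN_MS = 200  # reported threshold; beyond this the press happens anyway

def trial(detection_lead_ms):
    """detection_lead_ms: how long before the planned press the RP was picked up."""
    # In the second phase the light turns red as soon as the RP is detected;
    # whether the press can still be cancelled depends on the time remaining.
    if detection_lead_ms > POINT_OF_NO_RETURN_MS:
        return "vetoed"
    return "pressed anyway"

# Simulate a batch of trials with variable detection lead times (illustrative values only).
random.seed(0)
print([trial(random.uniform(50, 600)) for _ in range(10)])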

What does this prove, beyond the bare facts of the results? Perhaps not much. The conditions of the experiment are very strange and do not resemble everyday decision-making very much at all. It was always an odd feature of Libet’s research that subjects were asked to get ready to move but choose the time capriciously according to whim; not a mental exercise that comes up very often in real life. In the new research, subjects further have to stop when the light is red; they don’t, you notice, choose to veto their move, but merely respond to a pre-set signal. Whether this deserves to be called free won’t is debatable; it isn’t a free decision making process. How could it be, anyway; how could it be that deciding to do something takes significantly longer than deciding not to do the same thing? Is it that decisions to move are preceded by an RP, but other second-order decisions about those decisions are not? We seem to be heading into a maze of complications if we go that way and substantially reducing the significance of Libet’s results.

Of course, if we don’t think that Libet’s results dethrone free will in the first place, we need not be very worried. My own view is that we need to distinguish between making a conscious decision and becoming aware of having made the decision. Some would argue that that second-order awareness is essential to the nature of conscious thought, but I don’t think so. For me Libet’s original research showed only that deciding and knowing you’ve decided are distinct, and the latter naturally follows after the former. So assuming that, like me, you think it’s fine to regard the results of certain physical processes as ‘free’ in a useful sense, free will remains untouched. If you were always a sceptic then of course Libet never worried you anyway, and nor will the new research.