If there’s one thing philosophers of mind like more than an argument, it’s a rattling good yarn. Obviously we think of Mary the Colour Scientist, Zombie Twin (and Zimboes, Zomboids, Zoombinis…), the Chinese Room (and the Chinese Nation), Brain in a Vat, Swamp-Man, Chip-Head, Twin Earth and Schmorses… even papers whose content doesn’t include narratives at this celebrated level often feature thought-experiments that are strange and piquant. Of course philosophy in general goes in for that kind of thing too – just think of the trolley problems that have been around forever but became inexplicably popular in the last year or so (I was probably force-fed too many at an impressionable age, and now I can’t face them – it’s like broccoli, really): but I don’t think there’s another field that loves a story quite like the Mind guys.

I’ve often alluded to the way novelists have been attacking the problems of minds by other means ever since the James Boys (Henry and William) set up their pincer movement on the stream of consciousness; and how serious novelists have from time to time turned their hand to exploring the theme of consciousness with clear reference to academic philosophy, sometimes even turning aside to debunk a thought experiment here and there. We remember philosophically considerable works of genuine science fiction such as Scott Bakker’s Neuropath. We haven’t forgotten how Ian McEwan and Sebastian Faulks in their different ways made important contributions to the field of Bogus but Totally Convincing Psychology with De Clérambault’s Syndrome and Glockner’s Isthmus, nor David Lodge’s book ‘Consciousness and the Novel’ and his novel Thinks. And philosophers have not been averse to writing the odd story, from Dan Lloyd’s novel Radiant Cool to short stories by many other academics, including Dennett and Eric Schwitzgebel.

So I was pleased to hear (via a tweet from Eric himself) of the inception of an unexpected new project in the form of the Journal of Science Fiction and Philosophy. The Journal ‘aims to foster the appreciation of science fiction as a medium for philosophical reflection’. Does that work? Don’t science fiction and philosophy have significantly different objectives? I think it would be hard to argue that all science fiction is of philosophical interest (other than to the extent that everything is of philosophical interest). Some space opera and a disappointing amount of time travel narrative really just consist of adventure stories for which the SF premise is mere background. Some science fiction (less than one might expect) is actually about speculative science. But there is quite a lot that could almost as well be called Phifi as Scifi: stories where the alleged science is thinly or unconvincingly sketched, and simply plays the role of enabler for an examination of social, ethical, or metaphysical premises. You could argue that Asimov’s celebrated robot short stories fit into this category; we have no idea how positronic brains are supposed to work, and it’s the ethical dilemmas that drive the stories.

There is, then, a bit of an overlap; but surely SF and philosophy differ radically in their aims? Fiction aims only to entertain; the ideas can be rubbish so long as they enable the monsters or, slightly better, boggle the mind, can’t they? Philosophy uses stories only as part of making a definite case for the truth of particular positions, part of an overall investigative effort directed, however indirect the route, at the real world? There’s some truth in that, but the line of demarcation is not sharp. For one thing, successful philosophers write entertainingly; I do not think either Dennett or Searle would have achieved recognition for their arguments so easily if they hadn’t been presented in prose clear enough for non-academic readers to understand, and well-crafted enough to make them enjoy the experience. Moreover, philosophy doesn’t have to present the truth; it can ask questions or just try to do some of that mind-boggling. Myself, when I come to read a philosophical paper I do not expect to find the truth (I gave up that kind of optimism along with the broccoli): my hopes are amply fulfilled if what I read is interesting. Equally, while fiction may indeed consist of amusing lies, novelists are not indifferent to the truth, and often want to advance a hypothesis, or at least have us entertain one.

I really think some gifted novelist should take the themes of the famous thought-experiments and attempt to turn them into a coherent story. Meantime, there is every prospect that the new journal represents not dumbing down but wising up, and I for one welcome our new peer-reviewers.

Does the unconscious exist? David B. Feldman asks, and says no. He points out that the unconscious and its influence are a cornerstone of Freudian and other theories, where the unconscious is invoked as the explanation for our deeper motivation and our sometimes puzzling behaviour. It may send messages through dreams and other hints, but we have no direct access to it and cannot read its thoughts, even though they may heavily influence our personality.

Freud’s status as an authority is perhaps not what it once was, but the unconscious is widely accepted as a given, pretty much part of our everyday folk-psychology understanding of our own minds. I think if you asked, a majority of people would say they had had direct experience of their own unconscious knowledge or beliefs affecting the way they behaved.  Many psychological experiments have demonstrated ‘priming’ effects, where the subject’s choices are affected by things they have been told or shown previously (although some of these may be affected by the reproducibility problems that have beset psychological research recently, I don’t think the phenomenon of priming in general can be dismissed). Nor is it a purely academic matter. Unconscious bias is generally held to be a serious problem, responsible for the perpetuation of various kinds of discrimination by people who at a conscious level are fair-minded and well-meaning.

Feldman, however, suggests that the unconscious is neither scientifically testable nor logically sound. It may well be true that psychoanalytic explanations are scientifically slippery; mistaken predictions about a given subject can always be attributed to a further hidden motivation or complex, so that while one interpretation can be proved false, the psychoanalytic model overall cannot be. However, more generally there is good scientific evidence for unconscious influences on our behaviour, as I’ve mentioned, so perhaps it depends on what kind of unconscious we’re talking about. On the logical front, Feldman suggests that the unconscious is a ‘homunculus’: an example of the kind of explanation that attributes some mental functions to ‘a little man in your head’, a mental module that is just assumed to be able to do whatever a whole brain can do. He quite rightly says that homuncular theories merely defer explanation in a way which is most often useless and unjustified.

But is he right? On one hand people like Dennett, as we’ve discussed in the past, have said that homuncular arguments may be alright with certain provisos; on the other hand, is it clear that the unconscious really is a homuncular entity? The key question, I think, is whether the unconscious is an entity that is just assumed to do all the sorts of things a complete mind would do. If we stick to Freud, Feldman’s charges may have substance; the unconscious seems to have desires and motivations, emotions, and plans; it understands what is going on in our lives pretty well and can make intelligently targeted interventions and encode messages in complex ways. In a lot of ways it is like a complete person – or rather, like three people: id, ego, and superego. A Freudian might argue over that; however, in the final analysis it’s not the decisive issue, because we’re not bound to stick to a Freudian or psychoanalytic reading of the unconscious anyway. Again, it depends what kind of unconscious we’re proposing. We could go for a much simpler version which does some basic things for us but at a level far below that of a real homunculus. Perhaps we could even speak loosely of an unconscious if it were no more than the combined effect of many separate mental features?

In fact, Feldman accepts all this. He is quite comfortable with our doing things unconsciously; he merely denies the existence of the unconscious as a distinct, coherent thinking entity. He uses the example of driving along a familiar route; we perform perfectly, but afterwards cannot remember doing the steering or changing gear at any stage. Myself, I think this is a matter of memory, not inattention while driving – if we were stopped at any point in the journey I don’t think we would have to snap out of some trance-like state; it’s just that we don’t remember. But in general Feldman’s position seems entirely sensible.

There is actually something a little odd in the way we talk about unconsciousness. Virtually everything is unconscious, after all. We don’t remark on the fact that muscles or the gut do their job without being conscious; it’s the unique presence of consciousness in mental activity that is worthy of mention. So why do we even talk about unconscious functions, let alone an unconscious?

Neurocritic asks a great question here, neatly provoking that which he would have defined – thought. What is thought, and what are individual thoughts? He quotes reports that we have an estimated 70,000 thoughts a day and justly asks how on earth anyone knows. How can you count thoughts?

Well, we like a challenge round here, so what is a thought? I’m going to lay into this one without showing all my working (this is after all a blog post, not a treatise), but I hope to make sense intermittently. I will start by laying it down axiomatically that a thought is about or of something. In philosophical language, it has intentionality. I include perceptions as thoughts, though more often when we mention thoughts we have in mind thoughts about distant, remembered or possible things rather than ones that are currently present to our senses. We may also have in mind thoughts about perceptions or thoughts about other thoughts – in the jargon, higher-order thoughts.

Now I believe we can say three things about a thought at different levels of description. At an intuitive level, it has content. At a psychological level it is an act of recognition; recognition of the thing that forms the content. And at a neural level it is a pattern of activity reliably correlated with the perception, recollection, or consideration of the thing that forms the content; recognition is exactly this chiming of neural patterns with things (What exactly do I mean by ‘chiming of neural patterns’? No time for that now, move along please!). Note that while a given pattern of neural activity always correlates with one thought about one thing, there will be many other patterns of neural activity that correlate with slightly different thoughts about that same thing – that thing in different contexts or from different aspects. A thought is not uniquely identifiable by the thing it is about (we could develop a theory of broader content which would uniquely identify each thought, but that would have weird consequences so let’s not). Note also that these ‘things’ I speak of may be imaginary or abstract entities as well as concrete physical objects: there are a lot of problems connected with that which I will ignore here.

So what is one thought? It’s pretty clear intuitively that a thought may be part of a sequence which itself would also normally be regarded as a thought. If I think about going to make a cup of tea I may be thinking of putting the kettle on, warming the pot, measuring out the tea, and so on; I’ve had several thoughts in one way but in another the sequence only amounts to a thought about making tea. I may also think about complex things; when I think of the teapot I think of handle, spout, and so on. These cases are different in some respects, though in my view they use the same mechanism of linking objects of thought by recognising an over-arching entity that includes them. This linkage by moving up and down between recognition of larger and smaller entities is in my view what binds a train of thought together. Sitting here I perceive a small sensation of thirst, which I recognise as a typical initial stage of the larger idea of having a drink. One recognisable part of having a drink may be making the tea, part of which in turn involves the recognisable actions of standing up, going to the kitchen… and so on. However, great care must be taken here to distinguish between the things a thought contains and the things it implies. If we allow implication then every thought about a cup of tea implies an indefinitely expanding set of background ideas and every thought has infinite content.

Nevertheless, the fact that sequences can be amalgamated suggests that there is no largest possible thought. We can go on adding more elements. There’s a strong analogy here with the formation of sentences when speaking or writing. A thought or a sentence tends to run to a natural conclusion after a while, but this seems to arise partly because we run out of mental steam, and partly because short thoughts and short sentences are more manageable and can together do anything that longer ones can do. In principle a sentence could go on indefinitely, and so could a thought. Indeed, since the thread of relevance is weakened but not (we hope) lost at each junction between sentences or thoughts, we can perhaps regard whole passages of prose as embodying a single complex thought. The Decline and Fall of the Roman Empire is arguably a single massively complicated thought that emerged from Gibbon’s brain over an unusually extended period, having first sprung to mind as he ‘sat musing amidst the ruins of the Capitol, while the barefoot friars were singing vespers in the Temple of Jupiter’.

Parenthetically I throw in the speculation that grammatical sentence structure loosely mirrors the structure of thought; perhaps particular real-world grammars emerge from the regular bashing together of people’s individual mental thought structures, with all the variable compromise and conventionalisation that that would involve.

Is there a smallest possible thought? If we can go on putting thoughts together indefinitely, like more and more complex molecules, is there a level at which we get down to thoughts like atoms, incapable of further division without destruction?

As we enter this territory, we walk among the largely forgotten ruins of some grand projects of the past. People as clever as Leibniz once thought we might manage to define a set of semantic primitives, basic elements out of which all thoughts must be built. The idea, intuitively, was roughly that we could take the dictionary and define each word in terms of simpler ones; then define the words in the definitions in ones that were simpler still, until we boiled everything down to a handful of basics which we sort of expected to be words encapsulating elementary concepts of physics, ethics, maths, and so on.
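
Just to make the procedure concrete, here’s a little sketch of the kind of reduction they had in mind (Python; the toy dictionary is entirely my invention, rigged for illustration, and nobody’s real proposal):

```python
# A toy sketch of the reduction being attempted. Start from words taken as
# given primitives, repeatedly mark any word definable purely in terms of
# already-reduced words, and see what refuses to reduce.

toy_dictionary = {
    "tea":     ["drink", "leaf", "water"],
    "drink":   ["swallow", "liquid"],
    "liquid":  ["water"],
    "water":   ["liquid"],      # 'liquid' and 'water' define each other
    "leaf":    ["plant"],
    "plant":   [],              # empty definition = treated as primitive
    "swallow": [],
}

def attempt_reduction(dictionary):
    """Return the words that cannot be boiled down to the primitives."""
    reduced = {w for w, defn in dictionary.items() if not defn}
    changed = True
    while changed:
        changed = False
        for word, defn in dictionary.items():
            if word not in reduced and all(d in reduced for d in defn):
                reduced.add(word)
                changed = True
    return set(dictionary) - reduced

# The circular pair blocks not just itself but everything defined via it:
print(attempt_reduction(toy_dictionary))  # {'tea', 'drink', 'liquid', 'water'}
```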

Of course, it didn’t work. It turns out that the process of definition is not analytical but expository. At the bottom level our primitives turn out to contain concepts from higher layers; the universe by transcendence and slippery lamination eludes comprehensive categorisation. As Borges said:

It is clear that there is no classification of the Universe that is not arbitrary and full of conjectures. The reason for this is very simple: we do not know what kind of thing the universe is. We can go further; we suspect that there is no universe in the organic, unifying sense of that ambitious word.

That doesn’t mean there is no smallest thought in some less ambitious sense. There may not be primitives, but to resurrect the analogy with language, there might be words.  If, as I believe, thoughts correlate with patterns of neural activity, it follows that although complex thoughts may arise from patterns that evolve over minutes or even years (like the unimaginably complex sequence of neural firing that generated Gibbon’s masterpiece), we could in principle look at a snapshot and have our instantaneous smallest thought.

It still isn’t necessarily the case that we could count atomic thoughts. It would depend whether the brain snaps smartly between one meaningful pattern and another, as indeed language does between words, or smooshes one pattern gradually into another. (One small qualification: although written and mental words seem nicely separated, in spoken language the sound tends to be very smooshy.) My guess is that it’s more like the former than the latter (it doesn’t feel as if thinking about tea morphs gradually into thinking about boiling water, more like a snappy shift from one to the other), but it is hard to be sure that that is always the case. In principle it’s a matter that could be illuminated or resolved by empirical research, though that would require a remarkable level of detailed observation. At any rate no-one has counted thoughts this way yet and perhaps they never will.
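
For what it’s worth, here is a toy illustration of what counting thoughts from snapshots would even involve (the ‘neural’ data below is invented noise, not real neuroscience; the point is only the contrast between snapping and smooshing):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_activity(patterns, dwell=50, snappy=True):
    """Toy 'neural' time series: hold each pattern for `dwell` steps,
    either snapping between patterns or blending linearly (smooshing)."""
    frames = []
    for a, b in zip(patterns, patterns[1:] + [patterns[-1]]):
        for t in range(dwell):
            if snappy:
                frames.append(a + 0.05 * rng.standard_normal(a.shape))
            else:
                mix = t / dwell
                frames.append((1 - mix) * a + mix * b
                              + 0.05 * rng.standard_normal(a.shape))
    return np.array(frames)

def count_snaps(frames, threshold=0.5):
    """Count abrupt frame-to-frame changes -- a crude 'thought counter'."""
    jumps = np.linalg.norm(np.diff(frames, axis=0), axis=1)
    return int((jumps > threshold * jumps.max()).sum())

patterns = [rng.standard_normal(100) for _ in range(5)]
print(count_snaps(simulate_activity(patterns, snappy=True)))   # 4 clear transitions
print(count_snaps(simulate_activity(patterns, snappy=False)))  # arbitrary: no principled count
```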


This is completely off-topic, and a much more substantial piece than my usual posts. However, this discussion at Aeon prompted me to put forward some thoughts on similar issues which I wrote a while ago. I hope this is interesting, but in any case normal service will resume in a couple of days…


Debt problems beset the modern world, from unpayable mortgages and the banking crises they precipitate, through lives eroded by unmanageable loans, the sovereign debt problems that have threatened the stability of Europe, to the vast interest repayments that quietly cancel out much of the aid given to some developing countries. Debt is arguably the modern economic problem. It is the millstone round our necks; yet we are not, it seems, to blame those who put it there. Debtors are not seen as the victims of a poorly designed, one-sided deal, but as the architects of their own prison. It is widely accepted that they face an absolute moral duty to pay up, irrespective of capacity or consequences. The debtor now bears all the blame, although once it would have been the lenders who were shamed.

That would have been so because usury was once accounted a sin, and while the word now implies extortionate terms, in those days it simply meant the lending of money at interest – any rate of interest. The moral intuition behind that judgement is clear – if you gave some money, you were entitled to the same amount back: no less, no more. Payment and repayment should balance: if you demanded interest, you were taking something for nothing. The lenders did no work, added no new goods to the world, and suffered no inconvenience while the gold was out of their counting house. Indeed, while someone else held their money they were freed from the nagging fear of theft and the cost and inconvenience of guarding their gold. Why should they profit?

From a twenty-first century perspective that undeniably seems naive. Interest is such a basic part of the economic technology underlying the modern world that to give it up appears mad: we might as well contemplate doing without electricity. The very word ‘usury’ has a fustian, antiquarian sound, with some problematic associations lurking in the background. An exploded concept, then, an archaic word; a sin we’re well rid of?

Yet our problems with debt surely suggest that there is an element of truth lurking in the older consensus after all; that there is a need for a strong concept of improper lending. Isn’t there after all something wrong with a view that blames only one party to a lending plan that has gone disastrously off track? Shouldn’t the old sin now be raised from its uneasy sleep: shouldn’t usury, suitably defined, be anathematised once more, as it was in earlier times?

Where is consciousness? It’s out there, apparently, not in here. There has been an interesting dialogue series going on between Riccardo Manzotti and Tim Parks in the NYRB (thanks to Tom Clark for drawing my attention to it). The separate articles are not particularly helpfully laid out or linked to each other; the series is:

http://www.nybooks.com/daily/2016/11/21/challenge-of-defining-consciousness/
http://www.nybooks.com/daily/2016/12/08/color-of-consciousness/
http://www.nybooks.com/daily/2016/12/30/consciousness-does-information-smell/
http://www.nybooks.com/daily/2017/01/26/consciousness-the-ice-cream-problem/
http://www.nybooks.com/daily/2017/02/22/consciousness-am-i-the-apple/
http://www.nybooks.com/daily/2017/03/16/consciousness-mind-in-the-whirlwind/
http://www.nybooks.com/daily/2017/04/20/consciousness-dreaming-outside-our-heads/
http://www.nybooks.com/daily/2017/05/11/consciousness-the-body-and-us/
http://www.nybooks.com/daily/2017/06/17/consciousness-whos-at-the-wheel/

We discussed Manzotti’s views back in 2006, when, with Honderich and Tonneau, he represented a new wave of externalism. His version seemed to me perhaps the clearest and most attractive back then (though I think he’s mistaken). He continues to put forward some good arguments.

In the first part, Manzotti says consciousness is awareness, experience. It is somewhat mysterious – we mustn’t take for granted any view about a movie playing in our head or the like – and it doesn’t feature in the scientific account. All the events and processes described by science could, it seems, go on without conscious experience occurring.

He is scathing, however, about the view that consciousness is therefore special (surely something that science doesn’t account for can reasonably be seen as special?), and he suggests the word “mental” is a kind of conceptual dustbin for anything we can’t accommodate otherwise. He and Parks describe the majority of views as internalist, dedicated to the view that one way or another neural activity just is consciousness. Many neural correlates of consciousness have been spotted, says Manzotti, but correlates ain’t the thing itself.

In the second part he tackles colour, one of the strongest cards in the internalist hand. It looks to us as if things just have colour as a simple property, but in fact the science of colour tells us it’s very far from being that simple. For one thing how we perceive a colour depends strongly on what other colours are adjacent; Manzotti demonstrates this with a graphic where areas with the same RGB values appear either blue or green. Examples like this make it very tempting to conclude that colour is constructed in the brain, but Manzotti boldly suggests that if science and ordinary understanding are at odds, so much the worse for science. Maybe we ought to accept that those colours really are different, and be damned to RGB values.

The third dialogue attacks the metaphor of a computer often applied to the brain, and rejects talk of information processing. Information is not a physical thing, says Manzotti, and to speak of it as though it were a visible fluid passing through the brain risks dualism. Tononi, with his theory of integrated information, accepts this; he agrees that his ideas about information having two aspects point that way.

So what’s a better answer? Manzotti traces externalist ideas back to Aristotle, but focuses on the more recent ideas of affordances and enactivism. An affordance is roughly a possibility offered to us by an object; a hammer offers us the possibility of hitting nails. This idea of bashing things does not need to be represented in the head, because it is out there in the form of the hammer. Enactivism develops a more general idea of perception as action, but runs into difficulties in some cases such as that of dreams, where we seem to have experience without action; or consider that licking a strawberry or a chocolate ice cream is the same action but yields very different experiences.

To set out his own view, Manzotti introduces the ‘metaphysical switchboard’: one switch toggles whether subject and object are separate, the other whether the subject is physical or not. If they’re separate, and we choose to make the subject non-physical, we get something like Cartesian dualism, with all the problems that entails. If we select ‘physical’ then we get the view of modern science; and that too seems to be failing. If subject and object are neither separate nor physical, we get Berkeleyan idealism; my perceptions actually constitute reality. The only option that works is to say that subject and object are identical, but physical; so when I see an apple, my experience of it is identical with the apple itself. Parks, rightly I think, says that most people will find this bonkers at first sight. But after all, the apple is the only thing that has apple-like qualities! There’s no appliness in my brain or in my actions.

This raises many problems. My experience of the apple changes according to conditions, yet the apple itself doesn’t change. Oh no? says Manzotti, why not? You’re just clinging to the subject/object distinction; let it go and there’s no problem. OK, but if my experience of the apple is identical with the apple, and so is yours, then our experiences must be identical. In fact, since subject and object are the same, we must also be identical!

The answer here is curious. Manzotti points out that the physical quality of velocity is relative to other things; you may be travelling at one speed relative to me but a different one compared to that train going by. In fact, he says, all physical qualities are relative, so the apple is an apple experience relative to one animal (me) and at the same time relative to another in a different way. I don’t think this ingenious manoeuvre ultimately works; it seems Manzotti is introducing an intermediate entity of the kind he was trying to dispel; we now have an apple-experience relative to me which is different from the one relative to you. What binds these and makes them experiences of the same apple? If we say nothing, we fall back into idealism; if it’s the real physical apple, then we’re more or less back with the traditional framework, just differently labelled.

What about dreams and hallucinations? Manzotti holds that they are always made up out of real things we have previously experienced. Hey, he says, if we just invent things and colour is made in the head, how come we never dream new colours? He argues that there is always an interval between cause and effect when we experience things; given that, why shouldn’t real things from long ago be the causes of dreams?

And the self, that other element in the traditional picture? It’s made up of all the experiences, all the things experienced, that are relative to us; all physical, if a little scattered and dare I say metaphysically unusual; a massive conjunction bound together by… nothing in particular? Of course the body is central, and for certain feelings, or for when we’re in a dark, silent room, it may be especially salient. But it’s not the whole thing, and still less is the brain.

In the latest dialogue, Manzotti and Parks consider free will. For Manzotti, having said that you are the sum of your experiences, it is straightforward to say that your decisions are made by the subset of those experiences that are causally active; nothing that contradicts determinist physics, but a reasonable sense in which we can say your act belonged to you. To me this is a relatively appealing outlook.

Overall? Well, I like the way externalism seeks to get rid of all the problems with mediation that lead many people to think we never experience the world, only our own impressions of it. Manzotti’s version is particularly coherent and intelligible. I’m not sure his clever relativity finally works though. I agree that experience isn’t strictly in the brain, but I don’t think it’s in the apple either; to talk about its physical location is just a mistake. The processes that give rise to experience certainly have a location, but in itself it just doesn’t have that kind of property.

Another strange sidelight on free will. Some of the most-discussed findings in the field come from Libet’s celebrated research, which found that Readiness Potentials (RPs) in the brain showed when a decision to move had been made, significantly before the subject was aware of having decided. Libet himself thought this was problematic for free will, but that we could still have ‘Free Won’t’ – we could still change our minds after the RP had appeared and veto the planned movement.

A new paper (discussed here by Jerry Coyne) follows up on this, and seems to show that while we do have something like this veto facility, there is a time limit on that too, and beyond a certain point the planned move will be made regardless.

The actual experiment was in three phases. Subjects were given a light and a pedal and set up with equipment to detect RPs in their brain. They were told to press the pedal at a time of their choosing when the light was green, but not when it had turned red. The first run merely trained the equipment to detect RPs, with the light turning red randomly. In the second phase, the light turned red when an RP was detected, so that the subjects were in effect being asked to veto their own decision to press. In the third phase, they were told that their decisions were being predicted and they were asked to try to be unpredictable.

Detection of RPs actually took longer in some instances than others. It turned out that where the RP was picked up early enough, subjects could exercise the veto; but once the move was 200ms or less away, it was impossible to stop.
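
For concreteness, here’s a toy sketch of the timing logic as I understand it from the description above (my reconstruction, not the researchers’ code; the numbers are illustrative apart from the 200ms point of no return):

```python
# Toy model: an RP is detected at some variable interval before the planned
# movement; the light turns red on detection. If that happens more than
# ~200ms before the movement, the subject can veto; inside that window the
# press happens regardless.

import random

POINT_OF_NO_RETURN_MS = 200  # from the result described above

def trial():
    move_at = 1000                                      # planned press, ms into trial
    rp_detected_at = move_at - random.randint(50, 600)  # detection latency varies
    red_light_at = rp_detected_at                       # light turns red on detection
    if move_at - red_light_at > POINT_OF_NO_RETURN_MS:
        return "vetoed"
    return "pressed anyway"

results = [trial() for _ in range(10_000)]
print(results.count("vetoed") / len(results))  # fraction of stoppable trials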

What does this prove, beyond the bare facts of the results? Perhaps not much. The conditions of the experiment are very strange and do not resemble everyday decision-making very much at all. It was always an odd feature of Libet’s research that subjects were asked to get ready to move but choose the time capriciously according to whim; not a mental exercise that comes up very often in real life. In the new research, subjects further have to stop when the light is red; they don’t, you notice, choose to veto their move, but merely respond to a pre-set signal. Whether this deserves to be called free won’t is debatable; it isn’t a free decision-making process. How could it be, anyway, that deciding to do something takes significantly longer than deciding not to do the same thing? Is it that decisions to move are preceded by an RP, but other second-order decisions about those decisions are not? We seem to be heading into a maze of complications if we go that way, substantially reducing the significance of Libet’s results.

Of course, if we don’t think that Libet’s results dethrone free will in the first place, we need not be very worried. My own view is that we need to distinguish between making a conscious decision and becoming aware of having made the decision. Some would argue that that second-order awareness is essential to the nature of conscious thought, but I don’t think so. For me Libet’s original research showed only that deciding and knowing you’ve decided are distinct, and the latter naturally follows after the former. So assuming that, like me, you think it’s fine to regard the results of certain physical processes as ‘free’ in a useful sense, free will remains untouched. If you were always a sceptic then of course Libet never worried you anyway, and nor will the new research.

Could the Universe be conscious? This might seem like one of those Interesting Questions To Which The Answer Is ‘No’ that so often provide arresting headlines in the popular press. Since the Universe contains everything, what would it be conscious of? What would it think about? Thinking about itself – thinking about any real thing – would be bizarre, analogous to us thinking about the activity of the neurons that were doing the thinking. But I suppose it could think about imaginary stuff. Perhaps even the cosmos can dream; perhaps it thinks it’s Cleopatra or Napoleon.

So far as I can see, no-one is actually suggesting the Universe as a whole, as an entity, is conscious. Instead this highly original paper by Gregory L. Matloff starts with panpsychism, a belief that there is some sort of universal field of proto-consciousness permeating the cosmos. That is a not unpopular outlook these days. What’s startling is Matloff’s suggestion that some stars might be able to do roughly what our brains are supposed by panpsychists to do: recruit the field and use it to generate their own consciousness, exerting some degree of voluntary control over their own movements.

He relies for evidence on a phenomenon called Parenago’s discontinuity; cooler, less massive stars seem to circle the galaxy a bit faster than the others. Dismissing a couple of rival explanations, he suggests that these cooler stars might be the ones capable of hosting consciousness, and might be capable of shooting jets from their interior in a consistent direction so as to exert an influence over their own motion. This might be a testable hypothesis, bringing panpsychism in from the debatable realms of philosophy to the rigorous science of astrophysics (unkind people might suggest that the latter field is actually about as speculative as the former; I couldn’t possibly comment).

In discussing panpsychism it is good to draw a distinction between types of consciousness. There is a certain practical decision-making capacity in human consciousness that is relatively well rooted in science in several ways. We can see roughly how it emerged from biological evolution and why it is useful, and we have at least some idea of how neurons might do it, together with a lot of evidence that in fact, they do do it. Then there is the much mistier business of subjective experience, what being conscious is actually like. We know little about that and it raises severe problems. I think it would be true to claim that most panpsychists think the kind of awareness that suffuses the world is of the latter kind; it is a dim general awareness, not a capacity to make snappy decisions. It is, in my view, one of the big disadvantages of panpsychism that it does not help much with explaining the practical, working kind of consciousness and in fact arguably leaves us with more to account for than we had on our plate to start with.

Anyway, if Matloff’s theory is to be plausible, he needs to explain how stars could possibly build the decision-making kind of consciousness, and how the universal field would help. To his credit he recognises this – stars surely don’t have neurons – and offers at least some hints about how it might work. If I’ve got it right, the suggestion is that the universal field of consciousness might be identified with vacuum fluctuation pressures, which on the one hand might influence the molecules present in regions of the cooler stars under consideration, and on the other have effects within neurons more or less on Penrose/Hameroff lines. This is at best an outline, and raises immediate and difficult questions; why would vacuum fluctuation have anything to do with subjective experience? If a bunch of molecules in cool suns is enough for conscious volition, why doesn’t the sea have a mind of its own? And so on. For me the deadliest questions are the simplest. If cool stars have conscious control of their movements, why are they all using it the same way – to speed up their circulation a bit? You’d think if they were conscious they would be steering around in different ways according to their own choices. Then again, why would they choose to do anything? As animals we need consciousness to help us pursue food, shelter, reproduction, and so on. Why would stars care which way they went?

I want to be fair to Matloff, because we shouldn’t mock ideas merely for being unconventional. But I see one awful possibility looming. His theory somewhat recalls medieval ideas about angels moving the stars in perfect harmony. They acted in a co-ordinated way because although the angels had wills of their own, they subjected them to God’s. Now, why are the cool stars apparently all using their wills in a similarly co-ordinated way? Are they bound together through the vacuum fluctuations; have we finally found out there the physical manifestation of God? Please, please, nobody go in that direction!

Regular readers will be sad to hear that Arnold Trehub, a fairly regular contributor to discussion on Conscious Entities, died in April.

Consciousness: it’s all theology in the end. Or philosophy, if the distinction matters. Beyond a certain point it is, at any rate; when we start asking about the nature of consciousness, about what it really is. That, in bald summary, seems to be the view of Robert A. Burton in a resigned Nautilus piece. Burton is a neurologist, and he describes how Wilder Penfield first provided really hard and detailed evidence of close, specific correlations between neural and mental events, by showing that direct stimulation of neurons could evoke all kinds of memories, feelings, and other vivid mental experiences. Yet even Penfield, at the close of his career, did not think the mind, the spirit of man, could yet be fully explained by science, or would be any time soon; in the end we each had to adopt personal assumptions or beliefs about that.

That is more or less the conclusion Burton has come to. Of course he understands the great achievements of neuroscience, but in the end the Hard Problem, which he seems to interpret rather widely, defeats analysis and remains a matter for theology or philosophy, in which we merely adopt the outlook most congenial to us. You may feel that’s a little dismissive in relation to both fields, but I suppose we see his point. It will be no surprise that Burton dislikes Dennett’s sceptical outlook and dismissal of religion. He accuses him of falling into a certain circularity: Dennett claims consciousness is an illusion, but illusions, after all, require consciousness.

We can’t, of course, dismiss the issues of consciousness and selfhood as completely irrelevant to normal life; they bear directly on such matters as personal responsibility, moral rights, and who should be kept alive. But I think Burton could well argue that the way we deal with these issues in practice tends to vindicate his outlook; what we often see when these things are debated is a clash between differing sets of assumptions rather than a skilful forensic duel of competing reasoning.

Another line of argument that would tend to support Burton is the one that says worries about consciousness are largely confined to modern Western culture. I don’t know enough for my own opinion to be worth anything, but I’ve been told that in classical Indian and Chinese thought the issue of consciousness just never really arises, although both cultures have long and complex philosophical traditions. Indeed, much the same could be said about Ancient Greek philosophy, I think; there’s a good deal of philosophy of mind, but consciousness as we know it just doesn’t really present itself as a puzzle. Socrates never professed ignorance about what consciousness was.

A common view would be that it’s only after Descartes that the issue as we know it starts to take shape, because of his dualism; a distinction between body and spirit that certainly has its roots in the theological (and philosophical) traditions of Western Christianity. I myself would argue that the modern topic of consciousness didn’t really take shape until Turing raised the real possibility of digital computers; consciousness was recruited to play the role of the thing computers haven’t got, and our views on it have been shaped by that perspective over the last fifty years in particular. I’ve argued before that although Locke gives what might be the first recognisable version of the Hard Problem, with an inverted spectrum thought experiment, he actually doesn’t care about it much and only mentions it as a secondary argument about matters that, to him, seemed more important.

I think it is true in some respects that, as William James said, consciousness is the last remnant of the vanishing soul. Certainly, when people deny the reality of the self, it often seems to me that their main purpose is to deny the reality of the soul. But I still believe that Burton’s view cedes too much to relativism – as I think Fodor once said, I hate relativism. We got into this business – even the theologians – because we wanted the truth, and we’re not going to be fobbed off with that stuff! Scientists may become impatient when no agreed answer is forthcoming after a couple of centuries, but I cling to the idea that there is a truth of the matter about personhood, freedom, and consciousness. I recognise that there is in this, ironically, a tinge of an act of faith, but I don’t care.

Unfortunately, as always things are probably more complicated than that. Could freedom of the will, say, be a culturally relative matter? All my instincts say no, but if people don’t believe themselves to be free, doesn’t that in fact impose some limits on how free they really are? If I absolutely do not believe I’m able to touch the sacred statue, then although the inability may be purely psychological, couldn’t it be real? It seems there are at least some fuzzy edges. Could I abolish myself by ceasing to believe in my own existence? In a way you could say that is in caricature form what Buddhists believe (though they think that correctly understood, my existence was a delusion anyway). That’s too much for me, and not only because of the sort of circularity mentioned above; I think it’s much too pessimistic to give up on good objective accounts of agency and selfhood.

There may never be a single clear answer that commands universal agreement on these issues, but then there has never been, and never will be, complete agreement about the correct form of human government; but we have surely come on a bit in the last thousand years. To abandon hope of similar gradual consensual progress on consciousness might be to neglect our civic duty. It follows that by reading and thinking about this, you are performing a humane and important task. Well done!

[Next week I return from a long holiday; I’ll leave you to judge whether I make more or less sense.]

A whole set of interesting articles from IEEE Spectrum explore the question of whether AI can and should copy the human brain more. Of course, so-called neural networks were originally inspired by the way the brain works, but they represent a drastic simplification of a partial understanding. In fact, they are so unlike real neurons it’s really rather remarkable that they turn out to perform useful processes at all. Karlheinz Meier provides a useful review of the development of neuromorphic computing, up to contemporary chips with impressive performance.

Jeff Hawkins suggests the brain is better in three ways. First, it learns by rewiring; by growing new synapses. This confers three benefits: learning that is fast, incremental, and continuous. The brain does not need to do lengthy retraining to learn new things. Remarkably, he says a single neuron can do substantial pieces of pattern recognition and acquire ‘knowledge’ of several patterns without them interfering with each other.
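
Here is a minimal sketch of that single-neuron claim (my own drastic simplification, loosely in the spirit of Hawkins’ account rather than taken from it): each dendritic ‘segment’ stores one sparse pattern, and learning a new pattern simply grows a new segment, leaving the old ones untouched – fast, incremental, and continuous.

```python
class ToyNeuron:
    def __init__(self, match_threshold=8):
        self.segments = []                 # one set of active-input indices per learned pattern
        self.match_threshold = match_threshold

    def learn(self, active_inputs):
        """Incremental learning: grow a new 'synaptic' segment for this pattern."""
        self.segments.append(frozenset(active_inputs))

    def recognises(self, active_inputs):
        """Fire if any single segment overlaps the input strongly enough."""
        active = set(active_inputs)
        return any(len(seg & active) >= self.match_threshold for seg in self.segments)

neuron = ToyNeuron()
neuron.learn(range(0, 10))    # pattern A
neuron.learn(range(50, 60))   # pattern B, learned later; A needs no retraining
print(neuron.recognises(range(0, 10)))                     # True: A still recognised
print(neuron.recognises(list(range(50, 58)) + [90, 91]))   # True: a noisy version of B
print(neuron.recognises(range(20, 30)))                    # False: never learned
```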

The second way in which the brain is better is that it uses sparse distributed representations; a particular idea such as ‘cat’ can be represented by a large number of neurons, with only a small percentage needing to be active at any one time. This makes the system robust in respect of noise and damage, but because some of the ‘cat’ neurons may play roles in the representation of other animals and other entities, it also makes it quick and efficient at recognising similarities and dealing with vague ideas (an animal in the bush which may or may not be a cat).
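
A quick toy demonstration of why sparse distributed representations behave this way (the vector size and sparsity here are invented for illustration, not Hawkins’ actual figures):

```python
import numpy as np

rng = np.random.default_rng(1)
N, ACTIVE = 2048, 40   # a big vector with ~2% of bits active: a sparse representation

def sdr():
    """A random sparse distributed representation: the indices of its active bits."""
    return set(rng.choice(N, size=ACTIVE, replace=False))

cat = sdr()
# 'dog' shares some of cat's active bits (shared mammal-ish features), rest random
dog = set(list(cat)[:15]) | set(rng.choice(N, size=ACTIVE - 15, replace=False))
damaged_cat = set(list(cat)[:30])   # a quarter of cat's bits knocked out by noise

def overlap(a, b):
    return len(a & b)

print(overlap(cat, damaged_cat))  # 30: still easily recognised as cat
print(overlap(cat, dog))          # ~15: similar things share bits
print(overlap(cat, sdr()))        # ~1: unrelated things barely overlap at all
```

Knock out a quarter of the bits and the representation is still unmistakable; share a few bits and you get similarity more or less for free.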

The third thing the brain does better, according to Hawkins, is sensorimotor integration. He makes the interesting claim that the brain effectively does this all over, as part of basic ordinary activity, not as a specialised central function. Instead of one big 3D model of the world, we have what amounts to little ones everywhere. This is interesting partly because it is, prima facie, so implausible. Doing your modelling a hundred or a million times over is going to use up a lot of energy and ‘processing power’, and it raises the obvious risk of inconsistency between models. But Hawkins says he has a detailed theory of how it works and you’d have to be bold to dismiss his claim.

There are several other articles, all worth a look. There are, in fact, a number of different reasons we might want to imitate the brain. We might want computers that can interface with humans better because, in part, they work in similar ways. We might want to understand the brain better and be able to test our understanding; an ability that might have real benefits for treating brain disease and injury, and to some degree make up for the ethical limitations on the experiments we can perform on humans. The main focus here, though, is on learning how to do those things the brain does so well, but which still cannot be done efficiently, or in some cases at all, by computers.

As a strategy, copying the brain has several drawbacks. First, we still don’t understand the brain well enough. Things have moved on greatly in recent years, but in some ways that just shows how limited our understanding was to begin with. There’s a significant danger that by imitating the brain without understanding it, we end up reproducing features that are functionally irrelevant; features the brain has for chance evolutionary reasons. Do we need a brain divided into two halves, as those of vertebrates generally are, or is that unimportant? Second, one thing we do know is that the brain is extraordinarily complex and finely structured. We are never going to reproduce all that in full detail – but perhaps it doesn’t matter; we’ve never replicated the exquisite engineering of feather technology either, but it didn’t stop us achieving flight or understanding birds.

I think the challenge of understanding the brain is unique, but trying to copy it is probably an increasingly productive strategy.