Maybe there’s a better strategy on consciousness? An early draft paper by David Chalmers suggests we turn from the Hard Problem (explaining why there is ‘something it is like’ to experience things) and address the Meta-Problem of why people think there is a Hard Problem; why we find the explanation of phenomenal experience problematic. While the paper does make broadly clear what Chalmers’ own views are, it primarily seeks to map the territory, and does so in a way that is very useful.

Why would we decide to focus on the Meta-Problem? For sceptics, who don’t believe in phenomenal experience or think that the apparent problems about it stem from mistakes and delusions, it’s a natural piece of tidying up. In fact, for sceptics, why people think there’s a problem may well be the only thing that really needs explaining or is capable of explanation. But Chalmers is not a sceptic. Although he acknowledges the merits of the broad sceptical case about phenomenal consciousness which Keith Frankish has recently championed under the label of illusionism, he believes it is indeed real and problematic. He believes, however, that illuminating the Meta-Problem through a programme of thoughtful and empirical research might well help solve the Hard Problem itself, and is a matter of interest well beyond sceptical circles.

To put my cards on the table, I think he is over-optimistic, and seems to take too much comfort from the fact that there have to be physical and functional explanations for everything. From this it follows that there must indeed at least be physical and functional explanations for our reports of experience, our reports of the problem, and our dispositions to speak of phenomenal experience, qualia, etc. But it does not follow that there must be adequate and satisfying explanations.

Certainly physical and functional explanations alone would not be good enough to banish our worries about phenomenal experience. They would not make the itch go away. In fact, I would argue that they are not even adequate for issues to do with the ‘Easy Problem’, roughly the question of how consciousness allows us to produce intelligent and well-directed behaviour. We usually look for higher-level explanations even there; notably explanations with an element of teleology – ones that tell us what things are for or what they are supposed to do. Such explanations can normally be cashed out safely in non-teleological terms, such as strictly-worded evolutionary accounts; but that does not mean they are dispensable or not needed in order for us to understand properly.

How much more challenging things are when we come to Hard Problem issues, where a claim that they lie beyond physics is of the essence. Chalmers’ optimism is encapsulated in a sentence when he says…

Presumably there is at least a very close tie between the mechanisms that generate phenomenal reports and consciousness itself.

There’s your problem. Illusionists can be content with explanations that never touch on phenomenal consciousness because they don’t think it exists, but no explanation that does not connect with it will satisfy qualophiles. But how can you connect with a phenomenon explanatorily without diagnosing its nature? It really seems that for believers, we have to solve the Hard Problem first (or at least, simultaneously) because believers are constrained to say that the appearance of a problem arises from a real problem.

Logically, that is not quite the case; we could say that our dispositions to talk about phenomenal experience arise from merely material causes, but just happen to be truthful about a second world of phenomenal experience, or are truthful in light of a Leibnizian pre-established harmony. Some qualophiles are similarly prepared to say that their utterances about qualia are not caused by qualia, so that position might seem appealing in some quarters. To me the harmonised second world seems hopelessly redundant, and that is why something like illusionism is, at the end of the day, the only game in town.

I should make it clear that Chalmers by no means neglects the question of what sort of explanation will do; in fact he provides a rich and characteristically thorough discussion. It’s more that in my opinion, he just doesn’t know when he’s beaten, which to be fair may be an outlook essential to the conduct of philosophy.

I say that something like illusionism seems to be the only game in town, though I don’t quite call myself an illusionist. There’s a presentational difficulty for me because I think the reality of experience, in an appropriate sense, is the nub of the matter. But you could situate my view as the form of illusionism which says the appearance of ineffable phenomenal experience arises from the mistaken assumption that particular real experiences should be within the explanatory scope of general physical theory.

I won’t attempt to summarise the whole of Chalmers’ discussion, which is detailed and illuminating; although I think he is doomed to disappointment, the project he proposes might well yield good new insights; it’s often been the case that false philosophical positions were more fecund than true ones.

There’s a fundamental ontological difference between people and programs which means that uploading a mind into a machine is quite impossible.

I thought I’d get my view in first (hey, it’s my blog), but I was inspired to do so by Beth Elderkin’s compilation of expert views in Gizmodo, inspired in turn by Netflix’s series Altered Carbon. The question of uploading is often discussed in terms of a hypothetical Star Trek style scanner and the puzzling thought experiments it enables. What if instead of producing a duplicate of me far away, the scanner produced two duplicates? What if my original body was not destroyed – which is me? But let’s cut to the chase; digital data and a real person belong to different ontological realms. Digital data is a set of numbers, and so has a kind of eternal Platonic essence. A person is a messy entity bound into time and space. The former subsist, the latter exist; you cannot turn one into the other, any more than an integer can become a biscuit and get eaten.

Or look at it this way; a digitisation is a description. Descriptions, however good, do not contain the thing described (which is why the science of colour vision does not contain the colour red, as Mary found in the famous thought experiment).

OK, well, that’s that, see you next time… Oh, sorry, yes, the experts…

Actually there are many good points in the expert views. Susan Schneider makes three main ones. First, we don’t know what features of a human brain are essential, so we cannot be sure we are reproducing them; quantum physics imposes some limits on how precisely we can copy the brain anyway. Second, the person left behind by a non-destructive scanner surely is still you, so a destructive scan amounts to death. Third, we don’t know whether AI consciousness is possible at all. So no dice.

Anders Sandberg makes the philosophical point that it’s debatable whether a scanner transfers identity. He tends to agree with Parfit that there is no truth of the matter about it. He also makes the technical point that scanning a brain in sufficient detail is an impossibly vast and challenging task, well beyond current technology at least. While a digital insert controlling a biological body seems feasible in theory, reshaping a biological brain is probably out of the question. He goes on to consider ethical objections to uploading, which don’t convince him.

Randal Koene thinks uploading probably is possible. Human consciousness, according to the evidence, arises from brain processes; if we reproduce the processes, we reproduce the mind. The way forward may be through brain prostheses that replace damaged sections of brain, which might lead ultimately to a full replacement. He thinks we must pursue the possibility of uploading in order to escape from the ecological niche in which we may otherwise be trapped (I think humans have other potential ways around that problem).

Miguel A. L. Nicolelis dismisses the idea. Our minds are not digital at all, he says, and depend on information embedded in the brain tissue that cannot be extracted by digital means.

I’m basically with Nicolelis, I fear.


The self is real – it just, like Walt Whitman, contains multitudes. That’s the case made by Serife Tekin in Aeon. She begins by rightly pointing out the current popularity of disbelief in the self. She traces antirealist thinking right back to Hume, who said he was never able to spot his self by introspection; all he ever came up with was a bundle of perceptions. Interestingly she picks out Dennett as a contemporary example of antirealism, but she could readily have pointed to several others who think the self is an illusion or misinterpretation, perhaps stemming from our cognitive limitations, or from the reflexivity that arises when we turn our mind on itself.

Tekin by contrast suggests the self is both real and open to proper scientific investigation. It’s just that it has many forms; it is multitudinous. Borrowing from Neisser, she suggests five main dimensions of the self…

…the ecological self, or the embodied self in the physical world, which perceives and interacts with the physical environment; the interpersonal self, or the self embedded in the social world, which constitutes and is constituted by intersubjective relationships with others; the temporally extended self, or the self in time, which is grounded in memories of the past and anticipation of the future; the private self which is exposed to experiences available only to the first person and not to others; and finally the conceptual self, which (accurately or falsely) represents the self to the self by drawing on the properties or characteristics of not only the person but also the social and cultural context to which she belongs.

I don’t think these five types are meant to exhaust the variety of the self, which actually comes in a huge variety of shifting shapes. Nor are we meant to think that there is no basic unity; the five work together to provide an overall coherence of agency, though not without retaining some inner tensions and contradictions (nothing too strange psychologically in the idea that we may entertain contradictory thoughts and feelings in certain contexts).

The fivefold structure pays off because Tekin can give a separate account of how each can be addressed scientifically. The ecological self is easily observable, for example; for the interpersonal self we need to pay attention to social aspects, but no great problem there. The most difficult seems likely to be the private self; Tekin seems to think we can get to that simply by interviewing people about ‘what it is like’, which perhaps underrates the problems.

Overall, it’s a sensible and appealing position. The curious thing is how close it seems to the kind of position taken by Dennett, here quoted as an example of antirealism. In fact, Dennett’s ideas are more nuanced than some. He doesn’t believe in a continuous, coherent self like a soul, but he is content to liken the self to a centre of gravity; not a real physical entity as such but a useful and harmless construction. As the author of the ‘multiple drafts’ theory of consciousness, he might rather like Tekin’s multitudinousness; and her approach to the private self looks quite like his ‘heterophenomenology’, in which we give up trying to study ineffable inner experience, but happily give consideration to what people tell us about ineffable inner experience.

This raises the attractive possibility that sceptics and believers might end up constructing effectively identical models of the self, the only difference being that one side regards the model as an eliminative reduction while the other sees it as simply analysis. I find that a strangely cheering prospect.


Could phrenology be true after all? The nineteenth century theory that bumps on the head correlated with personality traits has always been dismissed as absurd by most, right from its earliest days. Perhaps because of this, it has rarely received serious attention; but now O. Parker Jones, F. Alfaro-Almagro and S. Jbabdi have given it a properly rigorous examination, exploiting the resources of the UK Biobank.

Phrenology was based on ideas that are reasonable in themselves. First it supposes that different regions of the brain have different functions, following a consistent pattern in all humans. Second, it holds that if one of those functions plays an especially large part in a subject’s mental life, the relevant area will be larger, and tend to swell outwards. Third, these swellings in the brain will be reflected in the shape of the skull, in a way which can be detected by careful measurement, or through palpation by a skilled phrenologist. These propositions may be wrong, but cannot be dismissed as inherently nutty.

Phrenology used a list of ‘faculties’ such as ‘amativeness’ and ‘philoprogenitiveness’, which mapped on to areas of the brain; the charts and models which resulted have a strange appeal and are still often used ornamentally by people who have no belief or even interest in the theory they embody. There were 27 faculties in all; some of them look a little odd (number 5 is ‘Murder, carnivorousness (Destructiveness)’ which even vegetarians might accept is a rather heterogeneous grouping), but most make prima facie sense. In order to test the theory, the researchers tried to match up the faculties with ‘lifestyle’ measures available in the Biobank data. It has to be said that some of these matches are better than others. Linking ‘amativeness’ with number of sexual partners, or the number faculty with mathematicians, seems reasonable: linking combativeness with solicitors, and self-esteem with bankers is possibly more debatable. The proposed linkage of cautiousness with frequency of alcohol intake must surely be intended as a negative correlation. The researchers confess that their choices here reflect their own faculty of mirth to some degree. In general, it seems the researchers did not end up using the job-related lifestyle measures, because there just weren’t, for example, enough poets in the data.

In some cases there were strong correlations across faculties/lifestyle measures. Mostly these seem to have been unsurprising – writers, it turns out, tend to be good with words. More amusingly, the strongest correlation was between the ability to generate words and the number of sexual partners. If you want many partners, it seems you should be hanging out at poetry slams, not the gym.
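For readers curious about the mechanics of such a test, here is a minimal sketch of computing a Pearson correlation between a faculty score and a lifestyle measure. The data below are made up purely for illustration; they are not the study’s Biobank variables, and the real analysis was of course far more elaborate.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative, invented scores for a word-fluency measure and a
# lifestyle measure; the real study used UK Biobank data.
word_fluency = [4, 7, 2, 9, 5, 8, 3, 6]
partners     = [2, 5, 1, 7, 3, 6, 2, 4]
print(round(pearson(word_fluency, partners), 3))  # close to +1 for this toy data
```

A value near +1 or −1 indicates a strong linear relationship; near 0, none. The study’s point, of course, was that skull curvature showed no such relationship with anything.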

It will perhaps come as no surprise that the researchers found no correlation between the faculties and skull curvature. This is enough, with appropriate caveats, to show that ‘orthodox’ phrenology is incorrect. We sort of knew that already; as the paper points out, studies based on the deficits caused by brain lesions in particular areas long ago showed that while there was indeed localisation of function within the brain, the functions were not where phrenology said they ought to be. However, it leaves open the possibility that a reformed phrenology, with a different set of faculties, might still be sound.

This remaining possibility is removed by the second part of the study, which confirmed that there is no correlation between the bumpiness of the brain and the bumpiness of the skull. You cannot tell anything about the brain inside just by feeling the head that contains it. This too isn’t really a huge surprise. If the shape of the skull were determined by the shape of the brain, we might expect our heads to show visible signs of the different lobes that make up the large-scale structure of the brain; we might expect that the skull would be skewed the way the brain is, with one hemisphere edging ahead of the other.

However, as the researchers rightly say, it is good to have these things tested with proper scientific rigour. They also suggest that their techniques might have some value in certain real medical conditions where the brain is at risk of being constrained by the skull. I suppose it’s useful, too, to be reminded how popular a theory with no real scientific basis can become (not that we’re in much danger of forgetting that these days…).


We need a richer idea of consciousness and of our minds: Jenny Judge suggests that our experience of music in particular points to a need for an expanded conception of the mind to include visceral apprehension. Many who have championed the idea of an extended mind that isn’t just identifiable with the brain alone will be nodding along, perhaps rhythmically.

For those of us who are a little entrenched in the limited idea of the mind as a matter of the brain doing computations on representations, Judge cunningly offers a couple of pieces of bait in the form of solid cognitive insights her wider perspective can yield. One is that we perceive and respond to rhythm in important ways, even when we don’t consciously notice it. The timing of our utterances can actually change the way they are interpreted and carry significant information. A delay in giving assent can qualify the level of agreement, and apparently this is even culturally variable; the Japanese expect a snappy response, while in Denmark you can take your time without the risk of seeming grudging.

A second insight concerns entrainment, the tendency of connected vibrating systems to synchronise rhythm. Judge presents plausible evidence that a form of entrainment plays an important role in governing the activity of our minds and even of the neurons in the brain (so it ought not to be ignored, even by those who are initially happy with a narrower conception of cognition).
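For anyone who wants to see the mechanism at work, entrainment is easy to demonstrate with a toy simulation of two coupled phase oscillators (a minimal Kuramoto-style sketch; the frequencies and coupling strength here are illustrative choices of mine, not drawn from Judge’s piece).

```python
import math

def kuramoto_pair(w1, w2, k, dt=0.001, steps=20000):
    """Euler-integrate two coupled phase oscillators:
    d(theta_i)/dt = w_i + k * sin(theta_j - theta_i).
    Returns the history of the phase difference theta1 - theta2."""
    th1, th2 = 0.0, 1.5  # arbitrary starting phases
    history = []
    for _ in range(steps):
        d1 = w1 + k * math.sin(th2 - th1)
        d2 = w2 + k * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
        history.append(th1 - th2)
    return history

# With enough coupling, the pair phase-locks despite different
# natural frequencies: the phase difference stops drifting.
diffs = kuramoto_pair(w1=1.0, w2=1.2, k=0.5)
print(abs(diffs[-1] - diffs[-1000]))  # tiny once locked
```

Set `k=0.0` and the phase difference drifts forever; that qualitative switch from drifting to locking is what entrainment amounts to in the simplest case.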

Judge discusses the challenges in perceiving music, with its complexity and its inherent sequentiality. The way we perceive time and motion is complex (we could add that sometimes the visual system just seems to label some things as ‘moving’ even though they are not perceived as changing place). But, wisely I think, she does not quite make the further case that the phenomenology of musical experience is peculiarly intractable. It’s true that great music can cause us to experience emotional and cognitive states that we could never otherwise explore, and it would certainly be possible to base an argument from incredulity on this. Just as Leibniz professed disbelief about the possibility of a mill-type mechanism, however complex, producing awareness, or Brentano declared that intentionality was something else altogether, we could claim that musical experience just is not the kind of thing that physical processes could create. Such arguments are powerfully persuasive, but without some further explanation as to exactly why consciousness cannot arise from physical processing, they don’t prove anything.

It would be hard to disagree, however, with the suggestion that our phenomenological experience really needs to be properly charted in a way that does justice to its complexity. I’d have a go myself if I had any idea of how to set about it.

You may already have seen Jochen’s essay Four Verses from the Daodejing, an entry in this year’s FQXi competition. It’s a thought-provoking piece, so here are a few of the ones it provoked in me. In general I think it features a mix of alarming and sound reasoning which leads to a true yet perplexing conclusion.

In brief Jochen suggests that we apprehend the world only through models; in fact our minds deal only with these models. Modelling and computation are in essence the same. However, the connection between model and world is non-computable (or we face an infinite regress). The connection is therefore opaque to our minds and inexpressible. Why not, then, identify it with that other inexpressible element of cognition, qualia? So qualia turn out to be the things that incomprehensibly link our mental models with the real world. When Mary sees red for the first time, she does learn a new, non-physical fact, namely what the connection between her mental model and real red is. (I’d have to say that as something she can’t understand or express, it’s a weird kind of knowledge, but so be it.)

I think to talk of modelling so generally is misleading, though Jochen’s definition is itself broadly framed, which means I can’t say he’s wrong. In his terms it seems anything that uses data about the structure and causal functioning of X to make predictions about its behaviour would be a model. If you look at it that way, it’s true that virtually all our cognition is modelling. But to me a model leads us to think of something more comprehensive and enduring than we ought. In my mind at least, it conjures up a sort of model village or homunculus, when what’s really going on is something more fragmentary and ephemeral, with the brain lashing up a ‘model’ of my going to the shop for bread just now and then discarding it in favour of something different. I’d argue that we can’t have comprehensive all-purpose models of ourselves (or anything) because models only ever model features relevant to a particular purpose or set of circumstances. If a model reproduced all my features it would in fact be me (by Leibniz’ Law) and anyway the list of potentially relevant features goes on for ever.

The other thing I don’t like about liberal use of modelling is that it makes us vulnerable to the view that we only experience the model, not the world. People have often thought things like this, but to me it’s almost like the idea we never see distant planets, only telescope lenses.

Could qualia be the connection between model and world? It’s a clever idea, one of those that turn out on reflection not to be vulnerable to many of the counterarguments that first spring to mind. My main problem is that it doesn’t seem right phenomenologically. Arguments from one’s own perception of phenomenology are inherently weak, but then we are sort of relying on phenomenology for our belief (if any) in qualia in the first place. A red quale doesn’t seem like a connection, more like a property of the red thing; I’m not clear why or how I would be aware of this connection at all.

However, I think Jochen’s final conclusion is both poignant and broadly true. He suggests that models can have fundamental aspects, the ones that define their essential functions – but the world is not under a similar obligation. It follows that there are no fundamentals about the world as a whole.

I think that’s very likely true, and I’d make a very similar kind of argument in terms of explanation. There are no comprehensive explanations. Take a carrot. I can explain its nutritional and culinary properties, its biology, its metaphorical use as a motivator, its supposed status as the favourite foodstuff of rabbits, and lots of other aspects; but there is no total explanation that will account for every property I can come up with; in the end there is only the carrot. A demand for an explanation of the entire world is automatically a demand for just the kind of total explanation that cannot exist.

Although I believe this, I find it hard to accept; it leaves my mind with an unscratched itch. If we can’t explain the world, how can we assimilate it? Through contemplation? Perhaps that is what Laozi would have advocated. More likely he would have told us to get on with ordinary life. Stop thinking, and end your problems!



It’s not just that we don’t know how anaesthetics work – we don’t even know for sure that they work. Joshua Rothman’s review of a new book on the subject by Kate Cole-Adams quotes poignant stories of people on the operating table who may have been aware of what was going on. In some cases the chance remarks of medical staff seem to have worked almost like post-hypnotic suggestions: so perhaps all surgeons should loudly say that the patient is going to recover and feel better than ever, with new energy and confidence.

How is it that after all this time, we don’t know how anaesthetics work? As the piece aptly remarks, it’s about losing consciousness, and since we don’t know clearly what that is or how we come to have it, it’s no surprise that its suspension is also hard to understand. To add to the confusion, it seems that common anaesthetics paralyse plants, too. Surely it’s our nervous system anaesthetics mainly affect – but plants don’t even have a nervous system!

But come on, don’t we at least know that it really does work? Most of us have been through it, after all, and few have weird experiences; we just don’t feel the pain – or anything. The problem, as we’ve discussed before, is telling whether we don’t feel the pain, or whether we feel it but don’t remember it. This is an example of a philosophical problem that is far from being a purely academic matter.

It seems anaesthetics really do (at least) three different things. They paralyse the patient, making it easier to cut into them without adverse reactions, they remove conscious awareness or modulate it (it seems some drugs don’t stop you being aware of the pain, they just stop you caring about it somehow), and they stop the recording of memories, so you don’t recall the pain afterwards. Anaesthetists have a range of drugs to produce each of these effects. In many cases there is little doubt about their effectiveness. If a drug leaves you awake but feeling no pain, or if it simply leaves you with no memory, there’s not that much scope for argument. The problem arises when it comes to anaesthetics that are supposed to ‘knock you out’. The received wisdom is that they just blank out your awareness for a period, but as the review points out, there are some indications that instead they merely paralyse you and wipe your memory. The medical profession doesn’t have a good record of taking these issues very seriously; I’ve read that for years children were operated on after being given drugs that were known to do little more than paralyse them (hey, kids don’t feel pain, not really; next thing you’ll be telling me plants do…).

Actually, views about this are split; a considerable proportion of people take the view that if their memory is wiped, they don’t really care about having been in pain. It’s not a view I share (I’m an unashamed coward when it comes to pain), but it has some interesting implications. If we can make a painful operation OK by giving amnestics to remove all recollection, perhaps we should routinely do the same for victims of accidents. Or do doctors sometimes do that already…?

Tom Stafford reports on an interesting review of the psychology of conspiracy theories – the persistent belief that ‘they’ are working secretly to conceal the truth about the assassination of JFK or the moon landings, for example. The review suggests current research is better at explaining the forces that drive conspiracy theories than at examining their psychological consequences. It seems the theories are motivated by three needs; for understanding, for safety/control, and for a positive image of yourself and the groups you belong to. But in point of fact, they are not very good at meeting these needs and may even make the people who subscribe to them feel worse.

Stafford suggests we could see this as maladaptive coping. He criticises some aspects of the review, in particular the way it defines conspiracy theories rather loosely, so that it seems to include reasonable conspiracy beliefs. You’re not paranoid if they really are out to get you, after all.

Perhaps the most remarkable example of a genuine conspiracy is the way that around this time of year we all go to enormous lengths to convince our children that a fat old man is going to come down the chimney into their bedroom one night (an idea that terrifies a few of them, possibly the more rational ones). Kids who subscribed to the theory that parents, teachers and media were involved in a massive con would not be wrong, but would they be displaying early signs of a tendency to conspiracy theories? Is it rational, at a certain age, to believe in Santa?

So far as I recall, my own attitude back in the middle of the last century was neither exactly belief nor disbelief. I was well aware that people in department store grottos were proxies, merely dressed up as Father Christmas. I got as far as noting that the logistics of delivering presents to every child in the world in a single night were challenging, and vaguely hypothesised that the job was done by similar proxies, maybe one for each street. But I didn’t worry about it much. There were lots of things I didn’t fully understand at the time. I didn’t really know how department stores came to be full of stuff anyway – why worry about Santa’s grotto particularly? You could well say that my attitude to Santa back then was pretty much what my attitude to quantum physics is now. I don’t really understand it, and parts of it don’t seem to make any sense. But people I basically trust have got this for me, so I’m happy to take their word (just to be quite clear here, I am not suggesting that quantum physics is a massive conspiracy).

The matter of who you trust is, I think, at the root of the conspiracy theory thing generally. We all have to take a lot of things on trust from appropriate authorities. An essential and probably under-examined part of the education system is about teaching people which authorities to trust, and much of the academic system of peer review and publication, unsatisfactory as it is*, is about keeping authoritative sources identifiable and reliable. People who believe in conspiracy theories have flaws in their judgement about which authorities to accept.

Not that this is simple. Trusting authority is a tricky business which needs to be balanced with an ability to evaluate and critique even reliable authorities. People who have been thoroughly educated may be weak on this side, inclined to believe what they read and to pay more attention to the manifesto and the statement of principles than to what is actually happening. Uneducated people may be more inclined to use their own observation and reason on the basis of perceived personality. Sometimes this works better, an excellent reason why everyone should have the vote. They say the cab driver from ‘Seven Up’ observed around the turn of the century that the folks in the City were having a big party; in ten or fifteen years, he said, we’ll be told it’s all gone wrong and the bill is down to us. You can’t say that’s a detailed prediction of the crash, and it sounds a little conspiracyish, but it’s a good deal better than the financial experts of the day managed.

Perhaps the Father Christmas Conspiracy is the way we help our children sharpen up their understanding of the need to balance proper acknowledgement of reliable authority with prudent, independent use of common sense.

Merry Christmas!

*I think we ought to set up a Universal Academy which publishes free access papers and a great Summa Scientia, citation in which would be the gold standard of sound and important research. It wouldn’t be cheap, but maybe if we could get some kind of EU/USA rivalry going we could get two Academies?

All I want for Christmas is a new brain? There seems to have been quite a lot of discussion recently about the prospect of brain augmentation; adding in some extra computing power to the cognitive capacities we have already. Is this a good idea? I’m rather sceptical myself, but then I’m a bit of a Luddite in this area; I still don’t like the idea of controlling a computer with voice commands all that much.

Hasn’t evolution already optimised the brain in certain important respects? I think there may be some truth in that, but it doesn’t look as if evolution has done a perfect job. There are certainly one or two things about the human nervous system that look as if they could easily be improved. Think of the way our neural wiring crosses over from right to left for no particular reason. You could argue that although that serves no purpose it doesn’t do any real harm either, but what about the way our retinas are wired up from the front instead of the back, creating an entirely unnecessary blind spot where the bundle of nerves actually enters the eye – a blind spot which our brain then stops us seeing, so we don’t even know it’s there?

Nobody is proposing to fix those issues, of course, but aren’t there some obvious respects in which our brains could be improved by adding in some extra computational ability? Could we be more intelligent, perhaps? I think the definition of intelligence is controversial, but I’d say that if we could enhance our ability to recognise complex patterns quickly (which might be a big part of it) that would definitely be a bonus. Whether a chip could deliver that seems debatable at present.

Couldn’t our memories be improved? Human memory appears to have remarkable capacity, but retaining and recalling just those bits of information we need has always been an issue. Perhaps relatedly, we have that annoying inability to hold more than a handful of items in our minds at once, a limitation that makes it impossible for us to evaluate complex disjunctions and implications, so that we can’t mentally follow a lot of branching possibilities very far. It certainly seems that computer records are in some respects sharper, more accurate, and easier to access than the normal human system (whatever the normal human system actually is). It would be great to remember any text at will, for example, or exactly what happened on any given date within our lives. Being able to recall faces and names with complete accuracy would be very helpful to some of us.
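Why branching possibilities defeat us so quickly is easy to make concrete: with n independent yes/no items there are 2^n combinations, so even ten items generate over a thousand branches, far beyond the handful we can hold in mind. A minimal Python sketch (the function and formula here are my own illustration, not anything from the discussion):

```python
from itertools import product

def satisfying_assignments(n_vars, formula):
    """Brute-force a Boolean formula by enumerating all 2**n_vars
    truth assignments -- trivial for a computer, hopeless to do
    in one's head beyond a few variables."""
    return [bits for bits in product([False, True], repeat=n_vars)
            if formula(*bits)]

# A small 'complex disjunction and implication':
# (a or b) and (b implies c)
formula = lambda a, b, c: (a or b) and ((not b) or c)

print(len(list(product([False, True], repeat=10))))  # 1024 branches for just 10 items
print(satisfying_assignments(3, formula))
```

The point is only that the machine's advantage is exhaustive enumeration, exactly the thing our limited working memory rules out.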

On top of that, couldn’t we improve our capacity for logic so that we stop being stumped by those problems humans seem so bad at, like the Wason test? Or if nothing else, couldn’t we just have the ability to work out any arithmetic problem instantly and flawlessly, the way any computer can do?
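The contrast with the Wason test can be made concrete. In the classic version, the rule ‘if a card has a vowel on one side, it has an even number on the other’ can only be falsified by a vowel hiding an odd number, so the cards worth turning are the visible vowel and the visible odd number – the second being the one most people miss. A short sketch (the card set and helper names are my own illustration):

```python
def cards_to_turn(cards):
    """Return the visible faces that could falsify the rule
    'if vowel on one side, then even number on the other'.
    Only a visible vowel (which might hide an odd number) or a
    visible odd number (which might hide a vowel) can refute it."""
    def is_vowel(face):
        return isinstance(face, str) and face.lower() in "aeiou"
    def is_odd(face):
        return isinstance(face, int) and face % 2 == 1
    return [face for face in cards if is_vowel(face) or is_odd(face)]

print(cards_to_turn(["A", "K", 4, 7]))  # the classic answer: ['A', 7]

# The flawless arithmetic, of course, is the easy part for a machine:
print(6397107 * 8341977)
```

For the computer the logic is as mechanical as the multiplication; it is our own inference machinery, not the rule, that makes the puzzle hard.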

The key point here, I think, is integration. On the one hand we have a set of cognitive abilities that the human brain delivers. On the other, we have a different set delivered by computers. Can they be seamlessly integrated? The ideal augmentation would mean that, for example, if I need to multiply two seven-digit numbers I ‘just see’ the answer, the way I can just see that 3+1 is 4. If, on the contrary, I need to do something like ask in my head ‘what is 6397107 multiplied by 8341977?’ and then receive the answer spoken in an internal voice or displayed in an imagined visual image, there isn’t much advantage over using a calculator. In a similar way, we want our augmented memory or other capacity to just inform our thoughts directly, not be a capacity we can call up like an external facility.

So is seamless integration possible? I don’t think it’s possible to say for certain, but we seem to have achieved almost nothing to date. Attempts to plug into the brain so far have relied on setting up artificial linkages. Perhaps we detect a set of neurons that reliably fire when we think about tennis; then we can ask the subject to think about tennis when they want to signal ‘yes’, and detect the resulting activity. It sort of works, and might be of value for ‘locked in’ patients who can’t communicate any other way, but it’s very slow and clumsy otherwise – I don’t think we know for sure whether it even works long-term or whether, for example, the tennis linkage gradually degrades.

What we really want to do is plug directly into consciousness, but we have little idea of how. The brain does not modularise its conscious activity to suit us, and it may be that the only places we can effectively communicate with it are the places where it draws data together for its existing input and output devices. We might be able to feed images into early visual processing or take output from nerves that control muscles, for example. But if we’re reduced to that, how much better is that level of integration going to be than simply using our hands and eyes anyway? We can do a lot with those natural built-in interfaces; simple reading and writing may well be the greatest artificial augmentation the brain can ever get. So although there may be some cool devices coming along at some stage, I don’t think we can look for godlike augmented minds any time soon.

Incidentally, this problem of integration may be one good reason not to worry about robots taking over. If robots ever get human-style motivation, consciousness, and agency, the chances are that they will get them in broadly the way we get them. This suggests they will face the same integration problem that we do; seven-digit multiplication may be intrinsically no easier for them than it is for us. Yes, they will be able to access computers and use computation to help them, but you know, so can we. In fact we might be handier with that old keyboard than they are with their patched-together positronic brain-computer linkage.

Our conscious minds are driven by unconscious processes, much as it may seem otherwise, say David A. Oakley and Peter W. Halligan. A short version is here, though the full article is also admirably clear and readable.

To summarise very briefly, they suggest three distinct psychological elements are at work. The first, itself made up of various executive processes, is what we might call the invisible works; the various unconscious mechanisms that supply the content of what we generally consider conscious thought. Introspection shows that conscious thoughts often seem to pop up out of nowhere, so we should be ready enough to agree that consciousness is not entirely self-sustaining. When we wake up we generally find that the stream of consciousness is already a going concern. The authors also mention, in support of their case, various experiments. Some of these were on hypnotised subjects, which you might feel detracts from their credibility in explaining normal thought processes. Other ‘priming’ effects have also taken a bit of a knock in the recent trouble over reproducibility. But I wouldn’t make heavy weather of these points; the general contention that the contents of consciousness are generated by unconscious processes (at least to a great extent) seems to me one that few would object to. How could it be otherwise? It would be most peculiar if consciousness were a closed loop, like some Ouroboros swallowing its own tail.

The second element is a continuously generated personal narrative. This is an essentially passive record of some of the content generated by the ‘invisible works’, conditioned by an impression of selfhood and agency. The narrative has evolutionary survival value because it allows the exchange of experience and the co-ordination of behaviour, and enables us to make good guesses at others’ plans – the faculty often called ‘theory of mind’.

At first glance I thought the authors, who are clearly out to denounce something as an epiphenomenon (a thing that is generated by the mind but has no influence on it), had this personal narrative as their target, but that isn’t quite the case. While they see the narrative as essentially the passive product of the invisible works, they clearly believe it has some important influences on our behaviour through the way it enables us to talk to others and take their thoughts into account. One point which seems to me to need more prominence here is our ability to reflexively ‘talk to ourselves’ mentally and speculate about our own motives. I think the authors accept that this goes on, but some philosophers consider it a vital process, perhaps constitutive of consciousness, so I think they need to give it a substantial measure of attention. Indeed, it offers a potential explanation of free will and the impression of agency; it might be just the actions that flow from the reflexive process that we regard as our own free acts.

One question we might also ask is, why not identify the personal narrative as consciousness itself? It is what we remember of ourselves, after all. Alternatively, why not include the ‘invisible works’? These hidden processes fall outside consciousness because (I think) they are not accessible to introspection; but must all conscious processes be introspectable? There’s a distinction between first and second-order awareness (between knowing and knowing that we know) which offers some alternatives here.

It’s the third element that Oakley and Halligan really want to denounce; this is personal awareness, or what we might consider actual conscious experience. This, they say, is a kind of functionless emergent phenomenon. To ask its purpose is as futile as asking what a rainbow is for; it’s just a by-product of the way things work, an epiphenomenon. It has no evolutionary value, and resembles the whistle on a steam locomotive – powered by the machine but without influence over it (I’ve always thought that analogy short-changes whistles a bit, but never mind).

I suppose the main challenge here might be to ask why the authors think personal awareness is anything at all. It has no effects on mental processes, so any talk about it was not caused by the thing itself. Now we can talk about things that did not cause that talking; but those are typically abstract or imaginary entities. Given their broadly sceptical stance, should the authors be declaring that personal awareness is in fact not even an epiphenomenon, but a pure delusion?

I have my reservations about the structure suggested here, but it would be good to have clarity and, at the risk of damning with faint praise, this is certainly one of the more sensible proposals.