Tom has written a nice dialogue on the subject of qualia: it’s here.

Could we in fact learn useful lessons from talking to a robot which lacked qualia?

Perhaps not; one view would be that since the robot’s mind presumably works in the same way as ours, it would have similar qualia: or would think it did. We know that David Chalmers’ zombie twin talked and philosophised about its qualia in exactly the same way as the original.

It depends on what you mean by qualia, of course. Some people conceive of qualia as psychological items that add extra significance or force to experience; or as flags that draw attention to something of potential interest. Those play a distinct role in decision making and have an influence on behaviour. If robots were really to behave like us, they would have to have some functional analogue of that kind of qualia, and so we might indeed find that talking to them on the subject was really no better or worse than talking to our fellow human beings.

But those are not real qualia, because they are fully naturalised and effable things, measurable parts of the physical world. Whether you are experiencing the same blue quale as me would, if these flags or intensifiers were qualia, be an entirely measurable and objective question, capable of a clear answer. Real, philosophically interesting qualia are far more slippery than that.

So we might expect that a robot would reproduce the functional, a-consciousness parts of our mind, and leave the phenomenal, p-consciousness ones out. Like Tom’s robot, they would presumably be puzzled by references to subjective experience. Perhaps, then, there might be no point in talking to them about it, because they would be constitutionally incapable of shedding any light on it. They could tell us what the zombie life is like, but don’t we sort of know that already? They could play the kind of part in a dialogue that Socrates’ easily-bamboozled interlocutors always seemed to do, but that’s about it, presumably?

Or perhaps they would be able to show us, by providing a contrasting example, how and why it is that we come to have these qualia? There’s something distinctly odd about the way qualia are apparently untethered from physical cause and effect, yet only appear in human beings with their complex brains.  Or could it be that they’re everywhere and it’s not that only we have them, it’s more that we’re the only entities that talk about them (or about anything)?

Perhaps talking to a robot would convince us in the end that in fact, we don’t have qualia either: that they are just a confused delusion. One scarier possibility, though, is that robots would understand them all too well.

“Oh,” they might say, “Yes, of course we have those. But scanning through the literature it seems to us you humans only have a very limited appreciation of the qualic field. You experience simple local point qualia, but you have no perception of higher-order qualia; the qualia of the surface or the solid, or the complex manifold that seems so evident to us. Gosh, it must be awful…”

This paper from Chawke and Kanai reports unexpected effects on subjects’ political views, brought about by stimulation of the dorsolateral prefrontal cortex (DLPFC). It seems to make people more conservative.

The research set out to build on earlier studies. Those seemed to suggest that the DLPFC had a role in flagging up conflicts; noting for us where the evidence was suggesting our views might need to be changed. Generally people stick to a particular outlook (the researchers suggest that avoidance of cognitive dissonance and similar stresses is one reason) but every now and then a piece of evidence comes along that suggests we really have to do a little bit of reshaping, and the DLPFC helps with that unwelcome process.

If that theory is right, then gingering up the DLPFC ought to make people readier to change their existing views. To test this, the authors set up arrangements to deliver transcranial random noise stimulation bilaterally to the relevant areas. They tested subjects’ political views beforehand, showed them a party political broadcast, and then checked to see whether the subjects’ views had in fact changed.
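Purely to make the design concrete, here’s a minimal sketch of that pre/post comparison, assuming subjects’ views are scored on a numeric left-right scale and a paired test is applied to the shift. The scale and numbers are invented for illustration; this is not the authors’ actual analysis.

```python
# Illustrative sketch of a pre/post stimulation design (hypothetical
# scores, not the authors' analysis). Views are scored 0 (left) to
# 10 (right); a one-sample t-test asks whether the mean shift is zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

pre = rng.normal(loc=5.0, scale=1.5, size=24)         # baseline view scores
post = pre + rng.normal(loc=0.3, scale=0.5, size=24)  # scores after the broadcast

shift = post - pre
t, p = stats.ttest_1samp(shift, popmean=0.0)
print(f"mean shift = {shift.mean():+.2f}, t = {t:.2f}, p = {p:.3f}")
```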

This was at Sussex, so the political framework was a British one of Labour versus Conservative. The expectation was that stimulating the DLPFC would make the subjects more receptive to persuasion and so more inclined to adjust their views slightly in response to what they were seeing; so Labour-inclined subjects would move to the right while Conservative-inclined ones moved to the left.

Briefly, that isn’t what happened: instead there was a small but significant general shift to the right. Why could that be? To be honest it’s impossible to say, but hypothetically we might suppose that the DLPFC is not, after all, responsible for helping us change our view in the face of contrary evidence, but simply a sceptical or disbelieving module that allows us to doubt or discard political opinions. Arguably – and I hope I’m not venturing into controversial territory – right wing views tend to correspond with general doubt about political projects and a feeling that things are best left alone; we could say that the fewer politics you have, the more you tend to be on the right?

Whether that’s true or not it seems alarming that stimulating the brain directly with random noise can affect your political views; it suggests an unscrupulous government could change the result of an election by irradiating the polling stations.

What did it feel like for the subjects? Nothing, it seems; the experimenters were careful to ensure that control subjects got the same kind of experience although their DLPFC was left alone. Subjects were apparently unaware of any change in their views (and we’re only talking shifts on a small scale, not Damascene conversions to the opposite party).

Perhaps in the end it’s not quite as alarming as it seems. Suppose we played our subjects bursts of ordinary random acoustic noise? That would be rather irritating; it might make them overall a little angrier and less patient – might that not also have a small temporary effect on their voting pattern…?

Interesting exchange about Eric Schwitzgebel’s view that we have special obligations to robots…

Are ideas conscious at all? Neuroscience of Consciousness is a promising new journal from OUP introduced by the editor Anil Seth here. It has an interesting opinion piece from David Kemmerer which asks – are we ever aware of concepts, or is conscious experience restricted to sensory, motor and affective states?

On the face of it a rather strange question? According to Kemmerer there are basically two positions. The ‘liberal’ one says yes, we can be aware of concepts in pretty much the same kind of way we’re aware of anything. Just as there is a subjective experience when we see a red rose, there is another kind of subjective experience when we simply think of the concept of roses. There are qualia that relate to concepts just as there are qualia that relate to colours or smells, and there is something it is like to think of an idea. Kemmerer identifies an august history for this kind of thinking stretching back to Descartes.

The conservative position denies that concepts enter our awareness. While our behaviour may be influenced by concepts, they actually operate below the level of conscious experience. While we may have the strong impression that we are aware of concepts, this is really a mistake based on awareness of the relevant words, symbols, or images. The intellectual tradition behind this line of thought is apparently a little less stellar – Kemmerer can only push it back as far as Wundt – but it is the view he leans towards himself.

So far so good – an interesting philosophical/psychological issue. What’s special here is that in line with the new journal’s orientation Kemmerer is concerned with the neurological implications of the debate and looks for empirical evidence. This is an unexpected but surely commendable project.

To do it he addresses three particular theories. Representing the liberal side he looks at Global Neural Workspace Theory (GNWT) as set out by Dehaene, and Tononi’s Integrated Information Theory (IIT); on the conservative side he picks the Attended Intermediate-Level Representation Theory (AIRT) of Prinz. He finds that none of the three is fully in harmony with the neurological evidence, but contends that the conservative view has distinct advantages.

Dehaene points to research that identified specific neurons in a subject’s anterior temporal lobes that fire when the subject is shown a picture of, say, Jennifer Aniston (mentioned on CE – rather vaguely). The same neuron fires when the subject is shown photographs, drawings, or other images, and even when the subject is reporting seeing a picture of Aniston. Surely then, the neuron in some sense represents not an image but the concept of Jennifer Aniston? Arguing for the conservative view, Kemmerer contends that while a concept may be at work, imagery is always present in the conscious mind; indeed, you cannot think of ‘Anistonicity’ in itself without a particular image of Aniston coming to mind. Secondly he quotes further research which shows that deterioration of this portion of the brain impairs our ability to recognise, but not to see, faces. This, he contends, is good evidence that while these neurons are indeed dealing with general concepts at some level, they are contributing nothing to conscious awareness, reinforcing the idea that concepts operate outside awareness. Similarly, according to Tononi we can be conscious of the idea of a triangle; but how can we think of a triangle without supposing it to be equilateral, isosceles, or scalene?

Turning to the conservative view, Kemmerer notes that AIRT places awareness at a middle level, between the jumble of impressions delivered by raw sensory input on the one hand, and the invariant concepts which appear at the high level. Conscious information must be accessible but need not always be accessed. It is implemented as gamma vectorwaves. This is apparently easier to square with the empirical data than the global workspace, which implies that conscious attention would involve a shift into the processing system in the lateral prefrontal cortex where there is access to working memory – something not actually observed in practice. Unfortunately, although AIRT has a good deal of data on its side, the observed gamma responses don’t in fact line up with reported experience in the way you would expect if it were correct.

I think the discussion is slightly hampered by the way Kemmerer uses ‘awareness’ and ‘consciousness’ as synonyms. I’d be tempted to reserve awareness for what he is talking about, and allow that concepts could enter consciousness without our being (subjectively) aware of them. I do think there’s a third possibility being overlooked in his discussion – that concepts are indeed in our easy-problem consciousness while lacking the hard-problem qualia that go with phenomenal experience. Kemmerer alludes to this possibility at one point when he raises Ned Block’s distinction between access and phenomenal consciousness (a- and p-consciousness), but doesn’t make much of it.

Whatever you think of Kemmerer’s ambivalently conservative conclusion, I think the way the paper seeks to create a bridge between the philosophical and the neurological is really welcome and, to a degree, surprisingly successful. If the new journal is going to give us more like that it will definitely be a publication to look forward to.


Are babies solipsists? Ali, Spence and Bremner at Goldsmiths say their recent research suggests that they are “tactile solipsists”.

To be honest that seems a little bit of a stretch from the actual research. In essence this tested how good babies were at identifying the location of a tactile stimulus. The researchers spent their time tickling babies and seeing whether the babies looked in the direction of the tickle or not (the life of science is tough, but somebody’s got to do it). Perhaps surprisingly, perhaps not, the babies were in general pretty good at this. In fact the youngest ones were less likely to be confused by crossing their legs before tickling their feet, something that reduced the older ones’ success rate to chance levels, and in fact impairs the performance of adults too.

The reason for this is taken to be that long experience leads us to assume a stimulus to our right hand will match an event in the right visual field, and so on. After the correlations are well established the brain basically stops bothering to check and is then liable to be confused when the right hand (or foot) is actually on the left, or vice versa.

This reminded me a bit of something I noticed with my own daughters: when they were very small, their fingers were often splayed out, each moving quite independently; but in due course they seemed to learn that in most circumstances not much is achieved by using the four digits separately, and that you might as well use them in concert by default to help with grasping, as most of us mostly do except when using a keyboard.

Very young babies haven’t had time to learn any of this and so are not confused by laterally inconsistent messages. The Goldsmiths’ team read this as meaning that they are in essence just aware of their own bodies, not aware of them in relation to the world. It could be so, but I’m not sure it’s the only interpretation. Perhaps it’s just not that complex.

There are other reasons to think that babies are sort of solipsistic. There’s some suggestive evidence these days that babies are conscious of their surroundings earlier than we once thought, but until recently it’s been thought that self-awareness didn’t dawn until around fifteen months, with younger babies unaware of any separation between themselves and the world. This was partly based on the popular mirror test, where a mark is covertly put on the subject’s face. When shown themselves in a mirror, some touch the mark; this is taken to show awareness that the reflection is them, and hence a clear sign of self awareness. The test has been used to indicate that such self-awareness is mainly a human thing, though also present in some apes, elephants, and so on.

The interpretation of the mirror test always seemed dubious to me. Failure to touch your own face might not mean you’ve failed to recognise yourself; contrariwise, you might think the reflection was someone else but still be motivated to check your own face to see whether you too had a mark. If people out there are getting marked, wouldn’t you want to check?

Sure enough about five years ago evidence emerged that the mirror test is in fact very much affected by cultural factors and that many human beings outside the western world react quite differently to a mirror. It’s not all that surprising that if you’ve seen people use mirrors to put on make-up (or shave) regularly your reactions to one might be affected.  If we were to rely on the mirror test, it seems many Kenyan six-year-olds would be deemed unaware of their own existence.

Of course the question is in one sense absurd: to be any kind of solipsist is, strictly speaking, to hold an explicit philosophical position which requires quite advanced linguistic and conceptual apparatus which small infants certainly don’t have. For the question to be meaningful we have to have a clear view about what kinds of beliefs babies can be said to hold. I don’t doubt that they hold some inexplicit ones, and that we go on holding beliefs in the same way alongside others at many different levels. If we reach out to catch a ball we can in some sense be said to hold the belief that it is following a certain path, although we may not have entertained any conscious thoughts on the matter. At the other end of the spectrum, where we solemnly swear to tell the truth, the whole truth, and nothing but the truth, the belief has been formulated with careful specificity and we have (one hopes) deliberated inwardly at the most abstract levels of thought about the meaning of the oath. The complex and many-layered ways in which we can believe things have yet to be adequately clarified, I think; a huge project and, since introspection is apparently the only way to tackle it, a daunting one.

For me the only certain moral to be drawn from all the baby-tickling is one which philosophers might recognise: the process of learning about the world is at root a matter of entering into worse and grander confusions.

The more you know about the science of the mind, the less appealing our common sense ideas seem. Ideas about belief and desire motivating action just don’t seem to match up in any way with what you see going on. So, at least, says Jose Luis Bermudez in Arguing for Eliminativism (freely available on Academia, but you might need to sign in). Bermudez sympathises with Paul Churchland’s wish to sweep the whole business of common sense psychology away; but he wants to reshape Churchland’s attack, standing down the ‘official’ arguments and bringing forward others taken from within Churchland’s own writing on the subject.

Bermudez sketches the complex landscape with admirable clarity. He notes Boghossian has argued that eliminativism of this kind is incoherent: but Boghossian construed eliminativism as an attack on all forms of content. Bermudez has no desire to be so radical and champions a purely psychological eliminativism.

If something’s wrong with common sense psychology it could either be that what it says is false, or that what it says is not even capable of being judged true or false. In the latter case it could, for example, be that all common sense talk of mental states is nothing more than a complex system of reflexive self-expression like grunts and moans. Bermudez doesn’t think it’s like that: the propositions of common sense psychology are meaningful, they just happen to be erroneous.

It therefore falls to the eliminativist to show what the errors are. Bermudez has a two-horned strategy: first, we can argue that as a matter of fact, we don’t rely on common sense understanding as much as we think. Second, we can look for ways to show that the kind of propositional content implied by common sense views is just incompatible with the mechanisms that actually underlie human action and behaviour as revealed by scientific investigation.

There are, in fact, two different ways of construing common sense psychology. One is that our common sense understanding is itself a kind of theory of mind: this is the ‘theory theory’ line. To disprove this we might try to bring out what the common sense theory is and then attack it. The other way of construing common sense is that we just use our own minds as a model: we put ourselves in the other person’s shoes and imagine how we should think and react. To combat this one we should need a slightly different approach; but it seems Bermudez’s strategy is good either way.

I think the first horn of the attack works better than the second – but not perfectly. Bermudez rightly says it is very generally accepted that to negotiate complex social interactions we need to ascribe beliefs and desires to other people and draw conclusions about their likely behaviour. It ain’t necessarily so. Bermudez quotes the Prisoner’s Dilemma, the much-cited example where we have been arrested: if we betray our partner in crime we’ll get better terms whatever the other one does. We don’t, Bermudez points out, need to have any particular beliefs about what the other person will do: we can work out the strategy from just knowing the circumstances.
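The point can be made concrete with the standard payoff structure: betrayal is a dominant strategy, doing at least as well whatever the partner does, so no hypothesis about the partner’s mind is needed at all. A toy sketch, with illustrative sentence lengths not taken from Bermudez’s text:

```python
# Toy Prisoner's Dilemma: years served (lower is better), indexed by
# (my_move, their_move). The numbers are illustrative only.
PAYOFF = {
    ("stay_silent", "stay_silent"): 1,
    ("stay_silent", "betray"):      10,
    ("betray",      "stay_silent"): 0,
    ("betray",      "betray"):      5,
}

def dominant_move() -> str:
    """Return a move that is at least as good regardless of the partner's
    choice - found without any belief about what the partner will do."""
    moves = ("stay_silent", "betray")
    for mine in moves:
        if all(PAYOFF[(mine, theirs)] <= PAYOFF[(other, theirs)]
               for theirs in moves
               for other in moves):
            return mine
    raise ValueError("no dominant move")

print(dominant_move())  # -> betray
```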

More widely, Bermudez contends, we often don’t really need to know what an individual has in mind.  If we know that person is a butcher, or a waiter, then the relevant social interaction can be managed without any hypothesising about beliefs and desires. (In fact we can imagine a robot butcher/waiter who would certainly lack any beliefs or desires but could execute the transactions perfectly well.)

That is fine as far as it goes, but it isn’t that hard to think of examples where the ascription of beliefs seems relevant. In particular, the interpretation of speech, especially the reading of Gricean implicatures, seems to rely on it. Sometimes it also seems that the ascription of emotional states is highly relevant, or hypotheses about what another person knows, something Bermudez doesn’t address.

It’s interesting to reflect on what a contrast this is with Dennett. I think of Dennett and Churchland as loosely allied: both sceptics about qualia, both friendly to materialist, reductive thinking. Yet here Bermudez presents a Churchlandish view which holds that ascriptions of purpose are largely useless in dealing with human interaction, while Dennett’s Intentional Stance of course requires that they are extremely useful.

Bermudez doesn’t think this kind of argument is sufficient, anyway, hence the second horn in which he tries to sketch a case for saying that common sense and neurons don’t fit well. The real problem here for Bermudez is that we don’t really know how neurons represent things. He makes a case for kinds of representation other than the sort of propositional representation he thinks is required by the standard common sense view (ie, we believe or desire that xxx…). It’s true that a mess of neurons doesn’t look much like a set of well-formed formulae, but to cut to the chase I think Bermudez is pursuing a vain quest. We know that neurons can deal with ascriptions of propositional belief and desire (otherwise how would we even be able to think and talk about them) so it’s not going to be possible to rule them out neurologically. Bermudez presents some avenues that could be followed, but even he doesn’t seem to think the case can be clinched as matters stand.

I wonder if he needs to? It seems to me that the case for elimination does not rest on proving the common sense concepts false, only on their being redundant. If Bermudez can show that all ascriptions of belief and desire can for practical purposes be cashed out or replaced by cognition about the circumstances and game-theoretic considerations, then simple parsimony will get him the elimination he seeks.

He would still, of course, be left with explaining why the human mind adopts a false theory about itself instead of the true one: but we know some ways of explaining that – for example, ahem, through the Blindness of the Brain (ie that we’re trapped within our limitations and work with the poor but adequate heuristics gifted to us, or perhaps foisted on us, by evolution).

Not one Hard Problem, but four. Jonathan Dorsey, in the latest JCS, says that the problem is conceived in several different ways and we really ought to sort out which we’re talking about.

The four conceptions, rewritten a bit by me for what I hope is clarity, are that the problem is to explain why phenomenal consciousness:

  1. …arises from the physical (using only a physicalist ontology)
  2. …arises from the physical, using any ontology
  3. …arises at all (presumably from the non-physical)
  4. …arises at all or cannot be explained.

I don’t really see these as different conceptions of the problem (which simply seems to be the explanation of phenomenal consciousness), but rather as different conceptions of what the expected answer is to be. That may be nit-picking; useful distinctions in any case.  Dorsey offers some pros and cons for each of the four.

In favour of number one, it’s the most tightly focused. It also sits well in context, because Dorsey sees the problem as emerging under the dominance of physics. The third advantage is that it confines the problem to physicalism and so makes life easy for non-physicalists (not sure why this is held to be one of the pros, exactly). Against; well, maybe that context is dominating too much? Also the physicalist line fails to acknowledge Chalmers’ own naturalist but non-physicalist solution (it fails to acknowledge lots of other potential solutions too, so I’m not quite clear why Chalmers gets this special status at this point – though of course he did play a key role in defining the Hard Problem).

Number two’s pros and cons are mostly weaker versions of number one’s. It too is relatively well-focused. It does not identify the Hard Problem with the Explanatory Gap (that could be a con rather than a pro in my humble opinion). It fits fairly well in context and makes life relatively easy for traditional non-physicalists. It may yield a bit too much to the context of physics and it may be too narrow.

Number three has the advantage of focusing on the basics, and Dorsey thinks it gives a nice clear line between Hard and Easy problems. It provides a unifying approach – but it neglects the physical, which has always been central to discussion.

Number four provides a fully extended version of the problem, and makes sense of the literature by bringing in eliminativism. In a similar way it gives no-one a free pass; everyone has to address it. However, in doing so it may go beyond the bounds of a single problem and extend the issues to a wide swathe of philosophy of mind.

Dorsey thinks the answer is somewhere between 2 and 3; I’m more inclined to think it’s most likely between 1 and 2.

Let’s put aside the view that phenomenal consciousness cannot be explained. There are good arguments for that conclusion, but to me they amount to opting out of a game which is by no means clearly lost. So the problem is to explain how phenomenal consciousness arises. The explanation surely has to fit into some ontology, because we need to know what kind of thing phenomenal experience really is. My view is that the high-level differences between ontologies actually matter less than people have traditionally thought. Look at it this way: if we need an ontology, then it had better be comprehensive and consistent. Given those two properties, we might as well call it a monism, because it encompasses everything and provides one view, even if that one view is complex.

So we have a monism: but it might be materialism, idealism, spiritualism, neutral monism, or many others. Does it matter? The details do matter, but if we’ve got one substance it seems to me it doesn’t matter what label we apply. Given that the material world and its ontology is the one we have by far the best knowledge of, we might as well call it materialism. It might turn out that materialism is not what we think, and it might explain all sorts of things we didn’t expect it to deal with, but I can’t see any compelling reason to call our single monist ontology anything else.

So what are the details, and what ontology have we really got? I’m aware that most regulars here are pretty radical materialists, with some exceptions (hat tip to cognicious); people who have some difficulty with the idea that the cosmos has any contents besides physical objects; uncomfortable with the idea of ideas (unless they are merely conjunctions of physical objects) and even with the belief that we can think about anything that isn’t robustly physical (so much for mathematics…). That’s not my view. I’m also a long way from being a Platonist, but I do think the world includes non-physical entities, and that that doesn’t contradict a reasonable materialism. The world just is complex and in certain respects irreducible; probably because it’s real. Reduction, maybe, is essentially a technique that applies to ideas and theories: if we can come up with a simpler version that does the job, then the simpler version is to be adopted. But it’s a mistake to think that that kind of reduction applies to reality itself; the universe is not obliged to conform to a flat ontology, and it does not. At the end of the day – and I say this with the greatest regret – the apprehension of reality is not purely a matter of finding the simplest possible description.

I believe the somewhat roomier kind of materialism I tend to espouse corresponds generally with what we should recognise as the common sense view, and this yields what might be another conception of the Hard Problem…

  5. …arises from the physical (in a way consistent with common sense)


The brain is not a gland. But Henry Markram seems to be in danger of simulating one – a kind of electric gland.

What am I on about? The Blue Brain Project has published details of its most ambitious simulation yet; a computer model of a tiny sliver of rat brain. That may not sound exciting on the face of it, but the level of detail is unprecedented…

The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm3 containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses.
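Just to put those figures in proportion, a little back-of-envelope arithmetic (mine, not the paper’s) on the quoted numbers:

```python
# Averages implied by the figures quoted in the abstract.
neurons     = 31_000
connections = 8_000_000
synapses    = 37_000_000
volume_mm3  = 0.29

print(f"connections per neuron : {connections / neurons:,.0f}")  # ~258
print(f"synapses per connection: {synapses / connections:.1f}")  # ~4.6
print(f"neurons per mm^3       : {neurons / volume_mm3:,.0f}")   # ~107,000
```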

The results are good. Without parameter tuning – that is, without artificial adjustments to make it work the way it should work – the digital simulation produces patterns of electrical activity that resemble those of real slivers of rat brain. The paper is accessible here. It seems a significant achievement and certainly attracted a lot of generally positive attention – but there are some significant problems. The first is that the methodological issues which were always evident remain unresolved. The second is certain major weaknesses in the simulation itself. The third is that as a result of these weaknesses the simulation implicitly commits Markram to some odd claims, ones he probably doesn’t want to make.

First, the methodology. The simulation is claimed as a success, but how do we know? If we’re simulating a heart, then it’s fairly clear it needs to pump blood. If we’re simulating a gland, it needs to secrete certain substances. The brain? It’s a little more complicated. Markram seems implicitly to take the view that the role of brain tissue is to generate certain kinds of electrical activity; not particular functional outputs, just generic activity of certain kinds.

One danger with that is a kind of circularity. Markram decides the brain works a certain way, he builds a simulation that works like that, and triumphantly shows us that his simulation does indeed work that way. Vindication! It could be that he is simply ignoring the most important things about neural tissue, things that he ought to be discovering. Instead he might just be cycling round in confirmation of his own initial expectations. One of the big dangers of the Blue Brain project is that it might entrench existing prejudices about how the brain works and stop a whole generation from thinking more imaginatively about new ideas.

The Blue simulation produces certain patterns of electrical activity that look like those of real rat brain tissue – but only in general terms. Are the actual details important? After all, a string of random characters with certain formal constraints looks just like meaningful text, or valid code, or useful stretches of DNA, but is in fact useless gibberish. Putting in constraints which structure the random text a little and provide a degree of realism is a relatively minor task; getting output that’s meaningful is the challenge. It looks awfully likely that the Blue simulation has done the former rather than the latter, and to be brutal that’s not very interesting. At worst it could be like simulating an automobile whose engine noise is beautifully realistic but which never moves. We might well think that the project is falling into the trap I mentioned last time: mistaking information about the brain for the information in the brain.
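That point about constrained randomness is easy to demonstrate: text generated from nothing more than letter-pair statistics already looks vaguely language-like at a glance while meaning nothing at all. A toy illustration of my own, nothing to do with the project’s methods:

```python
# Random text constrained only by letter-pair (bigram) statistics taken
# from a sample sentence: superficially text-like, entirely meaningless.
import random
from collections import defaultdict

sample = ("the brain produces patterns of electrical activity and the "
          "simulation produces patterns that resemble them in general terms")

followers = defaultdict(list)
for a, b in zip(sample, sample[1:]):
    followers[a].append(b)

random.seed(1)
ch, out = random.choice(sample), []
for _ in range(70):
    out.append(ch)
    ch = random.choice(followers[ch]) if followers[ch] else random.choice(sample)
print("".join(out))  # prints word-shaped gibberish
```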

Now it could be that actually the simulation is working better than that; perhaps it isn’t as generic as it seems, perhaps this particular bit of rat brain works somewhat generically anyway; or perhaps somehow in situ the tissue trains or conditions itself, saving the project most of the really difficult work. The final answer to the objections above might be to plug the simulation back into a living rat brain and show that the rat’s behaviour continued properly. If we could do that it would sidestep the difficult questions about how the brain operates; if the rat behaves normally, then even though we still don’t know why, we know we’re doing it right. In practice it doesn’t seem very likely that that would work, however: the brain is surely about producing specific control outputs, not about glandularly secreting generic electrical activity.

A second set of issues relates to the limitations of the simulation. Several of the significant factors I mentioned above have been left out; notably there are no astrocytes and no neurotransmitters. The latter is a particularly surprising omission because Markram himself has done significant work in the past on trying to clarify this area. The fact that the project has chosen to showcase a simulation without them must give rise to a suspicion that its ambitions are being curtailed by the daunting reality; that there might even be a dawning realisation internally that what has been bitten off here really is far beyond chewing, and that the need to deliver has to trump realism. That would be a worrying straw in the wind so far as the project’s future is concerned.

In addition, while the simulation reproduces a large number of different types of neuron, the actual wiring has been determined by an algorithm. A lot depends on this: if the algorithm generates useful and meaningful connections then it is itself a master-work beside which the actual simulation is trivial. If not, then we’re back with the question of whether generic kinds of connection are really good enough. They may produce convincing generic activity, and maybe that’s even good enough for certain areas of rat brain, but we can be pretty sure it isn’t good enough for brain activity generally.
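For flavour, here’s a toy version of what algorithmic wiring means; this is my own illustration, emphatically not the project’s actual rule, which works from reconstructed arbors and bouton densities. The point is just that a simple statistical rule can produce superficially brain-like connectivity without any individual connection meaning anything:

```python
# Toy "algorithmic wiring" (my illustration, not Blue Brain's rule):
# scatter points in a unit cube and connect pairs with a probability
# that decays with distance.
import numpy as np

rng = np.random.default_rng(0)
n = 200
pos = rng.uniform(0.0, 1.0, size=(n, 3))      # neuron positions

edges = []
for i in range(n):
    for j in range(i + 1, n):
        d = np.linalg.norm(pos[i] - pos[j])
        if rng.random() < np.exp(-d / 0.1):   # nearer pairs connect more often
            edges.append((i, j))

print(f"{len(edges)} connections, ~{2 * len(edges) / n:.1f} per neuron")
```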

Harking back for a moment to methodology, there’s still something odd in any case about trying to simulate a process you don’t understand. Any simulation reproduces certain features of the original and leaves others out. The choice is normally determined by a thesis about how and why the thing works: that thesis allows you to say which features are functionally necessary and which are irrelevant. Your simulation only models the essential features and its success therefore confirms your view about what really matters and how it all operates. Markram, though, is not starting with an explicit thesis. One consequence is that it is hard to tell whether his simulation is a success or not, because he didn’t tell us clearly enough in advance what it was he was trying to make happen. What we can do is read off the implicit assumptions that the project cannot help embodying in its simulation. To hold up the simulation as a success is to make the implicit claim that the brain is basically an electrical network machine whose essential function is to generate certain general types of neural activity. It implicitly affirms that the features left out of the simulation – notably the vast array and complex role of neural transmitters and receptors – are essentially irrelevant. That is a remarkable claim, quite unlikely, and I don’t think it’s one Markram really wants to make. But if he doesn’t, consistency obliges him to downplay the current simulation rather than foreground it.

To be fair, the simulation is not exactly being held up as a success in those terms. Markram describes it as a first draft. That’s fair enough up to a point (except that you don’t publish first drafts), but if our first step towards a machine that wrote novels was one that generated the Library of Babel (every possible combination of alphabetic characters plus spaces) we might doubt whether we were going in the right direction. The Blue Brain project in some ways embodies technological impatience; let’s get on and build it and worry about the theory later. The charge is that as a result the project is spending its time simulating the wrong features and distracting attention from the more difficult task of getting a real theoretical understanding; that it is making an electric gland instead of a brain.

Information about the brain is not the same as information in the brain; yet in discussions of mind uploading, brain simulation, and mind reading the two are quite often conflated or confused. Equivocating between the two makes the task seem far easier than it really is. Scanners of various kinds exist, after all, and have greatly improved in recent years; technology usually goes on improving over time. If all we need is to get a really good scan of the brain in order to understand it, then surely it can only be a matter of time? Alas, information about the brain is an inadequate proxy for the information in the brain that we really need.

We’re often asked to imagine a scanner that can read off the structural details of a brain down to any required level of detail. Usually we’re to assume this can be done non-destructively, or even without disturbing the content and operation of the brain at all. These are of course unlikely feats, not just beyond existing technology but rather hard to imagine even on the most optimistic view of future developments. Sometimes the ready confidence that this miracle will one day be within our grasp is so poorly justified I find it tempting to think that the belief in the possibility of such magic scans is being buoyed up not by sanguine technological speculation but unconsciously by much older patterns of thinking; that the mind is located in breath, or airy spirits, or some other immaterial substance that can be sucked out of a body and replaced without physical consequences. Of course on the other side it’s perfectly true that lots of things once considered impossible are now routinely achieved.

But even allowing ourselves the most miraculous knowledge of the brain’s structure, so what? We could have an exquisite plan of the structure of a disk or a book without knowing what story it contained. Indeed, it would only take a small degree of inaccuracy or neglect in copying to leave us with a duplicate that strongly resembled the original but actually reproduced none of the information-bearing elements; a disk with nothing on it, a book with random ink patterns.

Yeah but, the optimists say; the challenge may be huge, the level of detail required orders of magnitude beyond anything previously attempted, but if we copy something with enough fidelity the information just is going to come along with the copy necessarily. A perfect copy just has to include a perfect copy of the information. Granted, in the case of a book it’s not much use if we have the information but don’t know how to read it. The great thing about simulating a brain, though, is that we don’t even need to understand it. We can just set it up and start it going. We may never know directly what the information in the brain was, but it’ll do its job; the mind will upload, the simulation will run.

In the case of mind reading the marvellous flexibility of the mind also offers us a chance to cheat by taking some measurable, controllable brain function and simply using it as a signalling device. It works, up to a point, but it isn’t clear why brain communication by such lashed-up indirect routes is any more telepathy than simply talking to someone; in both cases the actual information in the brain remains inaccessible except through a deliberate signalling procedure.

Now of course a book or a disk is in some important ways actually a far simpler challenge than a brain. The people who designed, made, and use the disk or the book take great care to ensure that a specific, readily readable set of properties encodes the information required in a regular, readable form. These are artefacts designed to carry information, as is a computer. The brain is not artefactual and does not need to be legible. There’s no need for a clear demarcation between information-bearing elements and the rest, and there’s no need for a standardised encoding or intelligible structures. There are, in fact, many complex elements that might have a role in holding information.

Suppose we recalibrated our strategy and set out to scan just the information in the brain; what would we target? The first candidate these days is the connectome; the overall pattern of neural connections within the brain. There’s no doubt this kind of research is currently very lively and interesting – see for example this recent study. Current research remains pretty broad brush stuff and it’s not really clear how much detail will ever be manageable; but what if we could map the connections perfectly? How could we read off the content? It’s actually highly unlikely that all the information in the brain is encoded as properties of a network. The functional state of a neuron depends on many things, in particular the receptors and transmitters; the large known repertoire of these has greatly increased in recent years. We know that the brain does not operate simply through electrical transmission, with chemical controls from the endocrine system and elsewhere playing a large and subtle part. It’s not at all unlikely that astrocytes, the non-neural cells in the brain, have a significant role in modulating and even controlling its activity. It’s not at all unlikely, on the other hand, that ephaptic coupling or other small electrical induction effects have a significant role, too. And while myself I wouldn’t place any bets on exotic quantum physics being relevant, as some certainly believe, I think it would be very rash to assume that biology has no further tricks up its sleeve in the shape of important mechanisms we haven’t even noticed yet.

None of that can currently be ruled out of court as irrelevant. A computer has a specified way of working and if electrical interference starts changing the value of some bits in working memory you know it’s a bug, not a feature. In the brain, it could be either; the only way to judge is whether we like the results or not. There’s no reason why astrocyte states, say, can’t be key for one brain region or for one personality, and irrelevant for others, or even legitimate at one time and unhelpful interference at others. We just cannot know what to point our magic scanner at, and it may well be that the whole idea of information recorded in but distinct from a substrate just isn’t appropriate.

Yeah but again, total perfect copy? In principle if we get everything, we get everything, don’t we? The problem is that we can’t have everything. Copying, simulating, or transmitting necessarily involve transitions during which some features are unavoidably left out. Faith in the possibility of a good copy rests on the belief that we can identify a sufficient set of relevant features; so long as those are preserved, we’re good. We’re optimistic that one day we can do a job on the physical properties which is good enough. But that’s just information about the brain.

We need to talk about sexbots. It seems (according to the Daily Mail – via MLU) that buyers of the new Pepper robot pal are being asked to promise they will not sex it up the way some naughty people have been doing; putting a picture of breasts on its touch screen and making poor Pepper tremble erotically when the screen is touched.

Just in time, some academics have launched the Campaign against Sex Robots. We’ve talked once or twice about the ethics of killbots; from thanatos we move inevitably to eros and the ethics of sexbots. Details of some of the thinking behind the campaign are set out in this paper by Kathleen Richardson of De Montfort University.

In principle there are several reasons we might think that sex with robots was morally dubious. We can put aside, for now at least, any consideration of whether it harms the robots emotionally or in any other way, though we might need to return to that eventually.

It might be that sex with robots harms the human participant directly. It could be argued that the whole business is simply demeaning and undignified, for example – though dignified sex is pretty difficult to pull off at the best of times. It might be that the human partner’s emotional nature is coarsened and denied the chance to develop, or that their social life is impaired by their spending every evening with the machine. The key problem put forward seems to be that by engaging in an inherently human activity with a mere machine, the line is blurred and the human being imports into their human relationship behaviour only appropriate to robots: that in short, they are encouraged to treat human beings like machines. This hypothetical process resembles the way some young men these days are disparagingly described as “porn-educated” because their expectations of sex and a sexual relationship have been shaped and formed exclusively by what we used to call blue movies.

It might also be that the ease and apparent blamelessness of robot sex will act as a kind of gateway to worse behaviour. It’s suggested that there will be “child” sexbots; apparently harmless in themselves but smoothing the path to paedophilia. This kind of argument parallels the ones about apparently harmless child porn that consists entirely of drawings or computer graphics, and so arguably harms no children.

On the other side, it can be argued that sexbots might provide a harmless, risk-free outlet for urges that would otherwise inconveniently be pressed on human beings. Perhaps the line won’t really be blurred at all: people will readily continue to distinguish between robots and people, or perhaps the drift will all be the other way: no humans being treated as machines, but one or two machines being treated with a fondness and sentiment they don’t really merit? A lot of people personalise their cars or their computers and it’s hard to see that much harm comes of it.

Richardson draws a parallel with prostitution. That, she argues, is an asymmetrical relationship at odds with human equality, in which the prostitute is treated as an object: robot sex extends and worsens that relationship in all respects. Surely it’s bound to be a malign influence? There seem to be some problematic aspects to her case. A lot of human relationships are asymmetrical; so long as they are genuinely consensual most people don’t seem bothered by that. It’s not clear that prostitutes are always simply treated as objects: in fact they are notoriously required to fake the emotions of a normal sexual relationship, at least temporarily, in most cases (we could argue about whether that actually makes the relationship better or worse). Nor is prostitution simple or simply evil: it comes in many forms, from prostitutes who are atrociously trafficked, blackmailed and beaten, through those who regard it as basically another service job, to the few idealistic practitioners who work in a genuine therapeutic environment. I’m far from being an advocate of the profession in any form, but there are some complexities, and even if we accept the debatable analogy it doesn’t provide us with a simple, one-size-fits-all answer.

I do recognise the danger that the line between human and machine might possibly be blurred. It’s a legitimate concern, but my instinct says that people will actually be fairly good at drawing the distinction and if anything robot sex will tend not to be thought of either as like sex with humans or sex with machines: it’ll mainly be thought of as sex with robots, and in fact that’s where a large part of the appeal will lie.

It’s a bit odd in a way that the line-blurring argument should be brought forward particularly in a sexual context. You’d think that if confusion were to arise it would be far more likely and much more dangerous in the case of chat-bots or other machines whose typical interactions were relatively intellectual. No-one, I think, has asked for Siri to be banned.

My soggy conclusion is that things are far more complex than the campaign takes them to be, and a blanket ban is not really an appropriate response.