Eliminating Common Sense

The more you know about the science of the mind, the less appealing our common sense ideas seem. Ideas about belief and desire motivating action just don’t seem to match up in any way with what you see going on. So, at least, says Jose Luis Bermudez in Arguing for Eliminativism (freely available on Academia, but you might need to sign in). Bermudez sympathises with Paul Churchland’s wish to sweep the whole business of common sense psychology away; but he wants to reshape Churchland’s attack, standing down the ‘official’ arguments and bringing forward others taken from within Churchland’s own writing on the subject.

Bermudez sketches the complex landscape with admirable clarity. He notes that Boghossian has argued that eliminativism of this kind is incoherent; but Boghossian construed eliminativism as an attack on all forms of content. Bermudez has no desire to be so radical and champions a purely psychological eliminativism.

If something’s wrong with common sense psychology it could either be that what it says is false, or that what it says is not even capable of being judged true or false. In the latter case it could, for example, be that all common sense talk of mental states is nothing more than a complex system of reflexive self-expression like grunts and moans. Bermudez doesn’t think it’s like that: the propositions of common sense psychology are meaningful, they just happen to be erroneous.

It therefore falls to the eliminativist to show what the errors are. Bermudez has a two-horned strategy: first, we can argue that, as a matter of fact, we don’t rely on common sense understanding as much as we think. Second, we can look for ways to show that the kind of propositional content implied by common sense views is just incompatible with the mechanisms that actually underlie human action and behaviour, as revealed by scientific investigation.

There are, in fact, two different ways of construing common sense psychology. One is that our common sense understanding is itself a kind of theory of mind: this is the ‘theory theory’ line. To disprove this we might try to bring out what the common sense theory is and then attack it. The other way of construing common sense is that we just use our own minds as a model: we put ourselves in the other person’s shoes and imagine how we should think and react. To combat this one we should need a slightly different approach; but it seems Bermudez’s strategy is good either way.

I think the first horn of the attack works better than the second – but not perfectly. Bermudez rightly says it is very generally accepted that to negotiate complex social interactions we need to ascribe beliefs and desires to other people and draw conclusions about their likely behaviour. It ain’t necessarily so. Bermudez cites the Prisoner’s Dilemma, the familiar example where we have been arrested: if we betray our partner in crime we’ll get better terms whatever the other one does. We don’t, Bermudez points out, need to have any particular beliefs about what the other person will do: we can work out the strategy from just knowing the circumstances.
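A minimal sketch of that point, with illustrative payoff numbers of my own devising (years in prison, so lower is better): betrayal comes out ahead whichever choice we assume the partner makes, so the strategy falls out of the payoff structure alone, with no beliefs about the partner’s mind required.

```python
# Prisoner's Dilemma with illustrative payoffs (years in prison; lower is
# better). The exact numbers are my own example values, not from Bermudez.
# payoff[my_choice][partner_choice] = my sentence
payoff = {
    "stay_silent": {"stay_silent": 1, "betray": 10},
    "betray":      {"stay_silent": 0, "betray": 5},
}

def dominant_choice(payoff):
    """Return a choice that is at least as good whatever the partner does,
    i.e. one we can settle on without modelling the partner's beliefs."""
    choices = list(payoff)
    for mine in choices:
        if all(
            payoff[mine][theirs] <= payoff[other][theirs]
            for other in choices
            for theirs in choices
        ):
            return mine
    return None

print(dominant_choice(payoff))  # -> 'betray', whatever the partner chooses
```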

More widely, Bermudez contends, we often don’t really need to know what an individual has in mind.  If we know that person is a butcher, or a waiter, then the relevant social interaction can be managed without any hypothesising about beliefs and desires. (In fact we can imagine a robot butcher/waiter who would certainly lack any beliefs or desires but could execute the transactions perfectly well.)
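To make the robot waiter concrete, here is a toy sketch (entirely my own hypothetical example, not Bermudez’s): the whole transaction is driven by the observable circumstances, with no representation anywhere of what the customer believes or desires.

```python
# A hypothetical rule-driven 'robot waiter': the interaction is managed from
# the circumstances alone, with no model of the customer's mental states.
def robot_waiter(event):
    script = {
        "customer_seated":      "bring menu",
        "menu_closed":          "take order",
        "plates_empty":         "clear table and offer dessert",
        "customer_signals":     "bring bill",
        "bill_paid":            "say goodbye",
    }
    # Unrecognised circumstances get a safe default, not a belief-based guess.
    return script.get(event, "wait")

for event in ["customer_seated", "menu_closed", "plates_empty",
              "customer_signals", "bill_paid"]:
    print(event, "->", robot_waiter(event))
```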

That is fine as far as it goes, but it isn’t that hard to think of examples where the ascription of beliefs seems relevant. In particular, the interpretation of speech, especially the reading of Gricean implicatures, seems to rely on it. Sometimes the ascription of emotional states also seems highly relevant, as do hypotheses about what another person knows, something Bermudez doesn’t address.

It’s interesting to reflect on what a contrast this is with Dennett. I think of Dennett and Churchland as loosely allied: both sceptics about qualia, both friendly to materialist, reductive thinking. Yet here Bermudez presents a Churchlandish view which holds that ascriptions of purpose are largely useless in dealing with human interaction, while Dennett’s Intentional Stance of course requires that they are extremely useful.

Bermudez doesn’t think this kind of argument is sufficient anyway, hence the second horn, in which he tries to sketch a case for saying that common sense and neurons don’t fit well. The real problem here for Bermudez is that we don’t really know how neurons represent things. He makes a case for kinds of representation other than the sort of propositional representation he thinks is required by the standard common sense view (ie, we believe or desire that xxx…). It’s true that a mess of neurons doesn’t look much like a set of well-formed formulae, but to cut to the chase I think Bermudez is pursuing a vain quest. We know that neurons can deal with ascriptions of propositional belief and desire (otherwise how would we even be able to think and talk about them?), so it’s not going to be possible to rule them out neurologically. Bermudez presents some avenues that could be followed, but even he doesn’t seem to think the case can be clinched as matters stand.

I wonder if he needs to? It seems to me that the case for elimination does not rest on proving the common sense concepts false, only on their being redundant. If Bermudez can show that all ascriptions of belief and desire can for practical purposes be cashed out or replaced by cognition about the circumstances and game-theoretic considerations, then simple parsimony will get him the elimination he seeks.

He would still, of course, be left with explaining why the human mind adopts a false theory about itself instead of the true one: but we know some ways of explaining that – for example, ahem, through the Blindness of the Brain (ie that we’re trapped within our limitations and work with the poor but adequate heuristics gifted to us, or perhaps foisted on us, by evolution).

Four kinds of Hard

Not one Hard Problem, but four. Jonathan Dorsey, in the latest JCS, says that the problem is conceived in several different ways and we really ought to sort out which we’re talking about.

The four conceptions, rewritten a bit by me for what I hope is clarity, are that the problem is to explain why phenomenal consciousness:

  1. …arises from the physical (using only a physicalist ontology)
  2. …arises from the physical, using any ontology
  3. …arises at all (presumably from the non-physical)
  4. …arises at all or cannot be explained.

I don’t really see these as different conceptions of the problem (which simply seems to be the explanation of phenomenal consciousness), but rather as different conceptions of what the expected answer is to be. That may be nit-picking; they are useful distinctions in any case. Dorsey offers some pros and cons for each of the four.

In favour of number one, it’s the most tightly focused. It also sits well in context, because Dorsey sees the problem as emerging under the dominance of physics. The third advantage is that it confines the problem to physicalism and so makes life easy for non-physicalists (not sure why this is held to be one of the pros, exactly). Against: well, maybe that context is dominating too much? Also, the physicalist line fails to acknowledge Chalmers’ own naturalist but non-physicalist solution (it fails to acknowledge lots of other potential solutions too, so I’m not quite clear why Chalmers gets this special status at this point – though of course he did play a key role in defining the Hard Problem).

Number two’s pros and cons are mostly weaker versions of number one’s. It too is relatively well-focused. It does not identify the Hard Problem with the Explanatory Gap (that could be a con rather than a pro in my humble opinion). It fits fairly well in context and makes life relatively easy for traditional non-physicalists. It may yield a bit too much to the context of physics and it may be too narrow.

Number three has the advantage of focusing on the basics, and Dorsey thinks it gives a nice clear line between Hard and Easy problems. It provides a unifying approach – but it neglects the physical, which has always been central to discussion.

Number four provides a fully extended version of the problem, and makes sense of the literature by bringing in eliminativism. In a similar way it gives no-one a free pass; everyone has to address it. However, in doing so it may go beyond the bounds of a single problem and extend the issues to a wide swathe of philosophy of mind.

Dorsey thinks the answer is somewhere between 2 and 3; I’m more inclined to think it’s most likely between 1 and 2.

Let’s put aside the view that phenomenal consciousness cannot be explained. There are good arguments for that conclusion, but to me they amount to opting out of a game which is by no means clearly lost. So the problem is to explain how phenomenal consciousness arises. The explanation surely has to fit into some ontology, because we need to know what kind of thing phenomenal experience really is. My view is that the high-level differences between ontologies actually matter less than people have traditionally thought. Look at it this way: if we need an ontology, then it had better be comprehensive and consistent. Given those two properties, we might as well call it a monism, because it encompasses everything and provides one view, even if that one view is complex.

So we have a monism: but it might be materialism, idealism, spiritualism, neutral monism, or many others. Does it matter? The details do matter, but if we’ve got one substance it seems to me it doesn’t matter what label we apply. Given that the material world and its ontology is the one we have by far the best knowledge of, we might as well call it materialism. It might turn out that materialism is not what we think, and it might explain all sorts of things we didn’t expect it to deal with, but I can’t see any compelling reason to call our single monist ontology anything else.

So what are the details, and what ontology have we really got? I’m aware that most regulars here are pretty radical materialists, with some exceptions (hat tip to cognicious): people who have some difficulty with the idea that the cosmos has any contents besides physical objects; who are uncomfortable with the idea of ideas (unless they are merely conjunctions of physical objects) and even with the belief that we can think about anything that isn’t robustly physical (so much for mathematics…). That’s not my view. I’m also a long way from being a Platonist, but I do think the world includes non-physical entities, and that that doesn’t contradict a reasonable materialism. The world just is complex and in certain respects irreducible; probably because it’s real. Reduction, maybe, is essentially a technique that applies to ideas and theories: if we can come up with a simpler version that does the job, then the simpler version is to be adopted. But it’s a mistake to think that that kind of reduction applies to reality itself; the universe is not obliged to conform to a flat ontology, and it does not. At the end of the day – and I say this with the greatest regret – the apprehension of reality is not purely a matter of finding the simplest possible description.

I believe the somewhat roomier kind of materialism I tend to espouse corresponds generally with what we should recognise as the common sense view, and this yields what might be another conception of the Hard Problem…

  5. …arises from the physical (in a way consistent with common sense)


Markram’s Electric Gland

The brain is not a gland. But Henry Markram seems to be in danger of simulating one – a kind of electric gland.

What am I on about? The Blue Brain Project has published details of its most ambitious simulation yet; a computer model of a tiny sliver of rat brain. That may not sound exciting on the face of it, but the level of detail is unprecedented…

The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm3 containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses.

The results are good. Without parameter tuning – that is, without artificial adjustments to make it work the way it should work – the digital simulation produces patterns of electrical activity that resemble those of real slivers of rat brain. The paper is accessible here. It seems a significant achievement and certainly attracted a lot of generally positive attention – but there are some significant problems. The first is that the methodological issues which were always evident remain unresolved. The second is that the simulation itself has certain major weaknesses. The third is that, as a result of these weaknesses, the simulation implicitly commits Markram to some odd claims, ones he probably doesn’t want to make.

First, the methodology. The simulation is claimed as a success, but how do we know? If we’re simulating a heart, then it’s fairly clear it needs to pump blood. If we’re simulating a gland, it needs to secrete certain substances. The brain? It’s a little more complicated. Markram seems implicitly to take the view that the role of brain tissue is to generate certain kinds of electrical activity; not particular functional outputs, just generic activity of certain kinds.

One danger with that is a kind of circularity. Markram decides the brain works a certain way, he builds a simulation that works like that, and triumphantly shows us that his simulation does indeed work that way. Vindication! It could be that he is simply ignoring the most important things about neural tissue, things that he ought to be discovering. Instead he might just be cycling round in confirmation of his own initial expectations. One of the big dangers of the Blue Brain project is that it might entrench existing prejudices about how the brain works and stop a whole generation from thinking more imaginatively about new ideas.

The Blue simulation produces certain patterns of electrical activity that look like those of real rat brain tissue – but only in general terms. Are the actual details important? After all, a string of random characters with certain formal constraints looks just like meaningful text, or valid code, or useful stretches of DNA, but is in fact useless gibberish. Putting in constraints which structure the random text a little and provide a degree of realism is a relatively minor task; getting output that’s meaningful is the challenge. It looks awfully likely that the Blue simulation has done the former rather than the latter, and to be brutal that’s not very interesting. At worst it could be like simulating an automobile whose engine noise is beautifully realistic but which never moves. We might well think that the project is falling into the trap I mentioned last time: mistaking information about the brain for the information in the brain.
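To make the random-text analogy concrete, here is a small sketch of my own (nothing to do with the Blue Brain code): sampling characters under simple statistical constraints, rough English letter frequencies and plausible word lengths, yields output that looks superficially language-like while carrying no meaning at all, just as generically realistic electrical activity may carry no content.

```python
import random

# My own toy illustration: generate 'text' that obeys surface statistical
# constraints (rough English letter frequencies, plausible word lengths)
# but means nothing -- realism at the level of form, with no content.
letters = "etaoinshrdlcumwfgypbvkjxqz"
weights = [12.7, 9.1, 8.2, 7.5, 7.0, 6.7, 6.3, 6.1, 6.0, 4.3, 4.0, 2.8,
           2.8, 2.4, 2.4, 2.2, 2.0, 2.0, 1.9, 1.5, 1.0, 0.8, 0.2, 0.2,
           0.1, 0.1]

def pseudo_text(n_words, rng=random.Random(0)):
    words = []
    for _ in range(n_words):
        length = rng.choice([2, 3, 3, 4, 4, 5, 6, 7])  # rough word-length mix
        words.append("".join(rng.choices(letters, weights=weights, k=length)))
    return " ".join(words)

print(pseudo_text(12))  # looks vaguely like language; says nothing
```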

Now it could be that actually the simulation is working better than that; perhaps it isn’t as generic as it seems, perhaps this particular bit of rat brain works somewhat generically anyway; or perhaps somehow in situ the tissue trains or conditions itself, saving the project most of the really difficult work. The final answer to the objections above might be if the simulation could be plugged back into a living rat brain and the rat behaviour shown to continue properly. If we could do that it would sidestep the difficult questions about how the brain operates; if the rat behaves normally, then even though we still don’t know why, we know we’re doing it right. In practice it doesn’t seem very likely that that would work, however: the brain is surely about producing specific control outputs, not about glandularly secreting generic electrical activity.

A second set of issues relates to the limitations of the simulation. Several of the significant factors I mentioned above have been left out; notably there are no astrocytes and no neurotransmitters. The latter is a particularly surprising omission because Markram himself has done significant work in the past on trying to clarify this area. The fact that the project has chosen to showcase a simulation without them must give rise to a suspicion that its ambitions are being curtailed by the daunting reality; that there might even be a dawning realisation internally that what has been bitten off here really is far beyond chewing, and that the need to deliver has to trump realism. That would be a worrying straw in the wind so far as the project’s future is concerned.

In addition, while the simulation reproduces a large number of different types of neuron, the actual wiring has been determined by an algorithm. A lot depends on this: if the algorithm generates useful and meaningful connections then it is itself a master-work beside which the actual simulation is trivial. If not, then we’re back with the question of whether generic kinds of connection are really good enough. They may produce convincing generic activity, and maybe that’s even good enough for certain areas of rat brain, but we can be pretty sure it isn’t good enough for brain activity generally.

Harking back for a moment to methodology, there’s still something odd in any case about trying to simulate a process you don’t understand. Any simulation reproduces certain features of the original and leaves others out. The choice is normally determined by a thesis about how and why the thing works: that thesis allows you to say which features are functionally necessary and which are irrelevant. Your simulation only models the essential features and its success therefore confirms your view about what really matters and how it all operates. Markram, though, is not starting with an explicit thesis. One consequence is that it is hard to tell whether his simulation is a success or not, because he didn’t tell us clearly enough in advance what it was he was trying to make happen. What we can do is read off the implicit assumptions that the project cannot help embodying in its simulation. To hold up the simulation as a success is to make the implicit claim that the brain is basically an electrical network machine whose essential function is to generate certain general types of neural activity. It implicitly affirms that the features left out of the simulation – notably the vast array and complex role of neural transmitters and receptors – are essentially irrelevant. That is a remarkable claim, quite unlikely, and I don’t think it’s one Markram really wants to make. But if he doesn’t, consistency obliges him to downplay the current simulation rather than foreground it.

To be fair, the simulation is not exactly being held up as a success in those terms. Markram describes it as a first draft. That’s fair enough up to a point (except that you don’t publish first drafts), but if our first step towards a machine that wrote novels was one that generated the Library of Babel (every possible combination of alphabetic characters plus spaces) we might doubt whether we were going in the right direction. The Blue Brain project in some ways embodies technological impatience; let’s get on and build it and worry about the theory later. The charge is that as a result the project is spending its time simulating the wrong features and distracting attention from the more difficult task of getting a real theoretical understanding; that it is making an electric gland instead of a brain.

Equivocating uploads

Information about the brain is not the same as information in the brain; yet in discussions of mind uploading, brain simulation, and mind reading the two are quite often conflated or confused. Equivocating between the two makes the task seem far easier than it really is. Scanners of various kinds exist, after all, and have greatly improved in recent years; technology usually goes on improving over time. If all we need is to get a really good scan of the brain in order to understand it, then surely it can only be a matter of time? Alas, information about the brain is an inadequate proxy for the information in the brain that we really need.

We’re often asked to imagine a scanner that can read off the structural details of a brain down to any required level of detail. Usually we’re to assume this can be done non-destructively, or even without disturbing the content and operation of the brain at all. These are of course unlikely feats, not just beyond existing technology but rather hard to imagine even on the most optimistic view of future developments. Sometimes the ready confidence that this miracle will one day be within our grasp is so poorly justified I find it tempting to think that the belief in the possibility of such magic scans is being buoyed up not by sanguine technological speculation but unconsciously by much older patterns of thinking; that the mind is located in breath, or airy spirits, or some other immaterial substance that can be sucked out of a body and replaced without physical consequences. Of course on the other side it’s perfectly true that lots of things once considered impossible are now routinely achieved.

But even allowing ourselves the most miraculous knowledge of the brain’s structure, so what? We could have an exquisite plan of the structure of a disk or a book without knowing what story it contained. Indeed, it would only take a small degree of inaccuracy or neglect in copying to leave us with a duplicate that strongly resembled the original but actually reproduced none of the information bearing elements; a disk with nothing on it, a book with random ink patterns.

Yeah but, the optimists say: the challenge may be huge, the level of detail required orders of magnitude beyond anything previously attempted, but if we copy something with enough fidelity the information just is going to come along with the copy, necessarily. A perfect copy just has to include a perfect copy of the information. Granted, in the case of a book it’s not much use if we have the information but don’t know how to read it. The great thing about simulating a brain, though, is that we don’t even need to understand it. We can just set it up and start it going. We may never know directly what the information in the brain was, but it’ll do its job; the mind will upload, the simulation will run.

In the case of mind reading the marvellous flexibility of the mind also offers us a chance to cheat by taking some measurable, controllable brain function and simply using it as a signalling device. It works, up to a point, but it isn’t clear why brain communication by such lashed-up indirect routes is any more telepathy than simply talking to someone; in both cases the actual information in the brain remains inaccessible except through a deliberate signalling procedure.

Now of course a book or a disk is in some important ways actually a far simpler challenge than a brain. The people who designed, made, and use the disk or the book take great care to ensure that a specific, readily readable set of properties encodes the information required in a regular, readable form. These are artefacts designed to carry information, as is a computer. The brain is not artefactual and does not need to be legible. There’s no need for a clear demarcation between information-bearing elements and the rest, and there’s no need for a standardised encoding or intelligible structures. There are, in fact, many complex elements that might have a role in holding information.

Suppose we recalibrated our strategy and set out to scan just the information in the brain; what would we target? The first candidate these days is the connectome: the overall pattern of neural connections within the brain. There’s no doubt this kind of research is currently very lively and interesting – see for example this recent study. Current research remains pretty broad brush stuff and it’s not really clear how much detail will ever be manageable; but what if we could map the connections perfectly? How could we read off the content? It’s actually highly unlikely that all the information in the brain is encoded as properties of a network. The functional state of a neuron depends on many things, in particular its receptors and transmitters; the known repertoire of these has grown greatly in recent years. We know that the brain does not operate simply through electrical transmission, with chemical controls from the endocrine system and elsewhere playing a large and subtle part. It’s not at all unlikely that astrocytes, the non-neural cells in the brain, have a significant role in modulating and even controlling its activity. It’s not at all unlikely, either, that ephaptic coupling or other small electrical induction effects have a significant role too. And while I myself wouldn’t place any bets on exotic quantum physics being relevant, as some certainly believe, I think it would be very rash to assume that biology has no further tricks up its sleeve in the shape of important mechanisms we haven’t even noticed yet.

None of that can currently be ruled out of court as irrelevant. A computer has a specified way of working and if electrical interference starts changing the value of some bits in working memory you know it’s a bug, not a feature. In the brain, it could be either; the only way to judge is whether we like the results or not. There’s no reason why astrocyte states, say, can’t be key for one brain region or for one personality, and irrelevant for others, or even legitimate at one time and unhelpful interference at others. We just cannot know what to point our magic scanner at, and it may well be that the whole idea of information recorded in but distinct from a substrate just isn’t appropriate.

Yeah but again, total perfect copy? In principle if we get everything, we get everything, don’t we? The problem is that we can’t have everything. Copying, simulating, or transmitting necessarily involve transitions during which some features are unavoidably left out. Faith in the possibility of a good copy rests on the belief that we can identify a sufficient set of relevant features; so long as those are preserved, we’re good. We’re optimistic that one day we can do a job on the physical properties which is good enough. But that’s just information about the brain.