It has always seemed remarkable to me that the ingestion of a single substance can have such complex effects on behaviour. Alcohol does it, in part, by promoting the effects of inhibitory neurotransmitters and suppressing the effects of excitatory ones, while also whacking up a nice surge of dopamine – or so I understand. This messes up co-ordination and can lead to loss of memory and indeed consciousness; but the most interesting effect, and the one for which alcohol is sometimes valued, is that it causes disinhibition. This allows us to relax and have a good time, but may also encourage risky behaviour and lead us to say things – in vino veritas – we wouldn’t normally let out.

Curiously, though, there’s no solid scientific support for the idea that alcohol causes disinhibition, and good evidence that alcohol is blamed for disinhibition it did not cause. One of the slippery things about the demon drink is that its effects are strongly conditioned by the drinker’s expectations. It has been shown that people who merely thought they were consuming alcohol were disinhibited just as if they had been; while other studies have shown that risky sexual behaviour can actually be deterred in those who have had a few drinks, if the circumstances are right.

One piece of research suggests that meta-consciousness is impaired by alcohol; drink makes us less aware of our own mental state. But a popular and well-supported theory these days is that drinking causes ‘alcohol myopia’. On this theory, when we’re drunk we lose track of long-term and remote factors, while our immediate surroundings seem more salient. One useful aspect of the theory is that it explains the variability of the effects of alcohol. It may make remoter worries recede and so leave us feeling unjustifiably happy with ourselves; but if reminders of our problems are close while the long term looks more hopeful, the effect may be depressing. Apparently subjects who had the words ‘AIDS KILLS’ actually written on their arm were less likely to indulge in risky sex (I suspect it might kind of dent your chances of getting a casual partner, actually).

A merry and appropriately disinhibited Christmas to you!

I was interested to see reports here and there the other day that scientists had discovered the seat of the will in the anterior midcingulate cortex.

That’s not precisely the case, of course; there’s an article here which describes the research. The scientists in question had an unusual opportunity to use electrodes in the brains of two patients; although they did indeed operate in the anterior midcingulate cortex they believe they were stimulating the brain’s salience network, which is quite widely distributed. The effect was apparently to create feelings of needing to persist against challenging circumstances; the researchers themselves call it “the will to persevere”. The patients were fully conscious and able to describe their feelings, but alas no tests were carried out to see whether they were in fact more persistent when stimulated.

The correct interpretation of the results seems difficult to me. As I understand it, the theory of the salience network holds that brain activity is controlled by neural networks which stretch across several regions of the brain. The default mode network, or DMN, is the one that operates when we’re not focused on anything in particular, perhaps daydreaming. It has been suggested that loss of this function is what distinguishes people who have “locked-in” syndrome from those who are in a “persistent vegetative state” – if you lose your DMN you’re not really there any more, in other words.

When we concentrate on a task, another network takes over – the central executive network, or CEN. The role of the salience network, if I’ve got this right, is primarily to act as arbitrator between the two. It spots something that deserves attention – something salient, indeed – and switches control from DMN to CEN. That’s fine, but it doesn’t seem to have much to do with persistence; it’s actually about changing the object of attention, not sticking with it. But perhaps strong or continuing stimulation of the salience network has that kind of effect. The salience network says “you need to look at this”, so perhaps when it operates emphatically we think “yes, I’m going to look the hell out of that alright; I’m going to look at that intensively; when I’ve finished looking at that, by golly it’s going to stay looked at”.
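Purely to pin down what that sort of arbiter amounts to, here’s a toy sketch in Python (entirely my own invention, from the threshold to the names; real brain networks are of course nothing like this tidy):

```python
# A toy arbiter: the salience network scores incoming events and switches
# control between the default mode network (DMN) and the central executive
# network (CEN). Everything here is invented for illustration.

SALIENCE_THRESHOLD = 0.7   # invented cut-off for "this deserves attention"

def run(events):
    mode = "DMN"                       # start off daydreaming
    for salience in events:            # events arrive pre-scored, 0 to 1
        if mode == "DMN" and salience > SALIENCE_THRESHOLD:
            mode = "CEN"               # something salient: focus on the task
        elif mode == "CEN" and salience <= SALIENCE_THRESHOLD:
            mode = "DMN"               # nothing pressing: back to daydreams
        print(f"salience={salience:.2f} -> {mode}")

run([0.1, 0.2, 0.9, 0.8, 0.3])
```

Notice that nothing in the loop implements sticking with anything; switching is all the arbiter does, which is exactly why persistence is a puzzle on this story.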

More plausibly it might all be to do with physiological effects; besides directing attention the salience network has a role in gearing up our “fight or flight” state, and it might just be that in that state we feel ready for a challenge (in which case a readiness to persist comes into it, but surely isn’t the whole point); that would be a William James-style emotion, originally a matter of the gut more than the mind.

Anyway, this really has nothing to do with an organ of the will. That is an interesting notion, though, isn’t it? My own assumption is that the will emerges from the operation of general cognition and that there couldn’t be a separate will module. If such a module determined the actions to be willed, it would surely have to encompass almost the whole of cognition, and so be far more than just a module; if it merely willed the actions selected for it by the rest of the brain it wouldn’t amount to much at all.

People do, of course, often hypothesise that there might be a special function for assigning value, or flagging up those things we ought to pursue as desirable. To me, though, that seems a bit different; the will, properly understood, is not a matter of basic motivation, but a faculty which might over-ride that motivation, either by operating at a meta level or simply by acting as a restraining and countervailing force.

Would that even be a distinct faculty of its own? Some would probably question whether talking about the will is a useful approach at all, rather than a relic from outmoded ideas about the soul controlling the body through acts of will. I must admit I find it hard to think of any subject that can’t be adequately discussed without mentioning the will. Even free will doesn’t really lose anything if we talk about free action. So is the will even worth persisting with? I can feel my DMN kicking in…

Why is it that we can’t solve the mind/body problem? Well, if we define that problem as being about the capacity of mental events to cause physical events, there is a project in progress at Durham University that says it’s about the lack of good philosophy, or more specifically, that our problem stems from inadequate ontology (ontology being the branch of metaphysics that deals with what there is). The project has been running for a few years, and now a substantial volume of corrective metaphysics has been published, with a thoughtful and full review here. (Hat-tip to Micha for drawing this to my attention).

The book is not a manifesto, because the authors do not share a single view: it’s more like an exhibition. What’s on offer here is a variety of philosophical views of mental causation, all more sophisticated than the ones we typically encounter in discussions of artificial intelligence. The review gives a good sense of what’s on offer, and depending on your inclinations you may see it as a collection of esoteric and unhelpful complications, or as a magic sweetshop whose every jar holds the way to a new world of possibility and enlightenment. I think the average reader will see it as a bookshop with many volumes of dull sermons and outdated almanacs which might nevertheless just be holding, somewhere in the dusty back room, that one book that makes sense of everything.

Is it likely that better philosophy will deliver the answer? There is a horrid vision in my mind in which the neurologists and/or the AI people produce a model which seems to work; we’re able to build machines which talk to us in the same way as human beings, and we can explain exactly how the brain does its stuff and how it is analogous to these machines: but the philosophers go on doubting whether this machine consciousness is real consciousness. No-one else cares.

Moreover, there are some identifiable weaknesses in philosophy which are clearly on display in the current volume. First is the fissiparous nature of philosophical discussion. I said this was an exhibition rather than a manifesto; but wouldn’t a manifesto have been better? It’s not achievable because every philosopher has his or her own view and the longer discussion goes on the more possible views there are. In one way it’s a pleasing, exploratory quality, but if you want a solution it’s a grave handicap. Second, and related, there’s no objective test beyond logical consistency. Experiments will never prove any of these views wrong.

Third, although philosophy is too difficult, it’s also too easy. Someone somewhere once said that Aristotle’s problem was that he was too clever. For him, it was always possible to come up with a theory which justified the outlook of a complacent middle-aged Ancient Greek: theories which have turned out, so far as we can test them, to be almost invariably false or incomplete. Less clever pre-Socratic philosophers like Heraclitus or Parmenides were forced to adopt weirder points of view which in the long run might actually tell us more.

The current volume, I think, might contain many cases of clever people making cases that broadly justify common sense while the real truth may be out there in the wild regions beyond. E.J. Lowe, one of the editors of the book and a champion of the project, has a view about the powers of the will. He characterises powers as active or passive on the one hand, and causal or non-causal on the other. This leaves open the possibility of a power which is both active and non-causal. He wants the human will to have these properties, so that it is spontaneous and yet not inefficacious, without the agent per se thereby causing any sort of effect (if I’ve got that right). The spontaneity is supposed to resemble the spontaneity of the decay of a specific radium atom, and hence be consistent with physics, while the efficacy is of a kind that does not require an interruption of normal physics while still being an important corollary of our status as rational beings.

This is clever stuff, no doubt, but it looks like an attempt – you may consider it a doomed attempt – to explain away the problems with our common sense views rather than correcting them. We’re being offered loopholes which may – debatably – let us carry on thinking what we’ve always thought, rather than offering us a new perspective. It leaves me feeling the way I might feel after a clever lawyer has explained why his client should not be convicted; yeah, but did he do it? There’s no ‘aha!’ moment on offer. In her review Sara Bernstein suggests that sceptics may be inclined to turn back to reductionism, and I must confess that is indeed my inclination.

Still, I can’t shake my hope that somewhere in that dusty old bookshop the truth is to be found, and so I can’t help wishing the project well.

One of the main objections to panpsychism, the belief that mind, or at any rate experience, is everywhere, is that it doesn’t help. The point of a theory is to take an issue that was mysterious to begin with and make it clear; but panpsychism seems to leave us with just as much explaining to do as before. In fact, things may be worse. To begin with we only needed to explain the occurrence of consciousness in the human brain; once we embrace panpsychism we have to explain its occurrence everywhere and account for the difference between the consciousness in a lump of turf and the consciousness in our heads. The only way that could be an attractive option would be if there were really good and convincing answers to these problems ready to hand.

Creditably, Patrick Lewtas recognises this and, rolling up his sleeves, has undertaken the job of explaining first, how ‘basic bottom-level experience’ makes sense, and second, how it builds up to the high-level kind of experience going on in the brain. A first paper, tackling the first question, “What Is It Like To Be a Quark?”, appeared in the JCS recently. (Alas, there doesn’t seem to be an online version available to non-subscribers.)

Lewtas adopts an idiosyncratic style of argument, loading himself with Constraints like a philosophical Houdini.

  1. Panpsychism should attribute to basic physical objects all, but only, those types of experiences needed to explain higher-level (including, but not limited to, human) consciousness.
  2. Panpsychism must eschew explanatory gaps.
  3. Panpsychism must eschew property emergence.
  4. Maximum possible complexity of experience varies with complexity of physical structure.
  5. Basic physical objects have maximally simple structures. They lack parts, internal structure, and internal processes.
  6. Where possible and appropriate, panpsychism should posit strictly-basic conscious properties similar, in their higher-order features, to strictly-basic physical properties.
  7. Basic objects with strictly-basic experiences have them constantly and continuously.
  8. Each basic experience-type, through its strictly-basic instances, characterizes (at least some) basic physical objects.

Of course it is these very constraints that end up getting him where he wanted to be all along.  To justify each of them and give the implications would amount to reproducing the paper; I’ll try to summarise in a freer style here.

Lewtas wants his basic experience to sit with basic physical entities and he wants it to be recognisably the same kind of thing as the higher level experience. This parsimony is designed to avoid any need for emergence or other difficulties; if we end up going down that sort of road, Lewtas feels we will fall back into the position where our theory is too complex to be attractive in competition with more mainstream ideas. Without seeming to be strongly wedded to them, he chooses to focus on quarks as his basic unit, but he does not say much about the particular quirks of quarks; he seems to have chosen them because they may have the property he’s really after; that of having no parts.

The thing with no parts! Aiee! This ancient concept has stalked philosophy for thousands of years under different names: the atom, a substance, a monad (the first two names long since passed on to other, blameless ideas). I hesitate to say that there’s something fundamentally problematic with the concept itself (it seems to work fine in geometry); but in philosophy it seems hard to handle without generating a splendid effusion of florid metaphysics.  The idea of yoking it together with the metaphysically tricky modern concept of quarks makes my hair stand on end. But perhaps Lewtas can keep the monster in check: he wants it, presumably, because he wants to build on bedrock, with no question of basic experience being capable of further analysis.

Some theorists, Lewtas notes, have argued that the basic level experience of particles must be incomprehensible to us; as incomprehensible as the experiences of bats according to Nagel, or indeed even worse. Lewtas thinks things can, and indeed must, be far simpler and more transparent than that. The experience of a quark, he suggests, might just be like the simple experience of red; red detached from any object or pattern, with no limits or overtones or significance; just red.  Human beings can most probably never achieve such simplicity in its pure form, but we can move in that direction and we can get our heads around ‘what it’s like’ without undue difficulty.

Now the partless thing begins to give trouble; a thing which has no parts cannot change, because change would imply some kind of reorganisation or substitution; you can’t rearrange something that has no parts and if you substitute anything you have to substitute another whole thing for the first one, which is not change but replacement. At best the thing’s external relations can change. If one of the properties of the quark is an experience of red, therefore, that’s how it stays. It carries on being an experience of red, and it does not respond in any way to its environment or anything outside itself. I think we can be forgiven if we already start to worry a little about how this is going to work with a perceptual system, but that is for the later paper.

Lewtas is aware that he could be in for an awfully large catalogue of experiences here if every possible basic experience has to be assigned to a quark. His hope is that some experiences will turn out to be composites, so that we’ll be able to make do with a more restricted set: and he gives the example of orange experience reducing to red and yellow experience. A bad example: orange experience is just orange experience, actually, and the fact that orange paint can be made by mixing red and yellow paint is just a quirk of the human visual system, not an essential quality of orange light or orange phenomenology. A bad example doesn’t mean the thesis is false; but a comprehensive reduction of phenomenology to a manageable set of basic elements is a pretty non-trivial requirement. I think in fact Lewtas might eventually be forced to accept that he has to deal with an infinite set of possible basic experiences. Think of the experience of unity, duality, trinity…  That’s debatable, perhaps.
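For what it’s worth, the paint-mixing quirk can be made vivid with a little arithmetic. The sketch below is mine, with crude Gaussian stand-ins for the three cone sensitivities and only approximate peak wavelengths; it solves for a mixture of ‘red’ and ‘yellow’ light that produces almost the same cone-response triple as pure spectral orange:

```python
import numpy as np

# Crude Gaussian stand-ins for the three human cone sensitivities.
# Peak wavelengths are approximate; the shapes are invented for illustration.
def cone(peak_nm, width_nm=40.0):
    return lambda wl: np.exp(-0.5 * ((wl - peak_nm) / width_nm) ** 2)

L, M, S = cone(565.0), cone(540.0), cone(440.0)   # long, medium, short cones

def response(wl):
    """Cone response triple to monochromatic light of wavelength wl (nm)."""
    return np.array([L(wl), M(wl), S(wl)])

orange = response(600.0)                          # pure spectral orange
primaries = np.column_stack([response(630.0),     # 'red' light
                             response(580.0)])    # 'yellow' light

# Best-fit mixture: how much red and yellow light to match the orange triple
weights, *_ = np.linalg.lstsq(primaries, orange, rcond=None)
print("red/yellow weights:", weights)             # both come out positive
print("mixture response  :", primaries @ weights)
print("spectral orange   :", orange)              # nearly identical triples
```

The two lights are physically quite different; the eye’s three numbers just can’t tell them apart. Nothing in that says orange experience is ‘made of’ red and yellow experience.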

At any rate Lewtas is prepared to some extent. He accepts explicitly that the number of basic experiences will be greater than the number of different kinds of basic quark, so it follows that basic physical units must be able to accommodate more than one basic experience at the same time. So your quark is having a simple, constant experience of red and at the same time it’s having a simple, constant experience of yellow.

That has got to be a hard idea for Lewtas to sell. It seems to risk the simple transparency which was one of his main goals, because it is surely impossible to imagine what having two or more completely pure but completely separate experiences at the same time is like.  However, if that bullet is bitten, then I see no particular reason why Lewtas shouldn’t allow his quarks to have all possible experiences simultaneously (my idea, not his).

By the time we get to this point I find myself wondering what the quarks, or the basic physical units, are contributing to the theory. It’s not altogether clear how the experiences are anchored to the quarks and since all experiences are going to have to be readily available everywhere, I wonder whether it wouldn’t simplify matters to just say that all experiences are accessible to all matter. That might be one of the many issues cleared up in the paper to follow where perhaps, with one cat-like leap, Lewtas will escape the problems which seem to me to be on the point of having him cornered…

It may be a little off our usual beat, but Graham Hancock’s piece in the New Statesman (longer version here) raised some interesting thoughts.

It’s the ‘war on drugs’ that is Hancock’s real target, but he says it’s really a war on consciousness…

This extraordinary imposition on adult cognitive liberty is justified by the idea that our brain activity, disturbed by drugs, will adversely impact our behaviour towards others. Yet anyone who pauses to think seriously for even a moment must realize that we already have adequate laws that govern adverse behaviour towards others and that the real purpose of the “war on drugs” must therefore be to bear down on consciousness itself.

That doesn’t seem quite right. It’s true there are weak arguments for laws against drugs – some of them based on bad consequences that arguably arise from the laws rather than the drugs – but there are reasonable ones, too. The bedrock point is that taking a lot of psychoactive drugs is probably bad for you. Hancock and many others might say that we should have the right to harm ourselves, or at any rate to risk harm, if we don’t hurt anyone else, but that principle is not, I think, generally accepted by most legislatures. Moreover there are special arguments in the case of drugs. One is that they are addictive.  ‘Addiction’ is used pretty widely these days to cover any kind of dependency or habit, but I believe the original meaning was that an addict became physically dependent, unable to stop taking the drug without serious, possibly even fatal consequences, while at the same time ever larger doses were needed to achieve relief. That is clearly not a good way to go, and it’s a case where leaving people to make up their own minds doesn’t really work because of the dependency. Secondly, drugs may affect the user’s judgement and for that reason too should arguably be a case where people are not left to judge risks for themselves.

Now, as a matter of fact neither of those arguments may apply in the case of some restricted drugs – they may not be addictive in that strongest sense and they may not remove the user’s ability to judge risks; and the risks themselves may in some cases have been overstated; but we don’t have to assume that governments are simply set on denying us the benefits of enhanced consciousness.

What would those benefits be? They might be knowledge, enhanced cognition, or simple pleasure. We could also reverse the argument that Hancock attributes to our rulers and suggest that drugs make people less likely to harm others. People who are lying around admiring the wallpaper in a confused manner are not out committing crimes, after all.

Enhanced cognition might work up to a point in some cases: certain drugs really do help dispel fatigue or anxiety and sharpen concentration in the short term. But the really interesting possibility for us is that drug use might allow different kinds of cognition and knowledge. I think the evidence on fathoming the secrets of the Universe is rather discouraging. Drugs may often make people feel as if they understand everything, but it never seems to be possible to write the insights down. Where they are written down, they turn out to be like the secret of the cosmos apprehended by Oliver Wendell Holmes under the influence of ether; later he discovered his notes read “A strong smell of turpentine prevails throughout”.

But perhaps we’re not dealing with that kind of knowledge. Perhaps instead drugs can offer us the kind of ineffable knowledge we get from qualia? Mary the colour scientist is said to know something new once she has seen red for the first time; not something about colour that could have been written down, or ex hypothesi she would have known it already, but what it is like. Perhaps drugs allow us to experience more qualia, or even super qualia; to know what things are like whose existence we should not otherwise have suspected. Terry Pratchett introduced the word ‘knurd’ to describe the state of being below zero on the drunkenness scale; needing a drink to bring you up to the normal mental condition: perhaps philosophical zombies, who experience no qualia, are simply in a similar state with respect to certain drugs.

That seems plausible enough, but it raises the implication that normal qualia are also in fact delusions (not an uncongenial implication for some). For drugs there is a wider problem of non-veridicality. We know that drugs can cause hallucinations, and as mentioned above, can impart feelings of understanding without the substance. What if it’s all like that? What if drug experiences are systematically false? What if we don’t really have any new knowledge or any new experiences on drugs, we just feel as if we have? For that matter, what about pleasure? What if drugs give us a false memory of having had a good time – or what if they make us think we’re having a good time now although in reality we’re not enjoying it at all? You may well feel that last one is impossible, but it doesn’t pay to underestimate the tricksiness of the mind.

Well, many people would say that the feeling of having had a good time is itself worth having, even if the factual element of the feeling is false. So perhaps in the same way we can say that even if qualia are delusions, they’re valuable ones. Perhaps the exalted places to which drugs take us are imaginary; but just because somewhere doesn’t exist doesn’t mean it isn’t worth going there. For myself I generally prefer the truth (no argument for that, just a preference) and I think I generally get it most reliably when sober and undrugged.

Hancock, at any rate, has another kind of knowledge in mind. He suggests that the brain may turn out to be, not a generator of consciousness but rather a receiver, tuned in to the psychic waves where, I assume, our spiritual existence is sustained. Drugs, he proposes, might possibly allow us to twiddle the knobs on our mental apparatus so as to receive messages from others: different kinds of being or perhaps people in other dimensions. I’m not quite clear where he draws the line between receiving and existing, or whether we should take ourselves to be in the brain or in the spiritual ether. If we’re in the brain, then the signals we’re receiving are a form of outside control which doesn’t sound very nice: but if we’re really in the ether then when the signals from other beings are being received by the brain we ought to lose consciousness, or at least lose control of our bodies, not just pick up a message. No doubt Hancock could clarify, given a chance, but it looks as if there’s a bit of work to be done.

But let’s not worry too much, because the idea of the brain as a mere receiver seems highly dubious. We know now that very detailed neuronal activity is associated with very specific mental content, and as time goes on that association becomes ever sharper. This means that if the brain is a receiver, the signals it receives must be capable of influencing a vast collection of close-packed neurons in incredibly exquisite detail. It’s only fair to remember that a neurologist as distinguished as Sir John Eccles, not all that long ago, thought this was exactly what was happening; but to me it seems incompatible with ordinary physics. We can manipulate small areas of the brain from outside with suitable equipment, but dictating its operation at this level of detail, and without any evident physical intervention, seems too much. Hancock says the possibility has not been disproved, and for certain standards of proof that’s right; but I reckon by the provisional standards that normally apply for science we can rule out the receiver thesis.

Speaking of manipulating the brain from outside, it seems inevitable to me that within a few years we shall have external electronic means of simulating the effects of certain drugs, or at least of deranging normal mental operation in a diverting and agreeable way. You’ll be able to slip on a helmet, flick a switch, and mess with your mind in all sorts of ways. They might call it e-drugs or something similar, but you’ll no longer need to buy dodgy chemicals at an exorbitant mark-up. What price the war on drugs or on consciousness then?

Eric Thomson has some interesting thoughts about aliens, cats, and consciousness, contained in four posts on Brains. In part 1 he proposes a race of aliens who are philosophical zombies – that is, they lack subjective experience; their brains work fine but there is, as it were, no-one home; no qualia. (I have to say that it’s not fully clear to me that cats actually have full consciousness in the human sense rather than being fairly single-minded robots seeking food, warmth, etc: but let’s not quibble.)

These aliens come to earth and decide, for their own good reasons, to set about quietly studying the brains of cats. They are pretty good at this; they are able to analyse the workings of the cat brain comprehensively, and in doing so they discover that feline brains have a special mode of operation. The aliens name this ‘smonsciousness’; most of the cats’ brain activity is unsmonscious, but certain important functions take place smonsciously. The aliens are able to work out the operation of smonsciousness in some detail and get an understanding of it which is comprehensive except for the minor detail that, unbeknownst to them, they don’t really get what it actually is. How could they? They have nothing similar themselves.

Part 2 points out that this is a bit of a problem for materialists. The aliens have put together what seems to be a complete materialist account. In one way this seems like a crowning achievement. But it leaves out what it is like for the cats to have experiences. Thomson acknowledges that materialists can claim that this is simply a matter of two different perspectives on the same phenomenon, rather in the way that temperature and mean kinetic energy turn out to be the same thing viewed in different ways. It’s a conceptual difference, not an ontological one. But it would be unprecedented to understand something from the lower level without being aware of its higher-level version; and in any case, ex hypothesi the aliens know all there is to know about every level of operation of the cat brain. Isn’t this an embarrassment for monist materialism?

Part 3 proposes that all might be well if we could wall off phenomenal experience by arguing it needs a special, separate set of concepts which you can only acquire by having phenomenal experience. No amount of playing around with neural concepts will ever get you to phenomenal concepts (rather in the way that Colin McGinn suggests that consciousness is subject to cognitive closure). Neuroscience suffers from semantic poverty in respect of phenomenal experience.

Thomson rightly suggests that this isn’t a very comfortable place for the materialist case to rest in and that it would be better for materialists if the semantic poverty idea could be done away with.

So in Part 4 he suggests a cunning manoeuvre. He has actually set the bar for the aliens fairly low: they don’t need to have a full grasp of phenomenal experience, they merely need to become aware that in smonsciousness something extra is going on (it could be argued that the bar is too low here, but I think it’s OK: if we demand much more we come perilously close to demanding that the aliens actually have phenomenal experience, which is surely too much?).

Now again for their own good reasons the aliens build a simulation of their own consciousness with an additional module which adds smonsciousness when powered up; they call this entity Keanu. Keanu functions fine in alien mode, and when they switch on his smonsciousness he tells them something is going on which is totally inexpressible, other than by ‘Whoa…’ Now that may not seem satisfactory, but we can refine it further by supposing the aliens have a powerful brain in which they can run the entire simulation. Keanu then is an internal thought experiment: an alien works it through mentally and ends up exclaiming “Dudes! We totally missed phenomenal experience!” Hence the aliens become aware that something is missing and semantic poverty is vanquished.

What do we make of that? There are a lot of angles here; myself I’m suspicious of that interesting concept smonsciousness. It allows the aliens to understand consciousness perfectly in functional terms without knowing what it’s really about. Is this plausible? It means that when they consider the following:

O my Luve's like a red, red rose, 
That's newly sprung in June: 
O my Luve's like the melodie, 
That's sweetly play'd in tune.

…they know exactly and in full what caused Burns to use these words about red things and melodies as he did, but they have no idea at all what the essential significance of the words to Burns was. This is odd. If pressed sufficiently I think we are forced to go one of two ways. If we cling to the idea that smonsciousness gives a full explanation, we are obliged to say that the actual experience of consciousness contributes nothing, and is in fact an epiphenomenon. That’s completely tenable, but I would not want to go that way. Alternatively, we have to concede that the aliens don’t really have a full understanding, and that smonsciousness isn’t really viable. In short, we are more or less back with the dilemma as before.

What about Keanu? Let’s consider the internalised version. It’s important to remember that running Keanu in your brain does not confer phenomenal experience; it makes you aware that another person of a certain kind would claim phenomenal experience. So what the alien says is not “Dudes! We totally missed phenomenal experience!”, but “Dudes! These cats have a really wild kind of delusion that makes them totally convinced that they’re having some kind of special experience – but they can’t describe it or what’s special about it in any way! What a crazy illusion!”. Now Thomson has set the bar low, but surely becoming aware that creatures with smonsciousness claim to be conscious is not quite high enough?

I must admit I generally think of the argument over human-style artificial intelligence as a two-sided fight. There are those who think it’s possible, and those who think it isn’t. But a chat I had recently made it clear that there are really more differences than that, in particular among those who believe we shall one day have robot chums.

The key difference I have in mind is over whether there really is consciousness at all, or at least whether there’s anything special about it.

One school of thought says that there is indeed a special faculty of consciousness; but eventually machines of sufficient complexity will have it too. We may not yet have all the details of how this thing works; maybe we even need some special new secret. But one thing is perfectly clear: there’s no magic involved, nothing outside the normal physical account, and in fact nothing that isn’t ultimately computable. One day we will be able to build into a machine all the relevant qualities of a human mind. Perhaps we’ll do it by producing an actual direct simulation of a human brain, perhaps not; the point is, when we switch on that ultimate robot, it will have feelings and qualia, it will have moral rights and duties, and it will have the same perception of itself as a real existing personality that we do.

The second school of thought agrees that we shall be able to produce a robot that looks and behaves exactly like a human being. But that robot will not have qualia or feelings or free will or any of the rest of it, because in reality human beings don’t have them either! That’s one of the truths about ourselves that has been helpfully revealed by the progress of AI: all those things are delusions and always have been. Our feelings that we have a real self, that there is phenomenal experience, and that somehow we have a special kind of agency, those things are just complicated by-products of the way we’re organised.

Of course we could split the sceptics too, between those who think that consciousness requires a special spiritual explanation, or is inexplicable altogether, and those who think it is a natural feature of the world, just not computational or not explained by any properties of the physical world known so far. There is clearly some scope for discussion between the former kind of believer and the latter kind of sceptic because they both think that consciousness is a real and interesting feature of the world that needs more explanation, though they differ in their assumptions about how that will turn out. Although there’s less scope for discussion, there’s also some common ground between the two other groups because both basically believe that the only kind of discussion worth having about consciousness is one that clarifies the reasons it should be taken off the table (whether because it’s too much for the human mind or because it isn’t worthy of intelligent consideration).

Clearly it’s possible to take different views on particular issues. Dennett, for example, thinks qualia are just nonsense and the best possible thing would be to stop even talking about them, while he thinks the ability of human beings to deal with the Frame Problem is a real and interesting ability that robots don’t have but could and will once it’s clarified sufficiently.

I find it interesting to speculate about which camp Alan Turing would have joined; did he think that humans had a special capacity which computers could one day share, or did he think that the vaunted consciousness of humans turned out to be nothing more than the mechanical computational abilities of his machines? It’s not altogether clear, but I suspect he was of the latter school of thought. He notes that the specialness of human beings has never really been proved; and a disbelief in the specialness of consciousness might help explain his caginess about answering the question “can machines think?”. He preferred to put the question aside: perhaps that was because he would have preferred to answer: yes, machines can think, but only so long as you realise that ‘thinking’ is not the magic nonsense you take it to be…

There have been a number of reports of a speech or presentation by Dr Jaideep Pandit to the Annual Congress of the Association of Anaesthetists of Great Britain and Ireland (AAGBI) in Dublin, in which he apparently proposed the existence of a ‘third’ state of consciousness.

Dr Pandit led the fifth annual survey (NAP5) by the AAGBI on the subject of accidental awareness, in which patients who have been anaesthetised for an operation become conscious during the procedure but are unable to do anything because some of the drugs they are normally given induce paralysis. Anaesthetists used to believe that even though patients couldn’t move, it was generally possible to tell when they were becoming aware, through signs of distress such as a raised heart rate and sweating, allowing the anaesthetist to give further doses to correct the problem: but it has become clear that this is not always the case, and that significant numbers of patients do go through severe pain, or are at least aware of what is going on, while on the operating table.

There are extremely difficult methodological issues involved in assessing how often this happens. Besides drugs to paralyse patients and remove the pain, they are typically given drugs which erase any memory. Of those who do become conscious during surgery, only a minority report it afterwards. On the other side, many patients may dream or confabulate an awareness that never really existed. Nevertheless the AAGBI’s annual surveys are a valuable and creditable effort to provide some data.

The point about the ‘third state’ does not arise so much from the survey, however, as from separate research carried out by Dr Ian F Russell: I haven’t been able to track down the particular paper which I think was referred to in Dublin, but there’s a useful general article which covers the same experiments here. The technique used in this case was to keep one forearm unparalysed and then ask the patient at repeated intervals to squeeze with their hand. Russell found that 44% of patients would respond even though they were otherwise apparently unconscious and had no recollection of the episode.

How aware were these patients? They were able to hear and understand the surgeon and respond appropriately, yet we can, with some caveats, presume that they were not in pain. If they had been in agony, these patients, unlike most, could have waved their hand and would surely have done so – unless the state they were in somehow allowed them to respond to requests but not to initiate any action; or possibly left them too confused even to formulate the simple plan of attracting attention, but not so confused that they could not respond to a simple request. At any rate, it is this curiously ambivalent condition which has prompted Dr Pandit’s suggestion of a third state of consciousness.

We might observe that it probably isn’t really the third, but perhaps the seventh (or the seventeenth).  We already have dreaming, hypnosis, sleepwalking, and meditation to take into account, and very likely we could come up with more. The normal mental state of human beings appears to be complex and layered, and capable of breaking down or degrading in ways you wouldn’t have expected.

There’s a moral there for artificial general intelligence, I suppose: if you really want to model human thought, you’re going to need it to operate on a number of levels at once, rather than having a single ‘theatre’ or workspace where mental content does its stuff. It may of course be that we don’t especially want to model specifically human styles of thought. Perhaps a single-level cognitive structure will prove perfectly good for most practical tasks; perhaps it will even be an improvement. That raises the interesting possibility of sitting down to have a conversation with a robot friend who seems to be pretty much like us, but has no unconscious. What would people with no unconscious be like? An outside possibility is that they would be fine at ‘easy problem’ stuff, but lack phenomenal depth; perhaps they would be the philosophical zombies whose lack of qualia has featured in so many papers.

More generally, the existence of a state in which we remember nothing and experience no pain, but comply helpfully with requests made to us in normal language, suggests either that the self as normally envisaged is not as much in executive control as it thinks – a conclusion which would be supported by various other bits of evidence – or that it is less unitary than we generally suppose. Indeed, we might see this as at least persuasive evidence for the existence of a slightly unexpected degree of modularisation. The ‘third state’ and hypnotic states suggest that there might be a kind of ‘implementer’ function in the brain. This implementer is responsible for actually getting actions carried out; normally it reacts so fast to the least indication from our self-conscious interior monologue that we don’t even notice it; to us it just seems that what we wanted happened. But in certain unusual states the talky, remembering, self-conscious bit of our mind can be turned off while the butler-like implementer is still around and, in the absence of the usual boss, quite happy to respond to instructions from outside (and quite capable of using all the language-processing functions of the brain to understand them).
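Purely to make that speculation concrete, here’s a toy sketch (mine, not Pandit’s or Russell’s; every name and detail is invented): a butler-like implementer that complies with requests whether or not the narrative self is online, with memories laid down only when the self is.

```python
# A toy model (my own, not anyone's actual theory): a butler-like
# "implementer" that carries out simple requests, and a narrative self
# that normally issues such requests and records what happens.

class Implementer:
    def comply(self, request: str) -> str:
        # Always available: parses and executes a simple instruction,
        # using the brain's language machinery but no "boss".
        return f"(hand squeezes) in response to {request!r}"

class NarrativeSelf:
    def __init__(self):
        self.online = True     # the talky, remembering, self-conscious bit
        self.memory = []

    def note(self, event: str):
        if self.online:        # no self online, no memory laid down
            self.memory.append(event)

implementer = Implementer()
narrator = NarrativeSelf()
narrator.online = False        # anaesthesia: the usual boss is switched off

action = implementer.comply("squeeze my hand twice")
narrator.note(action)

print(action)                  # the request is carried out...
print(narrator.memory)         # ...but the recollection afterwards: []
```

On this picture the hand squeezes to order, and afterwards the memory list is empty – which is roughly the state Russell’s patients seem to have been in.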

In its way I find that almost as unsettling as the AAGBI’s conclusions about the surprisingly frequent ineffectiveness of anaesthesia.

Can machines be moral agents? There’s a bold attempt to clarify the position in the IJMC, by Parthemore and Whitby (draft version). In essence they conclude that the factors that go to make a moral agent are the same whether the entity in question is biological or an artefact. On the face of it that seems like common sense – although the opposite conclusion would be more interesting; and there is at least one goodish reason to think that there’s a special problem for artefacts.

But let’s start at the beginning. Parthemore and Whitby propose three building blocks which any successful candidate for moral agency must have: the concept of self, the concept of morality, and the concept of concept.

By the concept of self, they mean not simply a basic awareness of oneself as an object in the landscape, but a self-reflective awareness along the lines of Damasio’s autobiographical self. Their rationale is not set out very explicitly, but I take it they think that without such a sense of self, your acts would seem to be no different from other events in the world; just things you notice happening, and that therefore you couldn’t be seen as responsible for them. It’s a reasonable claim, but I think it properly requires more discussion. A sceptic might make the case that there’s a difference between feeling you’re not responsible and actually not being responsible; that someone could have the cheerily floaty feeling of being a mere observer while actually carrying out acts that were premeditated in some level of the mind. There’s scope then for an extensive argument about whether conscious awareness of making a decision is absolutely necessary to moral ownership of the decision. We’ll save that for another day, and of course Parthemore and Whitby couldn’t hope to deal with every implication in a single paper. I’m happy to take their views on the self as reasonable working assumptions.

The second building block is the concept of morality. Broadly, Parthemore and Whitby want their moral agent to understand and engage with the moral realm; it’s not enough, they say, to memorise a set of simple rules by rote. Now of course many people have seemed to believe that learning and obeying a set of rules such as the Ten Commandments was necessary or even sufficient for being a morally good person. What’s going on here? I think this becomes clearer when we move on to the concept of concept, which roughly means that the agent must understand what they’re doing. Parthemore and Whitby do not mean that a moral agent must have a philosophical grasp of the nature of concepts, only that they must be able to deal with them in practical situations, generalising appropriately where necessary. So I think what they’re getting at is not that moral rules are invalid in themselves, merely that a moral agent has to have sufficient conceptual dexterity to apply them properly. A rule about not coveting your neighbour’s goods may be fine, but you need to be able to recognise neighbours and goods and instances of their being coveted, without needing, say, a list of the people to be considered neighbours.

So far, so good, but we seem to be missing one item normally considered fundamental: a capacity for free action. I can be fully self-aware, understand and appreciate that stealing is wrong, and be aware that by picking up a chocolate bar without paying I am in fact stealing; but it won’t generally be considered a crime if I have a gun to my head, or have been credibly told that if I don’t steal the chocolate several innocent people will be massacred. More fundamentally, I won’t be held responsible if it turns out that because of internal factors I have no ability to choose otherwise: yet the story told by physics seems to suggest I never really have the ability to choose otherwise. I can’t have real responsibility without real free will (or can I?).

Parthemore and Whitby don’t really acknowledge this issue directly; but they do go on to add what is effectively a fourth requirement for moral agency: you have to be able to act against your own interests. It may be that this is in effect their answer; instead of a magic-seeming capacity for free will they call for a remarkable but fully natural ability to act unselfishly. They refer to this as akrasia, consciously doing the worse thing: normally, I think, the term refers to the inability to do what you can see is the morally right thing; here Parthemore and Whitby seem to reinterpret it as the ability to do morally good things which run counter to your own selfish interests.

There are a couple of issues with that principle. First, it’s not actually the case that we only act morally when going against our own interests; it’s just that those are the morally interesting cases because we can be sure in those instances that morality alone is the motivation. Worse than that, someone like Socrates would argue that moral action is always in your own best interests, because being a good person is vastly more important than being rich or successful; so no rational person who understood the situation properly would ever choose to do the wrong thing. Probably though, Parthemore and Whitby are really going for something a little different. They link their idea to personal boundaries, citing Andy Clark, so I think they have in mind an ability to extend sympathy or a feeling of empathy to others. The ability they’re after is not so much that of acting against your own interests as that of construing your own interests to include those of other entities.

Anyway, with that conception of moral agency established, are artefacts capable of qualifying? Parthemore and Whitby cite a thought-experiment of Zlatev’s: suppose someone who lived a normal life were found after death to have had no brain but a small mechanism in their skull: would we on that account disavow the respect and friendship we might have felt for the person during life? Zlatev suggests not; and Parthemore and Whitby, agreeing, propose that it would make no difference if the skull were found to be full of yogurt; indeed, supposing someone who had been found to have yogurt instead of brains were able to continue their life, they would see no reason to treat them any differently on account of their acephaly (galactocephaly?). They set this against John Searle’s view that it is some as-yet-unidentified property of nervous tissue that generates consciousness, and that a mind made out of beer cans is a patent absurdity. Their view, which certainly has its appeal, is that it is performance that matters: if an entity displays all the signs of moral sense, then let’s treat it as a moral being.

Here again Parthemore and Whitby make a reasonable case but seem to neglect a significant point. The main case against artefacts being agents is not the Searlian view, but a claim that in the case of artefacts responsibility devolves backwards to the person who designed them, who either foresaw or should have foreseen how they would behave, is at any rate responsible for their behaving as they do, and therefore bears any blame. My mother is not responsible for my behaviour because she did not design me or program my brain (well, only up to a point), but the creator of Robot Peter would not have the same defence; he should have known what he was bringing into the world. It may be that in Parthemore and Whitby’s view akrasia takes care of this too, but if so it needs explaining.

If Parthemore and Whitby think performance is what matters, you might think they would be well-disposed towards a Moral Turing Test: one in which the candidate’s ability to discuss ethical issues coherently determines whether we should see it as a moral agent or not. Just such a test was proposed by Allen et al, but in fact Parthemore and Whitby are not keen on it. For one thing, as they point out, it requires linguistic ability, whereas they want moral agency to extend to at least some entities with no language competence. Perhaps it would be possible to devise pre-linguistic tests, but they foresee difficulties: rightly, I think. One other snag with a Moral Turing Test would be the difficulty of spotting cases where the test subject had a valid system of ethics which nevertheless differed from our own; we might easily end up looking for virtuous candidates and ignoring those who consciously followed the ethics of Caligula.

The paper goes on to describe conceptual space theory and its universal variant: an ambitious proposal to map the whole space of ideas in a way which the authors think might ground practical models of moral agency. I admire the optimism of the project, but doubt whether any such mapping is possible. Tellingly, the case of colour space, which does lend itself beautifully to a simple 3D mapping, is cited: I think other parts of the conceptual repertoire are likely to be much more challenging. Interestingly, I thought the general drift suggested that the idea was a cousin of Arnold Trehub’s retinoid theory, though more conceptual and perhaps not as well rooted in neurology.
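Colour is the flattering case precisely because colour similarity really does behave like distance in a low-dimensional space. A minimal sketch of the idea (the coordinates and numbers are invented for illustration):

```python
import math

# Conceptual-space toy: colours as points in a three-dimensional quality
# space (hue, saturation, lightness). The coordinates below are invented
# purely for illustration; similarity is modelled as closeness.
colours = {
    "red":    (0.00, 0.9, 0.5),
    "orange": (0.08, 0.9, 0.5),
    "blue":   (0.60, 0.9, 0.5),
}

def similarity_distance(a, b):
    """Smaller distance = more similar concepts, on this model."""
    return math.dist(colours[a], colours[b])

print(similarity_distance("red", "orange"))  # small: red is near orange
print(similarity_distance("red", "blue"))    # large: red is far from blue
```

The hard question for the universal variant is what the axes and the metric could possibly be for concepts like ‘justice’ or ‘neighbour’.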

Overall it’s an interesting paper: Parthemore and Whitby very reasonably say at several points that they’re not out to solve all the problems of philosophy; but I think if they want their points to stick they will unavoidably need to delve more deeply in a couple of places.