Time Travel Consciousness

Can you change your mind after the deed is done? Ezequiel Di Paolo thinks you can, sometimes. More specifically, he believes that acts can become intentional after they have already been performed. His theory, which seems to imply a kind of time travel, is set out in a paper in the latest JCS.

I think the normal view would be that for an act to be intentional, it must have been caused by a conscious decision on your part. Since causes come before effects, the conscious decision must have happened beforehand, and any thoughts you may have afterwards are irrelevant. There is a blurry borderline over what counts as conscious, of course; if you were confused or inattentive, if you were ‘on autopilot’ or following a hunch or a whim, it may not be completely clear how consciously your action was considered.

There can, moreover, be what Di Paolo calls an epistemic change. In such a case the action was always intentional in fact, but you only realise that it was when you think about your own motives more carefully after the event. Perhaps you act in the heat of the moment without reflection; but when you think about it you realise that in fact what you did was in line with your plans and actually caused by them. Although this kind of thing raises a few issues, it is not deeply problematic in the same way as a real change. Di Paolo calls the real change an ontological one; here you definitely did not intend the action beforehand, but it becomes intentional retrospectively.

That seems disastrous on the face of it. If the intentionality of an act can change once, it can presumably change again, so it seems all intentions must become provisional and unreliable; the whole concept of responsibility looks in danger of being undermined. Luckily, Di Paolo believes that changes can only occur in very particular circumstances, and in such a way that only one revision can occur.

His view grounds intentions in enactment rather than in linear causation; he has them arising in social interaction. The theory draws on Husserl and Heidegger, but probably the easiest way to get a sense of it is to consider the examples Di Paolo presents. The first is from De Jaegher and centres, in fittingly continental style, on a cheese board.

De Jaegher is slicing himself a corner of Camembert and notices that his companion is watching in a way which suggests that he, too, would like to eat cheese. DJ cuts him a slice and hands it over.
“I could see you wanted some cheese,” he remarks.
“Funny thing, that,” he replies, “actually, I wasn’t wanting cheese until you handed it to me; at that moment the desire crystallised and I found I had been wanting cheese all along.”

In a corner of the room, Alice is tired of the do; the people are boring and the magnificent cheese board is being monopolised by philosophers enacting around it. She looks across at her husband and happens to scratch her wrist. He comes over.
“Saw you point at your watch,” he says, “yeah, we probably should go now. We’ve got the Stompers’ do to go to.”
Alice now realises that although she didn’t mean to point to her watch originally, she now feels the earlier intention is in place after all – she did mean to suggest they went.

At the Stompers’ there is dancing; the tango! Alice and Bill are really good, and as they dance Bill finds that his moves are being read and interpreted by Alice superbly; she conforms and shapes herself to match him before he has actually decided what to do; yet she has read him correctly, and he realises after the fact that his intentions really were the ones she divined. (I sort of melded the examples.)

You see how it works? No, it doesn’t really convince me either. It is a viable way of looking at things, but it doesn’t compel us to agree that there was a real change of earlier intention. Around the cheese board there may always have been prior hunger, but I don’t see why we’d say the intention existed before accepting the cheese.

It is true, of course, that human beings are very inclined to confabulate, to make up stories about themselves that make their behaviour make sense, even if that involves some retrospective monkeying with the facts. It might well be that social pressure is a particularly potent source of this kind of thing; we adjust our motivations to fit with what the people around us would like to hear. In a loose sense, perhaps we could even say that our public motives have a social existence apart from the private ones lodged in the recesses of our minds; and perhaps those social ones can be adjusted retrospectively because, to put it bluntly, they are really a species of fiction.

Otherwise I don’t see how we can get more than an epistemic change. I’ve just realised that I really kind of feel like some cheese…

The Stoppard Problem

It was exciting to hear that Tom Stoppard’s new play was going to be called The Hard Problem, although until it opened recently details were scarce. In the event the reviews have not been very good. It could easily have been that the pieces in the mainstream newspapers missed the point in some way; unfortunately, Vaughan Bell of Mind Hacks didn’t like the way the intellectual issues were handled either (though he had an entertaining evening); and he’s a very sensible and well-informed commentator on consciousness and the mind. So, a disappointing late entry in a distinguished playwright’s record?

I haven’t seen it yet, but I’ve read the script, which in some ways is better for our current purposes. No-one, of course, supposed that Stoppard was going to present a live solution to the Hard Problem; but in the event the play is barely about that problem at all. The Problem’s chief role is to help Hilary, our heroine, get a job at the Krohl Institute for Brain Science, an organisation set up by the wealthy financier Jerry Krohl. Most of the Krohl’s work is on ‘hard’ neuroscience and reductive, materialist projects, but Leo, the head of the department Hilary joins, happens to think the Hard Problem is central. Merely mentioning it is enough to clinch the job, and that’s about it; the chief concern of the rest of the research we’re told about is altruism, and the Prisoner’s Dilemma.

The strange thing is that within philosophy the Hard Problem must be the most fictionalised issue ever. The wealth of thought experiments, elaborate examples and complicated counterfactuals provides enough stories to furnish the complete folklore of a small country. Mary the colour scientist, the zombies, the bats, Twin Earth, chip-head, the qualia that dance and the qualia that fade like Tolkienish elves; as an author you’d want to make something out of all that, wouldn’t you? Or perhaps that assumption just helps explain why I’m not a successful playwright. Of course, you’d only use that stuff if you really wanted to write about the Hard Problem, and Stoppard, it seems, doesn’t really. Perhaps he should just have picked a different title; Every Good Girl Deserves to Know What Became of Her Kid?

Hilary, in fact, had a daughter as a teenager who she gave up for adoption, and who she has worried about ever since. She believes in God because she needs someone effective to pray to about it; and presumably she believes in altruism so someone can be altruistic towards her daughter; though if the sceptic’s arguments are sound, self-interest would work, too.

The debate about altruism is one of those too-well-trodden paths in philosophy; more or less anything you say feels as if it has been in a thousand mouths already. I often feel there’s an element of misunderstanding between those who defend the concept of altruism and those who would reduce it to selfish genery. Yes, the way people behave tends to be consistent with their own survival and reproduction; but that hardly exhausts the topic; we want to know how the actual reasons, emotions, and social conventions work. It’s sort of as though I remarked on how extraordinary it is that a forest pumps so much water way above the ground.

“There’s no pump, Peter,” says BitBucket; “that’s kind of a naive view. See, the tree needs the water in its leaves to survive, so it has evolved as a water-having organism. There are no little hamadryads planning it all out and working tiny pumps. No water magic.”

“But there’s like, capillarity, or something, isn’t there? Um, osmosis? Xylem and phloem? Turgid vacuoles?”

“Sure, but those things are completely explained by the evolutionary imperatives. Saying there are vacuoles doesn’t tell us why there are vacuoles or why they are what they really are.”

“I don’t think osmosis is completely explained by evolution. And surely the biological pumping system is, you know, worth discussing in itself?”

“There’s no pump, Peter!”

Stoppard seems to want to say that greedy reductionism throws out the baby with the bath water. Hilary’s critique of the Prisoner’s Dilemma is that it lacks all context, all the human background that actually informs our behaviour; what’s the relationship of the two prisoners? When the plot puts her into an analogous dilemma, she sacrifices her own interests and career, and is forced to suffer the humiliation of being left with nothing to study but philosophy. In parallel the financial world that pays for the Krohl is going through convulsions because it relied on computational models which were also too reductionist; it turns out that the market thinks and feels and reacts in ways that aren’t determined by rational game theory.
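
For anyone who hasn’t met it, the game Hilary objects to is easy to state, and a toy sketch makes her point vivid. The following Python fragment is mine, not the play’s, and the payoff numbers are illustrative; but with any numbers in the standard ordering, defection dominates whatever the other prisoner does, and there is no way for history, relationships or context to enter the calculation.

```python
# A toy, one-shot Prisoner's Dilemma; payoff numbers are illustrative only.
# Each entry maps (my move, their move) to (my payoff, their payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """The move that maximises my payoff against a fixed move by the other prisoner."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Whatever the other prisoner does, defection pays better; this is exactly
# the context-free conclusion Hilary finds inadequate.
for theirs in ("cooperate", "defect"):
    print(f"against {theirs}: best response is {best_response(theirs)}")
```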

That point is possibly a little undercut by the fact that a reductionist actually foresaw the crash. Amal, who lost out by not rating the Hard Problem high enough, nevertheless manages to fathom the market problem ahead of time…

The market is acting stupid, and the models are out of whack because we don’t know how to build a stupid computer.

But perhaps we are to suppose that he’s learnt his lesson and is ready to talk turgid vacuoles with us sloppy thinkers.

I certainly plan to go and see a performance, and if new light dawns as a result, I’ll let you know.


Minimising Free Energy

There’s a real wealth of papers at the OpenMind site, presided over by Thomas Metzinger, including new stuff from Dan Dennett, Ned Block, Paul Churchland, Alva Noë, Andy Clark and many others. Call me perverse, but the one that attracted my attention first is the paper The Neural Organ Explains the Mind by Jakob Hohwy. This expounds the free energy theory put forward by Karl Friston.

The hypothesis here is that we should view the brain as an organ of the body in the same way as we regard the heart or the liver. Those other organs have a distinctive function – in the case of the heart, it pumps blood  –  and what we need to do is recognise what the brain does. The suggestion is that it minimises free energy; that honestly doesn’t mean much to me, but apparently another way of putting it is to say that the brain’s function is to keep the organism within a limited set of states. If the organism is a fish, the brain aims to keep it in the right kind of water, keeps it fed and with adequate oxygen, and so on.
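
For what it’s worth, here is a toy sketch of one common reading of the idea. Under simple Gaussian assumptions, minimising free energy comes down to minimising squared prediction error, so the organ’s job reduces to nudging an internal estimate towards what the senses report. The fish, the temperature and the learning rate below are my own illustration, not Friston’s or Hohwy’s formalism.

```python
import random

def sense(true_temp, noise=0.5):
    """A noisy temperature reading, standing in for sensory input."""
    return true_temp + random.gauss(0, noise)

true_temp = 12.0   # the actual state of the water
mu = 0.0           # the brain's current estimate of that state
rate = 0.1         # step size for gradient descent on 0.5 * error ** 2

for _ in range(200):
    error = sense(true_temp) - mu   # prediction error
    mu += rate * error              # revise the estimate towards the evidence

print(f"estimate after 200 steps: {mu:.2f}")   # close to 12.0
```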

It’s always good to get back to some commonsensical, pragmatic view, and the paper shows that this is a fertile and flexible hypothesis, which yields explanations for various kinds of behaviour. There seem to me to be three prima facie objections. First, this isn’t really a function akin to the heart pumping blood; at best it’s a high-level meta-function. The heart does blood pumping, the lungs do respiration, the gut does digestion; and the brain apparently keeps the organism in conditions where blood can go on being pumped, there is still breathable air available, food to be digested, and so on. In fact it oversees every other function, and in completely novel circumstances it suddenly acquires new functions: if we go hang-gliding, the brain learns to keep us flying straight and level, not something it ever had to do in the earlier history of the human race. Now of course, if we confront the gut with a substance it never experienced before, it will probably deal with it one way or another; but it will only deploy the chemical functions it always had; it won’t learn new ones. There’s a protean quality about the brain that eludes simple comparisons with other organs.

A second problem is that the hypothesis suggests the brain is all about keeping the organism in states where it is comfortable, whereas the human brain at least seems able to take account of future contingencies and make long-term plans, which enable us to abandon the ideal environment of our beds each morning and go out into the cold and rain. There is a theoretical answer to this problem which seems to involve us being able to perceive things across space and time; probably right, but that seems like a whole new function rather than something that drops out naturally from minimising free energy; I may not have understood this bit correctly. It seems that when we move our hand, it may happen because we have, in contradiction of the evidence, adopted the belief that our hand is already moving; this belief serves to minimise free energy, and our belief that the hand is moving causes the actual movement we believe in.
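
As far as I can make out, that last move is the ‘active inference’ part of the theory, and a crude sketch, with names and numbers of my own invention, may help: instead of revising the belief to match the senses, the agent changes the world until the senses match the belief.

```python
# Names and numbers invented for illustration. The agent 'believes' the hand
# is already at the target; action, rather than belief revision, is what
# reduces the resulting prediction error.

hand_position = 0.0
believed_position = 1.0   # the optimistic belief adopted in advance
rate = 0.2

for _ in range(50):
    error = believed_position - hand_position
    hand_position += rate * error   # the world is changed to fit the belief

print(f"hand position: {hand_position:.3f}")   # converges on the believed value
```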

Third and worse, the brain often seems to impel us to do things that are risky, uncomfortable, and damaging, and not necessarily in pursuit of keeping our states in line with comfort, even in the long term. Why do we go hang-gliding, why do we take drugs, climb mountains, enter a monastery or convent? I know there are plenty of answers in terms of self-interest, but it’s much less clear to me that there are answers in terms of minimising free energy.

That’s all very negative, but actually the whole idea overall strikes me as at least a novel and interesting perspective. Hohwy draws some parallels with the theory of evolution; like Darwin’s idea, this is a very general theory with alarmingly large claims, and critics may well say that it’s either over-ambitious or that in the end it explains too much too easily; that it is, ultimately, unfalsifiable.

I wouldn’t go that far; it seems to me that there are a lot of potential issues, but that the theory is very adaptable and productive in potentially useful ways. It might well be a valuable perspective. I’m less sure that it answers the questions we’re really bothered about. Take the analogy of the gut (as the theory encourages us to do). What is the gut’s function? Actually, we could define it several ways (it deals with food, it makes poop, it helps store energy). One might be that the gut keeps the bloodstream in good condition so far as nutrients are concerned, just as the lungs keep it in good condition in respect of oxygenation. But the gut also, as part of that, does digestion, a complex and fascinating subject which is well worth study in itself. Now it might be that the brain does indeed minimise free energy, and that might be a legitimate field of study; but perhaps in doing so it also supports consciousness, a separate issue which like digestion is well worthy of study in itself.

We might not be looking at final answers, then – to be fair, we’ve only scratched the surface of what seems to be a remarkably fecund hypothesis – but even if we’re not, a strange new idea has got to be welcome.

Conscious Entities down the Pub

An exciting new development, as Conscious Entities goes live! Sergio and I are meeting up for a beer and some ontological elucidation on Monday 16 February at 18.00 in the Plough in Bloomsbury, near the British Museum. This is more or less the site’s eleventh birthday; I forgot to mark the tenth last year.

I know most readers of the site are not in London, but if you are, even if you’ve never commented here, why not join us?

Drop me an email for contact details.

Crimbots

Some serious moral dialogue about robots recently. Eric Schwitzgebel put forward the idea that we might have special duties in respect of robots, on the model of the duties a parent owes to children, an idea embodied in a story he wrote with Scott Bakker. He followed up with two arguments for robot rights: first, the claim that there is no relevant difference between humans and AIs; second, a Bostromic argument that we could all be sims, and if we are, then again, we’re not different from AIs.

Scott has followed up with a characteristically subtle and bleak case for the idea that we’ll be unable to cope with the whole issue anyway. Our cognitive capacities, designed for shallow information environments, are not even up to understanding ourselves properly; the advent of a whole host of new styles of cognition will radically overwhelm them. It might well be that the revelation of how threadbare our own cognition really is will be a kind of poison pill for philosophy (a well-deserved one on this account, I suppose).

I think it’s a slight mistake to suppose that morality confers a special grade of duty in respect of children. It’s more that parents want to favour their children, and our moral codes are constructed to accommodate that. It’s true society allocates responsibility for children to their parents, but that’s essentially a pragmatic matter rather than a directly moral one. In wartime Britain the state was happy to make random strangers responsible for evacuees, while those who put the interests of society above their own offspring, like Brutus (the original one, not the Caesar stabber) have sometimes been celebrated for it.

What I want to do though, is take up the challenge of showing why robots are indeed relevantly different to human beings, and not moral agents. I’m addressing only one kind of robot, the kind whose mind is provided by the running of a program on a digital computer (I know, John Searle would be turning in his grave if he wasn’t still alive, but bear with me). I will offer two related points, and the first is that such robots suffer grave problems over identity. They don’t really have personal identity, and without that they can’t be moral agents.

Suppose Crimbot 1 has done a bad thing; we power him down, download his current state, wipe the memory in his original head, and upload him into a fresh robot body of identical design.

“Oops, I confess!” he says. Do we hold him responsible; do we punish him? Surely the transfer to a new body makes no difference? It must be the program state that carries the responsibility; we surely wouldn’t punish the body that committed the crime. It’s now running the Saintbot program, which never did anything wrong.

But then neither did the copy of Crimbot 1 software which is now running in a different body – because it’s a copy, not the original. We could upload as many copies of that as we wanted; would they all deserve punishment for something only one robot actually did?
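
The trouble is easy to dramatise if you think of Crimbot’s mind the way his designers presumably would, as a lump of state that can be copied about. In this minimal sketch (the fields are invented for illustration) nothing in the data distinguishes the original from any copy:

```python
import copy

# Crimbot's 'mind' as a lump of state; the fields are invented for illustration.
crimbot1 = {"program": "Crimbot", "memories": ["committed the crime"]}

snapshot = copy.deepcopy(crimbot1)                  # download his current state
crimbot1 = {"program": "Saintbot", "memories": []}  # wipe and reinstall the body
crimbot2 = copy.deepcopy(snapshot)                  # upload into a fresh body

# Nothing in the data marks crimbot2 as a copy rather than the original,
# and we could make as many of these as we liked.
print(crimbot2 == snapshot)   # True
```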

Maybe we would fall back on the idea that for moral responsibility it has to be the same copy in the same body? By downloading and wiping we destroyed the person who was guilty and merely created an innocent copy? Crimbot 1 in the new body smirks at that idea.

Suppose we had uploaded the copy back into the same body? Crimbot 1 is now identical, program and body, the same as if we had merely switched him off for a minute. Does the brief interval when his data registers had different values make such a moral difference? What if he downloaded himself to an internal store, so that those values were always kept within the original body? What if he does that routinely every three seconds? Does that mean he is no longer responsible for anything (unless we catch him really quickly), while a version that doesn’t do the regular transfer of values can be punished?

We could have Crimbot 2 and Crimbot 3; 2 downloads himself to internal data storage every second and then immediately uploads himself again, while 3 merely pauses every second for the length of time that operation takes. Their behaviour is identical, the reasons for it are identical; how can we say that 2 is innocent while 3 is guilty?

But then, as the second point, surely none of them is guilty of anything? Whatever may be true of human beings, we know for sure that Crimbot 1 had no choice over what to do; his behaviour was absolutely determined by the program. If we copy him into another body and set him up with the same circumstances, he’ll do the same things. We might as well punish him in advance; all copies of the Crimbot program deserve punishment, because the only thing that prevented them from committing the crime would be circumstances.

Now, we might accept all that and suggest that the same problems apply to human beings. If you downloaded and uploaded us, you could create the same issues; if we knew enough about ourselves our behaviour would be fully predictable too!

The difference is that in Crimbot the distinction between program and body is clear because he is an artefact, and he has been designed to work in certain ways. We were not designed, and we do not come in the form of a neat layer of software which can be peeled off the hardware. The human brain is unbelievably detailed, and no part of it is irrelevant. The position of a single molecule in a neuron, or even in the supporting astrocytes, may make the difference between firing and not firing, and one neuron firing can be decisive in our behaviour. Whereas Crimbot’s behaviour comes from a limited set of carefully designed functional properties, ours comes from the minute specifics of who we are. Crimbot embodies an abstraction, he’s actually designed to conform as closely as possible to design and program specs; we’re unresolvably particular and specific.

Couldn’t that, or something like that, be the relevant difference?