Can you change your mind after the deed is done? Ezequiel Di Paolo thinks you can, sometimes. More specifically, he believes that acts can become intentional after they have already been performed. His theory, which seems to imply a kind of time travel, is set out in a paper in the latest JCS.

I think the normal view would be that for an act to be intentional, it must have been caused by a conscious decision on your part. Since causes come before effects, the conscious decision must have happened beforehand, and any thoughts you may have afterwards are irrelevant. There is a blurry borderline over what counts as conscious, of course; if you were confused or inattentive, ‘on autopilot’, or following a hunch or a whim, it may not be completely clear how consciously your action was considered.

There can, moreover, be what Di Paolo calls an epistemic change. In such a case the action was always intentional in fact, but you only realise that it was when you think about your own motives more carefully after the event. Perhaps you act in the heat of the moment without reflection; but when you think about it you realise that in fact what you did was in line with your plans and actually caused by them. Although this kind of thing raises a few issues, it is not deeply problematic in the same way as a real change. Di Paolo calls the real change an ontological one; here you definitely did not intend the action beforehand, but it becomes intentional retrospectively.

That seems disastrous on the face of it. If the intentionality of an act can change once, it can presumably change again, so it seems all intentions must become provisional and unreliable; the whole concept of responsibility looks in danger of being undermined. Luckily, Di Paolo believes that changes can only occur in very particular circumstances, and in such a way that only one revision can occur.

His view founds intentions in enactment rather than in linear causation; he has them arising in social interaction. The theory draws on Husserl and Heidegger, but probably the easiest way to get a sense of it is to consider the examples presented by Di Paolo. The first is from De Jaegher and centres, in fittingly continental style, around a cheese board.

De Jaegher is slicing himself a corner of Camembert and notices that his companion is watching in a way which suggests that he, too, would like to eat cheese. DJ cuts him a slice and hands it over.
“I could see you wanted some cheese,” he remarks.
“Funny thing, that,” he replies, “actually, I wasn’t wanting cheese until you handed it to me; at that moment the desire crystallised, and now I find I had been wanting cheese.”

In a corner of the room, Alice is tired of the party; the people are boring and the magnificent cheese board is being monopolised by philosophers enacting around it. She looks across at her husband and happens to scratch her wrist. He comes over.
“Saw you point at your watch,” he says, “yeah, we probably should go now. We’ve got the Stompers’ do to go to.”
Alice now realises that although she didn’t mean to point to her watch originally, she now feels the earlier intention is in place after all – she did mean to suggest they went.

At the Stompers’ there is dancing; the tango! Alice and Bill are really good, and as they dance Bill finds that his moves are being read and interpreted by Alice superbly; she conforms and shapes to match him before he has actually decided what to do; yet she has read him correctly, and he realises after the fact that his intentions really were the ones she divined. (I sort of melded the examples.)

You see how it works? No, it doesn’t really convince me either. It is a viable way of looking at things, but it doesn’t compel us to agree that there was a real change of earlier intention. Around the cheese board there may always have been prior hunger, but I don’t see why we’d say the intention existed before accepting the cheese.

It is true, of course, that human beings are very inclined to confabulate, to make up stories about themselves that make their behaviour make sense, even if that involves some retrospective monkeying with the facts. It might well be that social pressure is a particularly potent source of this kind of thing; we adjust our motivations to fit with what the people around us would like to hear. In a loose sense, perhaps we could even say that our public motives have a social existence apart from the private ones lodged in the recesses of our minds; and perhaps those social ones can be adjusted retrospectively because, to put it bluntly, they are really a species of fiction.

Otherwise I don’t see how we can get more than an epistemic change. I’ve just realised that I really kind of feel like some cheese…

It was exciting to hear that Tom Stoppard’s new play was going to be called The Hard Problem, although until it opened recently details were scarce. In the event the reviews have not been very good. It could easily have been that the pieces in the mainstream newspapers missed the point in some way; unfortunately, Vaughan Bell of Mind Hacks didn’t like the way the intellectual issues were handled either (though he had an entertaining evening), and he’s a very sensible and well-informed commentator on consciousness and the mind. So, a disappointing late entry in a distinguished playwright’s record?

I haven’t seen it yet, but I’ve read the script, which in some ways is better for our current purposes. No-one, of course, supposed that Stoppard was going to present a live solution to the Hard Problem: but in the event the play is barely about that problem at all. The Problem’s chief role is to help Hilary, our heroine, get a job at the Krohl Institute for Brain Science, an organisation set up by the wealthy financier Jerry Krohl. Most of the Krohl’s work is on ‘hard’ neuroscience and reductive, materialist projects, but Leo, the head of the department Hilary joins, happens to think the Hard Problem is central. Merely mentioning it is enough to clinch the job, and that’s about it; the chief concern of the rest of the research we’re told about is altruism, and the Prisoner’s Dilemma.

The strange thing is that within philosophy the Hard Problem must be the most fictionalised issue ever. The wealth of thought experiments, elaborate examples and complicated counterfactuals provides enough stories to furnish the complete folklore of a small country. Mary the colour scientist, the zombies, the bats, Twin Earth, chip-head, the qualia that dance and the qualia that fade like Tolkienish elves; as an author you’d want to make something out of all that, wouldn’t you? Or perhaps that assumption just helps explain why I’m not a successful playwright. Of course, you’d only use that stuff if you really wanted to write about the Hard Problem, and Stoppard, it seems, doesn’t really. Perhaps he should just have picked a different title; Every Good Girl Deserves to Know What Became of Her Kid?

Hilary, in fact, had a daughter as a teenager who she gave up for adoption, and who she has worried about ever since. She believes in God because she needs someone effective to pray to about it; and presumably she believes in altruism so someone can be altruistic towards her daughter; though if the sceptic’s arguments are sound, self-interest would work, too.

The debate about altruism is one of those too-well-trodden paths in philosophy; more or less anything you say feels as if it has been in a thousand mouths already. I often feel there’s an element of misunderstanding between those who defend the concept of altruism and those who would reduce it to selfish genery. Yes, the way people behave tends to be consistent with their own survival and reproduction; but that hardly exhausts the topic; we want to know how the actual reasons, emotions, and social conventions work. It’s sort of as though I remarked on how extraordinary it is that a forest pumps so much water way above the ground.

“There’s no pump, Peter,” says BitBucket; “that’s kind of a naive view. See, the tree needs the water in its leaves to survive, so it has evolved as a water-having organism. There are no little hamadryads planning it all out and working tiny pumps. No water magic.”

“But there’s like, capillarity, or something, isn’t there? Um, osmosis? Xylem and phloem? Turgid vacuoles?”

“Sure, but those things are completely explained by the evolutionary imperatives. Saying there are vacuoles doesn’t tell us why there are vacuoles or why they are what they really are.”

“I don’t think osmosis is completely explained by evolution. And surely the biological pumping system is, you know, worth discussing in itself?”

“There’s no pump, Peter!”

Stoppard seems to want to say that greedy reductionism throws out the baby with the bath water. Hilary’s critique of the Prisoner’s Dilemma is that it lacks all context, all the human background that actually informs our behaviour; what’s the relationship of the two prisoners? When the plot puts her into an analogous dilemma, she sacrifices her own interests and career, and is forced to suffer the humiliation of being left with nothing to study but philosophy. In parallel the financial world that pays for the Krohl is going through convulsions because it relied on computational models which were also too reductionist; it turns out that the market thinks and feels and reacts in ways that aren’t determined by rational game theory.
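For anyone who hasn’t met it, the dilemma in its textbook form really is as denuded as Hilary says. Here is a minimal sketch; the payoff numbers are the conventional illustrative ones, nothing taken from the play:

```python
# The bare-bones Prisoner's Dilemma: payoffs are (mine, other's), higher is better.
# With no context at all -- no relationship, no history, no future -- defection
# dominates whatever the other prisoner does.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(other_move):
    """Pick whichever of my moves maximises my own payoff against the other's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, other_move)][0])

print(best_response("cooperate"), best_response("defect"))  # defect defect
```

Hilary’s complaint is that nobody ever actually occupies a situation as stripped of human background as that matrix.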

That point is possibly a little undercut by the fact that a reductionist actually foresaw the crash. Amal, who lost out by not rating the Hard Problem high enough, nevertheless manages to fathom the market problem ahead of time…

The market is acting stupid, and the models are out of whack because we don’t know how to build a stupid computer.

But perhaps we are to suppose that he’s learnt his lesson and is ready to talk turgid vacuoles with us sloppy thinkers.

I certainly plan to go and see a performance, and if new light dawns as a result, I’ll let you know.

 

A real wealth of papers at the OpenMind site, presided over by Thomas Metzinger, including new stuff from Dan Dennett, Ned Block, Paul Churchland, Alva Noë, Andy Clark and many others. Call me perverse, but the one that attracted my attention first is the paper The Neural Organ Explains the Mind by Jakob Hohwy. This expounds the free energy theory put forward by Karl Friston.

The hypothesis here is that we should view the brain as an organ of the body in the same way as we regard the heart or the liver. Those other organs have a distinctive function – in the case of the heart, it pumps blood – and what we need to do is recognise what the brain does. The suggestion is that it minimises free energy; that honestly doesn’t mean much to me, but apparently another way of putting it is to say that the brain’s function is to keep the organism within a limited set of states. If the organism is a fish, the brain aims to keep it in the right kind of water, keep it fed and adequately oxygenated, and so on.
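For what it’s worth, the way the free energy idea is usually written down in this literature is as a bound on ‘surprise’ – this is my gloss on the standard formulation, not a formula lifted from Hohwy’s paper:

$$
F \;=\; -\ln p(o) \;+\; D_{\mathrm{KL}}\!\big[\,q(s)\,\big\|\,p(s\mid o)\,\big] \;\;\ge\;\; -\ln p(o)
$$

Here o stands for the organism’s sensory states, s for their hidden causes, p for the organism’s generative model and q for its current best guess about those causes. Because the free energy F can never be less than the ‘surprise’ −ln p(o), keeping F low keeps the organism in states its model expects – which is one way of cashing out ‘keeping the organism within a limited set of states’.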

It’s always good to get back to some commonsensical, pragmatic view, and the paper shows that this is a fertile and flexible hypothesis, which yields explanations for various kinds of behaviour. There seem to me to be three prima facie objections. First, this isn’t really a function akin to the heart pumping blood; at best it’s a high-level meta-function. The heart does blood pumping, the lungs do respiration, the gut does digestion; and the brain apparently keeps the organism in conditions where blood can go on being pumped, there is still breathable air available, food to be digested, and so on. In fact it oversees every other function, and in completely novel circumstances it suddenly acquires new functions: if we go hang-gliding, the brain learns to keep us flying straight and level, not something it ever had to do in the earlier history of the human race. Now of course, if we confront the gut with a substance it has never encountered before, it will probably deal with it one way or another; but it will only deploy the chemical functions it always had; it won’t learn new ones. There’s a protean quality about the brain that eludes simple comparisons with other organs.

A second problem is that the hypothesis suggests the brain is all about keeping the organism in states where it is comfortable, whereas the human brain at least seems to be able to take into account future contingencies and make long-term plans which enable us to abandon the ideal environment of our beds each morning and go out into the cold and rain. There is a theoretical answer to this problem which seems to involve us being able to perceive things across space and time; probably right, but that seems like a whole new function rather than something that drops out naturally from minimising free energy; I may not have understood this bit correctly. It seems that when we move our hand, it may happen because we have, in contradiction of the evidence, adopted the belief that our hand is already moving; this belief serves to minimise free energy, and it is the belief itself that causes the actual movement we believe in.
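If I’ve understood the gist, the hand example works something like the toy loop below – my own illustrative sketch, not anything from the paper or from Friston’s actual mathematics: the ‘belief’ that the hand is at the target never gets revised; instead action keeps nudging the world until the belief comes true.

```python
# Toy 'active inference': instead of updating the belief to match the senses,
# act on the world until the senses match the belief.

def act_to_fulfil_prediction(hand_pos, predicted_pos, gain=0.3):
    """Move the hand a little in whatever direction shrinks the prediction error."""
    error = predicted_pos - hand_pos      # mismatch between belief and sensation
    return hand_pos + gain * error        # simple proportional 'reflex'

hand, belief = 0.0, 1.0                   # hand at rest; belief: "my hand is at 1.0"
for _ in range(20):
    hand = act_to_fulfil_prediction(hand, belief)

print(round(hand, 3))                     # the hand has been dragged almost exactly to where the belief said it was
```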

Third and worse, the brain often seems to impel us to do things that are risky, uncomfortable, and damaging, and not necessarily in pursuit of keeping our states in line with comfort, even in the long term. Why do we go hang-gliding, why do we take drugs, climb mountains, enter a monastery or convent? I know there are plenty of answers in terms of self-interest, but it’s much less clear to me that there are answers in terms of minimising free energy.

That’s all very negative, but actually the whole idea overall strikes me as at least a novel and interesting perspective. Hohwy draws some parallels with the theory of evolution; like Darwin’s idea, this is a very general theory with alarmingly large claims, and critics may well say that it’s either over-ambitious or that in the end it explains too much too easily; that it is, ultimately, unfalsifiable.

I wouldn’t go that far; it seems to me that there are a lot of potential issues, but that the theory is very adaptable and productive in potentially useful ways. It might well be a valuable perspective. I’m less sure that it answers the questions we’re really bothered about. Take the analogy of the gut (as the theory encourages us to do). What is the gut’s function? Actually, we could define it several ways (it deals with food, it makes poop, it helps store energy). One might be that the gut keeps the bloodstream in good condition so far as nutrients are concerned, just as the lungs keep it in good condition in respect of oxygenation. But the gut also, as part of that, does digestion, a complex and fascinating subject which is well worth study in itself. Now it might be that the brain does indeed minimise free energy, and that might be a legitimate field of study; but perhaps in doing so it also supports consciousness, a separate issue which like digestion is well worthy of study in itself.

We might not be looking at final answers, then – to be fair, we’ve only scratched the surface of what seems to be a remarkably fecund hypothesis – but even if we’re not, a strange new idea has got to be welcome.

An exciting new development, as Conscious Entities goes live! Sergio and I are meeting up for a beer and some ontological elucidation on Monday 16 February at 18.00 in the Plough in Bloomsbury, near the British Museum. This is more or less the site’s eleventh birthday; I forgot to mark the tenth last year.

I know most readers of the site are not in London, but if you are, even if you’ve never commented here, why not join us?

Drop me an email for contact details.


Some serious moral dialogue about robots recently. Eric Schwitzgebel put forward the idea that we might have special duties in respect of robots, on the model of the duties a parent owes to children, an idea embodied in a story he wrote with Scott Bakker. He followed up with two arguments for robot rights: first, the claim that there is no relevant difference between humans and AIs; second, a Bostromic argument that we could all be sims, and if we are, then again we’re not different from AIs.

Scott has followed up with a characteristically subtle and bleak case for the idea that we’ll be unable to cope with the whole issue anyway. Our cognitive capacities, designed for shallow information environments, are not even up to understanding ourselves properly; the advent of a whole host of new styles of cognition will radically overwhelm them. It might well be that the revelation of how threadbare our own cognition really is will be a kind of poison pill for philosophy (a well-deserved one on this account, I suppose).

I think it’s a slight mistake to suppose that morality confers a special grade of duty in respect of children. It’s more that parents want to favour their children, and our moral codes are constructed to accommodate that. It’s true society allocates responsibility for children to their parents, but that’s essentially a pragmatic matter rather than a directly moral one. In wartime Britain the state was happy to make random strangers responsible for evacuees, while those who put the interests of society above their own offspring, like Brutus (the original one, not the Caesar stabber), have sometimes been celebrated for it.

What I want to do though, is take up the challenge of showing why robots are indeed relevantly different to human beings, and not moral agents. I’m addressing only one kind of robot, the kind whose mind is provided by the running of a program on a digital computer (I know, John Searle would be turning in his grave if he wasn’t still alive, but bear with me). I will offer two related points, and the first is that such robots suffer grave problems over identity. They don’t really have personal identity, and without that they can’t be moral agents.

Suppose Crimbot 1 has done a bad thing; we power him down, download his current state, wipe the memory in his original head, and upload him into a fresh robot body of identical design.

“Oops, I confess!” he says. Do we hold him responsible; do we punish him? Surely the transfer to a new body makes no difference? It must be the program state that carries the responsibility; we surely wouldn’t punish the body that committed the crime. It’s now running the Saintbot program, which never did anything wrong.

But then neither did the copy of Crimbot 1 software which is now running in a different body – because it’s a copy, not the original. We could upload as many copies of that as we wanted; would they all deserve punishment for something only one robot actually did?

Maybe we would fall back on the idea that for moral responsibility it has to be the same copy in the same body? By downloading and wiping we destroyed the person who was guilty and merely created an innocent copy? Crimbot 1 in the new body smirks at that idea.

Suppose we had uploaded the copy back into the same body? Crimbot 1 is now identical, program and body, the same as if we had merely switched him off for a minute. Does the brief interval when his data registers had different values make such a moral difference? What if he downloaded himself to an internal store, so that those values were always kept within the original body? What if he does that routinely every three seconds? Does that mean he is no longer responsible for anything (unless we catch him really quickly), while a version that doesn’t do the regular transfer of values can be punished?

We could have Crimbot 2 and Crimbot 3; 2 downloads himself to internal data storage every second and then immediately uploads himself again. 3 merely pauses every second for the length of time that operation takes. Their behaviour is identical, the reasons for it are identical; how can we say that 2 is innocent while 3 is guilty?
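Just to make the point concrete, here is a trivial sketch – mine, obviously not how a real Crimbot would work – of why the download/upload move is so corrosive to identity: once the state has been serialised and restored, there is nothing in the data itself to mark which instance is ‘the original’.

```python
# 'Download' a robot's state and 'upload' it again: the restored state is
# indistinguishable from the one that committed the crime.
import pickle

crimbot_state = {"memories": ["did a bad thing"], "disposition": "criminal"}

snapshot = pickle.dumps(crimbot_state)   # download
restored = pickle.loads(snapshot)        # upload into a fresh body, or the same one

print(restored == crimbot_state)         # True -- no fact in the data distinguishes copy from original
```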

But then, as the second point, surely none of them is guilty of anything? Whatever may be true of human beings, we know for sure that Crimbot 1 had no choice over what to do; his behaviour was absolutely determined by the program. If we copy him into another body and set him up with the same circumstances, he’ll do the same things. We might as well punish him in advance; all copies of the Crimbot program deserve punishment, because the only thing that would have prevented them from committing the crime is circumstance.

Now, we might accept all that and suggest that the same problems apply to human beings. If you downloaded and uploaded us, you could create the same issues; if we knew enough about ourselves our behaviour would be fully predictable too!

The difference is that in Crimbot the distinction between program and body is clear because he is an artefact, and he has been designed to work in certain ways. We were not designed, and we do not come in the form of a neat layer of software which can be peeled off the hardware. The human brain is unbelievably detailed, and no part of it is irrelevant. The position of a single molecule in a neuron, or even in the supporting astrocytes, may make the difference between firing and not firing, and one neuron firing can be decisive in our behaviour. Whereas Crimbot’s behaviour comes from a limited set of carefully designed functional properties, ours comes from the minute specifics of who we are. Crimbot embodies an abstraction, he’s actually designed to conform as closely as possible to design and program specs; we’re unresolvably particular and specific.

Couldn’t that, or something like that, be the relevant difference?

Why can’t we solve the problem of consciousness? That is the question asked by a recent Guardian piece. The account given there is not bad at all; excellent by journalistic standards, although I think it probably overstates the significance of Francis Crick’s intervention. His book was well worth reading, but in spite of the title his hypothesis had ceased to be astonishing quite a while before. Surely also a little odd to have Colin McGinn named only as Ted Honderich’s adversary when his own Mysterian views are so much more widely cited. Still, the piece makes a good point; lots of Davids and not a few Samsons have gone up against this particular Goliath, yet the giant is still on his feet.

Well, if several decades of great minds can’t do the job, why not throw a few dozen more at it? The Edge, in its annual question this year, asks its strike force of intellectuals to tackle the question: What do you think about machines that think? This evoked no fewer than 186 responses. Some of the respondents are old hands at the consciousness game, notably Dan Dennett; we must also tip our hat to our friend Arnold Trehub, who briefly denounces the idea that artefactual machines can think. It’s certainly true, in my own opinion, that we are nowhere near thinking machines, and in fact it’s not clear that we are getting materially closer: what we have got is splendid machines that clearly don’t think at all but are increasingly good at doing tasks we previously believed needed thought. You could argue that eliminating the need for thought was Babbage’s project right from the beginning, and we know that Turing discarded the question ‘Can machines think?’ as not worthy of an answer.

186 answers is, of course, at least 185 more than we really wanted, and those are not good odds of getting even a congenial analysis. In fact, the rapid succession of views, some well-informed, others perhaps shooting from the hip to a degree, is rather exhausting: the effect is like a dreadfully prolonged session of speed dating: like my theory? No? Well, don’t worry, there are 180 more on the way immediately. It is sort of fun to surf the wave of punditry, but I’d be surprised to hear that many people were still with the programme when it got to view number 186 (which, despairingly or perhaps refreshingly, is a picture).

Honestly, though, why can’t we solve the problem of consciousness? Could it be that there is something fundamentally wrong? Colin McGinn, of course, argues that we can never understand consciousness because of cognitive closure; there’s no real mystery about it, but our mental toolset just doesn’t allow us to get to the answer. McGinn makes a good case, but I think that human cognition is not formal enough to be affected by a closure of this kind; and if it were, I think we should most likely remain blissfully unaware of it: if we were unable to understand consciousness, we shouldn’t see any problem with it either.

Perhaps, though, the whole idea of consciousness as conceived in contemporary Western thought is just wrong? It does seem to be the case that non-European schools of philosophy construe the world in ways that mean a problem of consciousness never really arises. For that matter, the ancient Greeks and Romans did not really see the problem the way we do: although ancient philosophers discussed the soul and personal identity, they didn’t really worry about consciousness. Commonly people blame Western dualism for drawing too sharp a division between the world of the mind and the world of material objects: and the finger is usually pointed at Descartes in particular. Perhaps if we stopped thinking about a physical world and a non-physical mind the alleged problem would simply evaporate. If we thought of a world constituted by pure experience, not differentiated into two worlds, everything would seem perfectly natural?

Perhaps, but it’s not a trick I can pull off myself. I’m sure it’s true our thinking on this has changed over the years, and that the advent of computers, for example, meant that consciousness, and phenomenal consciousness in particular, became more salient than before. Consciousness provided the extra thing computers hadn’t got, answering our intuitive needs and itself being somewhat reshaped to fill the role.  William James, as we know, thought the idea was already on the way out in 1904: “A mere echo, the faint rumour left behind by the disappearing ‘soul’ upon the air of philosophy”; but over a hundred years later it still stands as one of the great enigmas.

Still, maybe if we send in another 200 intellectuals…?

Susan Schneider’s recent paper argues that when we hear from alien civilisations, it’s almost bound to be super intelligent robots getting in touch, rather than little green men. She builds on Nick Bostrom’s much-discussed argument that we’re all living in a simulation.

Actually, Bostrom’s argument is more cautious than that, and more carefully framed. His claim is that at least one of the following propositions is true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation.

So that if we disbelieve the first two, we must accept the third.
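The quantitative core of the argument, roughly as Bostrom sets it out in the original paper (the gloss here is mine), is just a fraction:

$$
f_{\mathrm{sim}} \;=\; \frac{f_P\,\bar{N}\,\bar{H}}{f_P\,\bar{N}\,\bar{H} \;+\; \bar{H}} \;=\; \frac{f_P\,\bar{N}}{f_P\,\bar{N}+1}
$$

where f_P is the fraction of human-level civilisations that reach a posthuman stage, N̄ the average number of ancestor-simulations such a civilisation runs, and H̄ the average number of people who live before a civilisation reaches that stage. If the product f_P·N̄ is at all large, the fraction of observers who are simulated comes out close to one; the only escape routes are propositions (1) and (2).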

In fact there are plenty of reasons to argue that the first two propositions are true. The first evokes ideas of nuclear catastrophe or an unexpected comet wiping us out in our prime, but equally it could just be that no posthuman stage is ever reached. We only know about the cultures of our own planet, but two of the longest-lived – the Egyptian and the Chinese – were very stable, showing few signs of moving on towards posthumanism. They made the odd technological advance, but they also let things slip: no more pyramids after the Old Kingdom; ocean-going junks abandoned before being fully exploited. Really only our current Western culture, stemming from the European Renaissance, has displayed a long run of consistent innovation; it may well be a weird anomaly and its five-hundred-year momentum may well be temporary. Maybe our descendants will never go much further than we already have; maybe, thinking of Schneider’s case, the stars are basically inhabited by Ancient Egyptians who have been living comfortably for millions of years without ever discovering electricity.

The second proposition requires some very debatable assumptions, notably that consciousness is computable. But the notion of “simulation” also needs examination. Bostrom takes it that a computer simulation of consciousness is likely to be conscious, but I don’t think we’d assume a digital simulation of digestion would do actual digesting. The thing about a simulation is that by definition it leaves out certain aspects of the real phenomenon (otherwise it’s the phenomenon itself, not a simulation). Computer simulations normally leave out material reality, which could be a problem if we want real consciousness. Maybe it doesn’t matter for consciousness; Schneider argues strongly against any kind of biological requirement and it may well be that functional relations will do in the case of consciousness. There’s another issue, though; consciousness may be uniquely immune from simulation because of its strange epistemological greediness. What do I mean? Well, for a simulation of digestion we can write a list of all the entities to be dealt with – the foods we expect to enter the gut and their main components. It’s not an unmanageable task, and if we like we can leave out some items or some classes of item without thereby invalidating the simulation. Can we write a list of the possible contents of consciousness? No. I can think about any damn thing I like, including fictional and logically impossible entities. Can we work with a reduced set of mental contents? No; this ability to think about anything is of the essence.

All this gets much worse when Bostrom floats the idea that future ancestor simulations might themselves go on to be post human and run their own nested simulations, and so on. We must remember that he is really talking about simulated worlds, because his simulated ancestors need to have all the right inputs fed to them consistently. A simulated world has to be significantly smaller in information terms than the world that contains it; there isn’t going to be room within it to simulate the same world again at the same level of detail. Something has to give.

Without the indefinite nesting, though, there’s no good reason to suppose the simulated ancestors will ever outnumber the real people who ever lived in the real world. I suppose Bostrom thinks of his simulated people as taking up negligible space and running at speeds far beyond real life; but when you’re simulating everything, that starts to be questionable. The human brain may be the smallest and most economic way of doing what the human brain does.

Schneider argues that, given the same Whiggish optimism about human progress we mentioned earlier, we must assume that in due course fleshy humans will be superseded by faster and more capable silicon beings, either because robots have taken over the reins or because humans have gradually cyborgised themselves to the point where they are essentially super intelligent robots. Since these post human beings will live on for billions of years, it’s almost certain that when we make contact with aliens, that will be the kind we meet.

She is, curiously, uncertain about whether these beings will be conscious. She really means that they might be zombies, without phenomenal consciousness. I don’t really see how super intelligent beings like that could be without what Ned Block called access consciousness, the kind that allows us to solve problems, make plans, and generally think about stuff; I think Schneider would agree, although she tends to speak as though phenomenal, experiential consciousness was the only kind.

She concludes, reasonably enough, that the alien robots most likely will have full conscious experience. Moreover, because reverse engineering biological brains is probably the quick way to consciousness, she thinks that a particular kind of super intelligent AI is likely to predominate: biologically inspired superintelligent alien (BISA). She argues that although BISAs might in the end be incomprehensible, we can draw some tentative conclusions about BISA minds:
(i) Learning about the computational structure of the brain of the species that created the BISA can provide insight into the BISA’s thinking patterns.
(ii) BISAs may have viewpoint invariant representations. (Surely they wouldn’t be very bright if they didn’t?)
(iii) BISAs will have language-like mental representations that are recursive and combinatorial. (Ditto.)
(iv) BISAs may have one or more global workspaces. (If you believe in global workspace theory, certainly. Why more than one, though – doesn’t that defeat the object? Global workspaces are useful because they’re global.)
(v) A BISA’s mental processing can be understood via functional decomposition.

I’ll throw in a strange one; I doubt whether BISAs would have identity, at least not the way we do. They would be computational processes in silicon: they could split, duplicate, and merge without difficulty. They could be copied exactly, so that the question of whether BISA x was the same as BISA y could become meaningless. For them, in fact, communicating and merging would differ only in degree. Something to bear in mind for that first contact, perhaps.

This is interesting stuff, but to me it’s slightly surprising to see it going on in philosophy departments; does this represent an unexpected revival of the belief that armchair reasoning can tell us important truths about the world?

Neural networks really seem to be going places recently. Last time I mentioned their use in sophisticated translation software, but they’re also steaming ahead with new successes in recognition of visual images. Recently there was a claim from MIT that the latest systems were catching up with primate brains at last. Also from MIT (also via MLU), though, comes an intriguing study into what we could call optical illusions for robots, which cause the systems to make mistakes which are incomprehensible to us primates. The graphics in the grid on the right apparently look like a selection of digits between one and six in the eyes of these recognition systems. Nobody really knows why, because of course neural networks are trained, not programmed, and develop their own inscrutable methods.

How then, if we don’t understand, could we ever create such illusions? Optical illusions for human beings exploit known methods of visual analysis used by the brain, but if we don’t know what method a neural network is using, we seem to be stymied. What the research team did was to use one of their systems in reverse, getting it to create images instead of analysing them. These were then evaluated by a similar system and refined through several iterations until they were accepted with a very high level of certainty.
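For a flavour of what ‘refined through several iterations until they were accepted’ can mean in practice, here is a rough sketch of the general technique of tuning an image until a classifier is highly confident, done here by gradient ascent on the input. This is my own minimal reconstruction of that style of method, not the team’s actual code, and the little network below is an untrained stand-in for a trained digit recogniser.

```python
# Nudge an input image, step by step, until the classifier assigns a chosen
# class very high confidence. With an untrained stand-in network this only
# demonstrates the mechanics; with a real trained recogniser, this style of
# optimisation tends to produce unrecognisable 'digit' images like those above.
import torch
import torch.nn as nn

net = nn.Sequential(                                    # stand-in classifier: 28x28 image -> 6 classes
    nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 6)
)

image = torch.zeros(1, 1, 28, 28, requires_grad=True)   # start from a blank image
target = 2                                              # 'make it see a three', say

optimiser = torch.optim.Adam([image], lr=0.1)
for _ in range(200):
    optimiser.zero_grad()
    confidence = torch.softmax(net(image), dim=1)[0, target]
    (-confidence).backward()                            # climb towards higher confidence
    optimiser.step()

print(float(torch.softmax(net(image), dim=1)[0, target]))  # usually very close to 1.0
```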

This seems quite peculiar, and the first impression is that it rather seriously undermines our faith in the reliability of neural network systems. However, there’s one important caveat to take into account: the networks in question are ‘used to’ dealing with images in which the crucial part to be identified is small in relation to the whole. They are happy ignoring almost all of the image. So to achieve a fair comparison with human recognition we should perhaps think of the question not as ‘do these look like numbers to you?’ but as ‘can you find one of the digits from one to six hidden somewhere in this image?’. On that basis the results seem easier to understand.

There still seem to be some interesting implications, though. The first is that, as with language, AI systems are achieving success with methods that do not much resemble those used by the human brain. There’s an irony in this happening with neural networks, because in the old dispute between GOFAI and networks it was the network people who were trying to follow a biological design, at least in outline. The opposition wanted to treat cognition as a pure engineering problem: define what we need, identify the best way to deliver it, and don’t worry about copying the brain. This is the school of thought that likes to point out that we didn’t achieve flight by making machines with flapping, feathery wings. Early network theory, going right back to McCulloch and Pitts, held that we were better off designing something that looked at least broadly like the neurons in the brain. In fact, of course, the resemblance has never been that close, and the focus has generally been more on results than on replicating the structures and systems of biological brains; you could argue that modern neural networks are no more like the brain than fixed-wing aircraft are like birds (or bats). At any rate, the prospect of equalling human performance without doing it the human way raises the same nightmare scenario I was talking about last time: robots that are not people but get treated as if they were (and perhaps people being treated like machines as a consequence).

A second issue is whether the deception which these systems fall into points to a general weakness. Could it be that these systems work very well when dealing with ‘ordinary’ images but continue to go wildly off the rails when faced with certain kinds of unusual ones – even when being put to practical use? It’s perhaps not very likely that a system is going to run into the kind of truly bizarre image we seem to be dealing with, but a more realistic concern might be the potential scope for sabotage or subversion on the part of some malefactor. One safeguard against this possibility is that the images in question were designed by, as it were, sister systems, ones that worked pretty much the same way and presumably shared the same quirks. Without owning one of these systems yourself it might be difficult to devise illusions that worked – unless perhaps there are general illusions that all network systems are more or less equally likely to be fooled by? That doesn’t seem very likely, but it might be an interesting research project. The other safeguard is that these systems are not likely to be used without some additional safeguards, perhaps even more contextual processing of broadly the kind that the human mind obviously brings to the task.

The third question is – what is it like to be an AI deceived by an illusion? There’s no reason to think that these machines have subjective experience – unless you’re one of those who is prepared to grant a dim glow of awareness to quite simple machines – but what if some cyborg with a human brain, or a future conscious robot, had systems like these as part of its processing apparatus rather than the ones provided by the human brain?  It’s not implausible that the immense plasticity of the human brain would allow the inputs to be translated into normal visual experience, or something like it.  On the whole I think this is the most likely result, although there might be quirks or deficits (or hey, enhancements, why not) in the visual experience.  The second possibility is that the experience would be completely weird and inexpressible and although the cyborg/robot would be able to negotiate the world just fine, its experience would be like nothing we’ve ever had, perhaps like nothing we can imagine.

The third possibility is that it would be like nothing. There would be no experience as such; the data and the knowledge about the surroundings would appear in the cyborg/human’s brain but there would be nothing it was like for that to happen. This is the answer qualophile sceptics would expect for a pure robot brain, but the cyborg is more worrying. Human beings are supposed to experience qualia, but when do they arise? Is it only after all the visual processing has been done – when the data arrive in the ‘Cartesian Theatre’ which Dennett has often told us does not exist? Is it, instead, in the visual processing modules or at the visual processing stage? If so, then perhaps we were wrong to assume that MIT’s systems are not having experiences. Perhaps the cyborg gets flawed or partial qualia – but what would that even mean…?

 

Microsoft recently announced the first public beta preview for Skype Translate, a service which will provide immediate translation during voice calls. For the time being only Spanish/English is working, but we’re told that English/German and other languages are on the way. The approach used is complex. Deep Neural Networks apparently play a key role in the speech recognition. The actual translation ultimately relies on recognising bits of text which resemble those it already knows, the same basic principle applied in existing text translators such as Google Translate; but it is also capable of recognising and removing ‘disfluencies’ – the ums and ers, rephrasings, and so on – and apparently makes some use of syntactical models, so there is some highly sophisticated processing going on. It seems to do a reasonable job, though as always with this kind of thing a degree of scepticism is appropriate.
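Put crudely, the pipeline being described looks something like the toy below – my own sketch, with the recognition and translation stages reduced to fake placeholders; only the disfluency-stripping step does anything real, and even that is a caricature of what the actual system presumably does.

```python
# Toy speech-translation pipeline: recognise -> strip disfluencies -> translate.
import re

def recognise(audio):
    # stand-in for the DNN speech recogniser
    return "um, I would like, er, I mean I want some cheese"

def remove_disfluencies(text):
    text = re.sub(r"\b(um|er|uh)\b,?\s*", "", text)            # drop filled pauses
    text = re.sub(r"\bI would like,\s*I mean\s*", "", text)     # drop one crude rephrasing
    return text.strip()

def translate(text):
    # stand-in for the statistical translation step
    return {"I want some cheese": "quiero un poco de queso"}.get(text, text)

print(translate(remove_disfluencies(recognise(None))))          # quiero un poco de queso
```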

Translating actual speech, with all its messy variability, is of course an amazing achievement, much more difficult than dealing with text (which itself is no walk in the park); and it’s remarkable indeed that it can be done so well without the machine making any serious attempt to deal with the meaning of the words it translates. Perhaps that’s a bit too bald: the software does take account of context and, as I said, it removes some meaningless bits, so arguably it is not ignoring meaning totally. But full-blown intentionality is completely absent.

This fits into a recent pattern in which barriers to AI are falling to approaches which skirt or avoid consciousness as we normally understand it, and all the intractable problems that go with it. It’s not exactly the triumph of brute force, but it does owe more to processing power and less to ingenuity than we might have expected. At some point, if this continues, we’re going to have to take seriously the possibility of our having, in the not-all-that-remote future, a machine which mimics human behaviour brilliantly without our ever having solved any of the philosophical problems. Such a robot might run on something like a revival of the frames or scripts of Marvin Minsky or Roger Schank, only this time with a depth and power behind it that would make the early attempts look like working with an abacus. The AI would, at its crudest, simply be recognising situations and looking up a good response, but it would have such a gigantic library of situations and it would be so subtle at customising the details that its behaviour would be indistinguishable from that of ordinary humans for all practical purposes. What would we say about such a robot (let’s call her Sophia – why not, since anthropomorphism seems inevitable)? I can see several options.

Option one. Sophia really is conscious, just like us. OK, we don’t really understand how we pulled it off, but it’s futile to argue about it when her performance provides everything we could possibly demand of consciousness and passes every test anyone can devise. We don’t argue that photographs are not depictions because they’re not executed in oil paint, so why would we argue that a consciousness created by other means is not the real thing? She achieved consciousness by a different route, and her brain doesn’t work like ours – but her mind does. In fact, it turns out we probably work more like her than we thought: all this talk of real intrinsic intentionality and magic meaningfulness turns out to be a systematic delusion; we’re really just running scripts ourselves!

Option two. Sophia is conscious, but not in the way we are. OK, the results are indistinguishable, but we just know that the methods are different, and so the process is not the same. Birds and bats both fly, but they don’t do it the same way. Sophia probably deserves the same moral rights and duties as us, though we need to be careful about that; but she could very well be a philosophical zombie who has no subjective experience. On the other hand, her mental life might have subjective qualities of its own, very different to ours but incommunicable.

Option three. She’s not conscious; we just know she isn’t, because we know how she works and we know that all her responses and behaviour come from simply picking canned sequences out of the cupboard. We’re deluding ourselves if we think otherwise. But she is the vivid image of a human being and an incredibly subtle and complex entity: she may not be that different from animals whose behaviour is largely instinctive. We cannot therefore simply treat her as a machine: she probably ought to have some kinds of rights: perhaps special robot rights. Since we can’t be absolutely certain that she does not experience real pain and other feelings in some form, and since she resembles us so much, it’s right to avoid cruelty both on the grounds of the precautionary principle and so as not to risk debasing our own moral instincts; if we got used to doling out bad treatment to robots who cried out with human voices, we might get used to doing it to flesh and blood people too.

Option four.  Sophia’s just an entertaining machine, not conscious at all; but that moral stuff is rubbish. It’s perfectly OK to treat her like a slave, to turn her off when we want, or put her through terrible ‘ordeals’ if it helps or amuses us. We know that inside her head the lights are off, no-one home: we might as well worry about dolls. You talk about debasing our moral instincts, but I don’t think treating puppets like people is a great way to go, morally. You surely wouldn’t switch trolleys to save even ten Sophias if it killed one human being: follow that out to its logical conclusion.

Option five. Sophia is a ghastly parody of human life and should be destroyed immediately. I’m not saying she’s actuated by demonic possession (although Satan is pretty resourceful), but she tempts us into diabolical errors about the unique nature of the human spirit.

No doubt there are other options; for me, at any rate, being obliged to choose one is a nightmare scenario. Merry Christmas!

Ted Honderich’s latest work Actual Consciousness is a massive volume. He has always been partial to advancing his argument through a comprehensive review (and rejection) of every other opinion on the subject in question. Here, that approach produces a hefty book which in practice is about the whole field of philosophy of consciousness. There is a useful video here at IAI of Ted grumpily failing to summarise the whole theory in the allotted time and confessing, with the same alarming frankness that characterised his autobiography, to wanting to be as famous as Bach or Chomsky, and not thinking he was going to be. If you want to see the whole thing you’ll have to sign up (free); but they do have a number of good discussions of consciousness.

The theory Honderich is advancing is a further version of the externalism which we discussed a while ago; that for you to be conscious is in some sense for something to exist (or to be real, hence the ‘actual’ in Actual Consciousness). At first sight this thesis has always seemed opaque to the point of wilful obscurity, and the simplest readings seem to make it either vacuous (for you to be conscious is for a state of consciousness to exist) or just evidently wrong (for you to be conscious is for the object of your awareness to exist). He means – he must mean – something subtler than that, and a few more clues can only be welcome.

First though, we survey the alternatives. Honderich suggests (and few would disagree) that the study of consciousness has been vastly complicated by differing or inadequate definitions. This has led philosophers to talk past each other or work themselves into confusions. Above all, Honderich thinks virtually everyone has at some point fallen into circularity, smuggling into their definitions terms that already include consciousness in one form or another.

He sets out five leading ideas: these are not actually the five parts into which he would carve consciousness himself (he would analyse it into three: perceptual, cognitive and affective consciousness) but these are the ideas he feels we need to address. They are: qualia, ‘something it is like for a thing to be that thing’, subjectivity, intentionality, and phenomenality. More normally these days we divide the field in two initially, and at first glance four of Honderich’s views look like the same thing from different angles. When there is ‘something it is like’, that’s the phenomenal aspect of experience as had by a subject and characterised by the presence of qualia. But let’s not be hasty.

Having reviewed briefly what various people have said about qualia, Honderich notes that one thing seems clear; that it is always conceived of as distinct from, and hence accompanied by, another form of consciousness. Some people certainly assert that qualia are the real essence of consciousness or at any rate of the interesting part of it; but it does seem to be true that no-one proposes conscious states that include qualia and nothing else. That in itself doesn’t amount to circularity, though.

The next leading idea is ‘something it is like’ to see red, or whatever. Nagel’s phrase is unhelpful but somehow powerfully persuasive. We all sort of know what it is getting at. Honderich notes that Nagel himself offered an improved version that leaves out the suggestion that a comparison is going on; to be conscious, in this version, is for there to be something that is how it is for you (to see red or whatever). What does this all really mean? Honderich suspects that it comes down to there being something it is like, or something that is how it is, for you to be conscious (of something red, eg), once again a case of circularity. I don’t really see it; it seems to me that Nagel offers an equation: consciousness is there being something it is like; Honderich pounces: Aha! But there being something it is like is being conscious! That just seems to be travelling back from the second term of the equation to the first, not showing that the second term requires or contains the first. I’m simplifying rather a lot, so perhaps I’ve missed something. But so far as I can see, while Honderich justly complains that the formula is uninformative, the only circularity is one he inserted himself.

Subjectivity for Honderich means the existence of a subject. The word, as he acknowledges, can often be used as more or less a synonym for one of the two senses already discussed: in fact I should say that that is the standard meaning. But it’s true that consciousness is tied up with the notion of an experiencing self or subject (and those who deny the existence of one are often sceptical about the other). Honderich suggests that it is implicit in the idea of a subject that the subject is conscious, and though we can raise quibbles over sleeping or anaesthetised subjects, he is surely on firmer ground in seeing circularity here. To define consciousness in terms of a subject is circular because to be a subject you have to be conscious. But nobody does that, or at least, no-one I can think of. It’s sort of accepted that you need to have your consciousness sorted out before you can have your conscious agent.

With intentionality we come on to something distinctly different; this is the quality of aboutness or directedness singled out by Brentano. Honderich bases his comments on Brentano’s account, which he quotes at full length. It’s only fair to note in passing that Brentano was not talking about consciousness; rather he asserted that intentionality was the mark of the mental; but there is obviously a connection and we might well try to argue that intentionality was constitutive of consciousness.

Honderich notes an intractable problem over the objects of intentionality; they don’t have to exist. We can think about imaginary or even absurd things just as easily as about real ones. But if we are not thinking about a real slipper when we think of Cinderella’s glass one, then surely we’re not really thinking about the actual one the dog is chewing in the corner, either; perhaps the real objects are just our ideas or images of the slipper, or whatever. If we don’t take that path, suggests Honderich, then this intentionality business is no great help; if we do suppose that thinking about things is thinking about a mental image, then we’re back with circularity because it would have to be a conscious image, wouldn’t it?

Would it? I’m not totally sure it would; wouldn’t the theory be that it becomes a conscious image when it’s an object of conscious thought, but not otherwise or in itself? Otherwise we seem to have a weird doubling up going on. But anyway, it’s too clear, in my opinion, that thinking about a thing is thinking about that thing, not thinking about an idea of it; we have to find some other way round the problem of non-existent objects of thought. So we’re left with the complaint that intentionality does not explain consciousness – and that’s true enough; it’s at least as much a part of the problem.

With phenomenality we’re back with a word that could be taken as meaning much the same as subjectivity, and referring to the same stuff as qualia or something it is like. Honderich draws on Ned Block’s much-cited distinction between access or a-consciousness and phenomenal or p-consciousness, and attacks David Chalmers for having said that qualia, subjectivity, phenomenality and so on are essentially different ways of talking about the same thing. I’m with Chalmers; yes, the different terms embody different ways of approaching the problem, but there’s little doubt that if we could crack one, translating the solution into terms of the other approaches would be relatively trivial. Oddly, instead of recapitulating the claimed important distinctions, which would seem the natural thing to do at this point, Honderich seems to argue that if Chalmers thinks all these things are the same thing, they must in fact all be examples of something more fundamental, in which case why doesn’t Chalmers talk about the fundamental thing?

If there are the grammatical or subtle differences between the terms for the phenomena, the things do make up ‘approximately the same class of phenomena’. What class is that? To speak differently, what are these things examples of ? In fact didn’t we have to have some idea of that in order to bring the examples together in the first place? What brings the different things together? What is this general fact of consciousness? It has to exist, doesn’t it? Chalmers, I’d say, has credit for bringing the things together, but he might have asked about the general fact, mightn’t he?

This is strange; to assert that a number of terms refer to the same thing is not necessarily to assert that they are all in fact yet another thing. My best guess is that Honderich wants to manoeuvre Chalmers into seeming circularity again, but if so I don’t think it comes off.

Honderich goes on to an extended review of the territory and what others have said, but I propose to cut to the chase. Cutting to the chase, by the way, is something Honderich himself is very bad at, or rather pathologically averse to. He has a style all his own, characterised by a constant backing away from saying anything directly; he prefers to mention a few things one might say in relation to this matter, some indeed that others have at times suggested he himself might not always have been committed to denial of, that is to say these considerations might be ones – not by any means to characterise exhaustively, but nevertheless to bring forward as previously hinted – perhaps to be felt to be most significantly indicated or at any rate we might choose, not yet to think, but to entertain the possibility, of considering as such. Sometimes you really want to kick him.

Anyway, to get to the point: Honderich is an externalist; he thinks your perception of x is something that happens out there where x is real and physical, not in your head. There is an extension to this to take care of those cases where we think about things that are imaginary or otherwise non-physical; in such cases the same thing is going on, it’s just that the objects perceived are representations in our mind. In a sense this is externalism simply redirected to objects that happen to be internal. Of course, how anything comes to be a mental representation is itself a non-trivial issue.

Honderich says that for you to be conscious of something is for something to be real, to exist. This puzzling or vacuous-seeming formula is underpinned by the eyebrow-raising idea of subjective physicality. This is like a kind of Philosopher’s Stone; it means that what we perceive can be both actual and physical in a perfectly normal way, yet subjective in the way required by consciousness. How can we possibly eat our cake this way and yet still have it? It’s kind of axiomatic that the actual qualities of physical things don’t depend on the observer (yes, I know in modern physics that’s a can of worms, but not one we need to open here), while subjective qualities absolutely do; my subjective impressions may be quite different to yours.

How is this trick to be pulled off?

The general answer to the question of what is actual with your perceptual consciousness, putting aside that in which it may issue immediately, is a part, piece or stage of a subjective physical world of several dependencies, out there in space, and nothing else whatever. Your being conscious now is exactly and nothing more than this severally-dependent fact external to you of a room’s existing…

It looks at first sight as if this talk of worlds may be the answer. The room exists subjectively for you and also physically, but in a world of its own, or of your own, which would explain how it can be different from the subjective experience of others; they have different worlds. This would be alarming, however, because it suggests a proliferation of worlds whose relationships would be problematic and whose ontology profligate. We’re talking some serious and bizarre metaphysics, of which there is no sign elsewhere.

I don’t think that is at all what Honderich has in mind. Instead I think we need to remember that he doesn’t mean by subjectivity what everyone else means. He just means there is a subject. So his description of consciousness comes down to this; that there is a real object of consciousness, whether in the world or in the brain, which is unproblematically physical; and there is a subject, also physical, who is conscious of the thing.

Is that it? It seems sort of underwhelming. But I fear that is the essence of it.  Helpfully, Honderich provides a table of his proposed structure:

[Table: Honderich’s proposed structure of consciousness – not reproduced here]

Yes, that seems to me to confirm the suggested interpretation.

So it kind of looks as if Honderich has used a confusingly non-standard definition and ended up with a theoretical position which honestly sheds little light on the real issue; yet these were the very problems he criticised in earlier approaches. I can’t deny that I have greatly simplified here and it might be that I missed the key somewhere in one of those many chapters – but frankly I’m not going back to look again.