Posts tagged ‘consciousness’

A better neurophenomenology, the answer to the Hard Problem? Kirchhoff and Hutto propose a slightly different way forward.

The Hard Problem, of course, is about reconciling the physical description of a conscious event with the way it feels from inside. This is the ‘explanatory gap’. Most of us these days are monists of one kind or another; we believe the world ultimately consists of one kind of thing, usually matter, without a second realm of spirits or other metaphysical entities on top. Some people would accordingly seek to reduce the mental to the physical, perhaps even eliminating the mental so that our monism can be tidy (I’m a messy monist myself). Neurophenomenology, as formulated by Varela and briefly described in Kirchhoff and Hutto’s paper, does not look for a reduction, merely an explanation.
It does this by putting aside any idea of representations or computations; instead it proposes a practical research programme in which introspective reports of experience are matched with scans or other physical investigations. By elucidating the structure of both experience and physical event, the project aims to show how the two sides of experience constrain each other.

This, though, doesn’t seem enough for Kirchhoff and Hutto. Researching the two sides of the matter together is fine, but how will it ever show constraints, or generate an explanation? It seems it will be doomed to merely exhibiting correlation. Moreover, rather than resolving the explanatory gap, this approach seems to consolidate it.
These are reasonable objections, but I don’t think it’s quite as hopeless as that. The aspiration must surely be that the exploration comes together by exhibiting, not just correlation, but an underlying identity of structure. We might hope that the physical structure of the visual cortex tells us something about our colour space and visual experience that matches the structure of our direct experience of colour, for example, in such a way that the mysterious quality of that experience is attenuated and eventually even dispelled. Other kinds of explanation might emerge. When I take off my glasses and look at the surface of a brightly lit swimming pool, I see a host of white circles, all the same size and filled with the suggestion of a moiré pattern, bobbing daintily about. In a pre-scientific era, this would have been hard to account for, but now I know it is entirely the result of some facts about the shape of my eyes and the lenses in them, and phenomenological worries don’t even get started. It could be that neurophenomenology can succeed in offering explanations good enough to remove the worries that currently exist. The great thing about it, of course, is that even if that hope is philosophically misplaced, elucidating the structure of experience from both ends is a very worthwhile project anyway, one that can surely only yield valuable new understanding.

However, what Kirchhoff and Hutto propose is that we go a little further and abolish the gap. Instead of affirming the separateness of the physical and the phenomenal, they suggest, we should recognise that they represent two different descriptions of a single thing.

That might seem a modest adjustment, but they also assert that the phenomenal character of experience actually arises not from the mere physics, but from the situation of that experience, taking place in an enactive, embodied context. So if we hold a book, we can see it; if we shut our eyes, we continue to feel it; but we also have a more complex engagement with it from our efforts to hold up what we know is a book, the feel of pages, and so on. There’s all sorts of stuff going on that isn’t the mere physical contact, and that’s what yields the character of the experience.

I see that, I think, but it’s a little odd. If we imagine floating in a sensory deprivation tank and gazing at a smooth, uniform red wall, we seem to be free of a lot of the context we’d normally have, and on this view it’s a bit hard to see where the phenomenal intensity would be coming from (perhaps from the remembered significance of red?). We might suspect that Kirchhoff and Hutto are smuggling in their phenomenal content along with the more complex experience they implicitly demand by requiring context, an illicit supplement that remains unexplained.

On this, why not let a thousand flowers grow; go ahead and develop explanations according to any exploratory project you prefer, and then we’ll have a look. Some of them might be good even if your underlying theory is wrong.
I think it is, incidentally. For me the explanatory gap is always misconstrued; the real gap is not between physics and phenomenology, it’s between theory and actuality, something that shouldn’t puzzle us, or at least not in the way it always does.

The Stanford Encyclopaedia of Philosophy is twenty years old. It gives me surprisingly warm feelings towards Stanford that this excellent free resource exists. It’s written by experts, continuously updated, and amazingly extensive. Long may it grow and flourish!

Writing an encyclopaedia is challenging, but an encyclopaedia of philosophy must take the biscuit. For a good encyclopaedia you need a robust analysis of the topics in the field so that they can be dealt with systematically, comprehensively, and proportionately. In philosophy there is never a consensus, even about how to frame the questions, never mind about what kind of answers might be useful. This must make it very difficult: do you try to cover the most popular schools of thought in an area? All the logically possible positions one might take up?  A purely historical survey? Or summarise what the landscape is really like, inevitably importing your own preconceptions?

I’ve seen people complain that the SEP is not very accessible to newcomers, and I think the problem is partly that the subject is so protean. If you read an article in the SEP, you’ll get a good view and some thought-provoking ideas; but what a noob looks for are a few pointers and landmarks. If I read a biography I want to know quickly about the subject’s main works, their personal life, their situation in relation to other people in the field, the name of their theory or school, and so on. Most SEP subject articles cannot give you this kind of standard information in relation to philosophical problems. There is a real chance that if you read an SEP article and then go and talk to professionals, they won’t really get what you’re talking about. They’ll look at you blankly and then say something like:

“Oh, yes, I see where you’re coming from, but you know, I don’t really think of it that way…”

It’s not because the article you read was bad, it’s because everyone has a unique perspective on what the problem even is.

Let’s look at Consciousness. The content page has:

consciousness (Robert Van Gulick)

  • animal (Colin Allen and Michael Trestman)
  • higher-order theories (Peter Carruthers)
  • and intentionality (Charles Siewert)
  • representational theories of (William Lycan)
  • seventeenth-century theories of (Larry M. Jorgensen)
  • temporal (Barry Dainton)
  • unity of (Andrew Brook and Paul Raymont)

All interesting articles, but clearly not a systematic treatment based on a prior analysis. It looks more like the set of articles that just happened to get written with consciousness as part of the subject. Animal consciousness, but no robot consciousness? Temporal consciousness, but no qualia or phenomenal consciousness? But I’m probably looking in the wrong place.

In Robert Van Gulick’s main article we have something that looks much more like a decent shot at a comprehensive overview, but though he’s done a good job it won’t be a recognisable structure to anyone who hasn’t read this specific article. I really like the neat division into descriptive, explanatory, and functional questions; it’s quite helpful and illuminating: but you can’t rely on anyone recognising it (next time you meet a professor of philosophy, ask him: if we divide the problems of consciousness into three, and the first two are descriptive and explanatory, what would the third be? Maybe he’ll say ‘Functional’, but maybe he’ll say ‘Reductive’ or something else – ‘Intentional’ or ‘Experiential’; I’m pretty sure he’ll need to think about it). Under ‘Concepts of Consciousness’ Van Gulick has ‘Creature Consciousness’: our noob would probably go away imagining that this is a well-known topic which can be mentioned in confident expectation of the implications being understood. Alas, no: I’ve read quite a few books about consciousness and can’t immediately call to mind any other substantial reference to ‘Creature Consciousness’; I’m pretty sure that unless you went on to explain that you were differentiating it from ‘State Consciousness’ and ‘Consciousness as an Entity’, you might be misunderstood.

None of this is meant as a criticism of the piece: Van Gulick has done a great job on most counts (the one thing I would really fault is that the influence of AI in reviving the topic and promoting functionalist views is, I think, seriously underplayed). If you read the piece you  will get about as good a view of the topic as that many words could give you, and if you’re new to it you will run across some stimulating ideas (and some that will strike you as ridiculous). But when you next read a paper on philosophy of mind, you’ll still have to work out from scratch how the problem is being interpreted. That’s just the way it is.

Does that mean philosophy of mind never gets anywhere? No, I really don’t think so, though it’s outstandingly hard to provide proof of progress. In science we hope to boil down all the hypotheses to a single correct theory: in philosophy perhaps we have to be happy that we now have more answers (and more problems) than ever before.

And the SEP has got most of them! Happy Birthday!

The new film Self/less is based around the transfer of consciousness. An old man buys a new body to transfer into, and then finds that, contrary to what he was told, it wasn’t grown specially: there was an original tenant who, moreover, isn’t really gone. I understand that this is not a film that probes the metaphysics of the issue very deeply; it’s more about fight scenes; but the interesting thing is how readily we accept the idea of transferred consciousness.
In fact, it’s not at all a new idea; if memory serves, H.G. Wells wrote a short story on a similar theme: a fit young man with no family is approached by a rich old man in poor health who apparently wants to leave him all his fortune; then he finds himself transferred unwillingly to the collapsing ancient body, while the old man makes off in his fresh young one. In Wells’ version the twist is that the old man gets killed in a chance traffic accident, thereby dying before his old body does anyway.
The thing is, how could a transfer possibly work? In Wells’ story it’s apparently done with drugs, which is mysterious; more normally there’s some kind of brain-transfer helmet thing. It’s pretty much as though all you needed to do was run an EEG and then reverse the polarity. That makes no sense. I mean, scanning the brain in sufficient detail is mind-boggling to begin with, but the idea that you could then use much the same equipment to change the content of the mind is in another league of bogglement. Weather satellites record the meteorology of the world, but you cannot use them to reset it. This is why uploading your mind to a computer, while highly problematic, is far easier to entertain than transferring it to another biological body.
The big problem is that part of the content of the brain is, in effect, structural. It depends on which neurons are attached to which (and for that matter, which neurons there are), and on the strength and nature of that linkage. It’s true that neural activity is important too, and we can stimulate that electrically, even with induction gear that resembles the SF cliché; but we can’t actually restructure the brain that way.
The intuition that transfers should be possible perhaps rests on an idea that the brain is, as it were, basically hardware, and consciousness is essentially software; but it isn’t really like that. You can’t run one person’s mind on another’s brain.
There is in fact no reason to suppose that there’s much of a read-across between brains: they may all be intractably unique. We know that there tends to be a similar regionalisation of functions in the brain, but there’s no guarantee that your neural encoding of ‘grandmother’ resembles mine or is similarly placed. Worse, it’s entirely possible that the ‘meaning’ of neural assemblages differs according to context and which other structures are connected, so that even if I could identify my ‘grandmother’ neurons, and graft them in, in place of yours, they would have a different significance, or none.
Perhaps we need a more sophisticated and bespoke approach. First we thoroughly decipher both brains, and learn how their own idiosyncratic encodings work. Then we work out a translator. This is a task of unimaginable complexity and particularity, but it’s not obviously impossible in principle. I think it’s likely that for each pair of brains you would need a unique translator: a universal one seems such an heroic aspiration that I really doubt its viability: a universal mind map would be an achievement of such interest and power that merely transferring minds would seem like time-wasting games by comparison.
I imagine that even once a translator had been achieved, it would normally only achieve partial success. There would be a limit to how far you could go with nano-bot microsurgery, and there might be certain inherent limitations: certain ideas, certain memories, might just be impossible to accommodate in the new brain because of their incompatibility with structural or conceptual features that were too deeply embedded to change. The task you were undertaking would be like the job of turning Pride and Prejudice into Don Quixote simply by moving the words around and perhaps in a few key cases allowing yourself one or two anagrams: the result might be recognisable, but it wouldn’t be perfect. The transfer recipient would believe themselves to be Transferee, but they would have strange memory gaps and certain cognitive deficits, perhaps not unlike Alzheimer’s, as well as artefacts: little beliefs or tendencies that existed neither in Transferee nor Recipient, but were generated incidentally through the process of reconstruction.
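Purely as a toy illustration of that picture (everything here is invented: the concepts, the codes, the conceit that an encoding can be modelled as a lookup table), the pair-specific, lossy translation I have in mind might look like this:

```python
# A toy, hypothetical 'mind translator' between two brains, each modelled
# as an idiosyncratic mapping from concepts to arbitrary neural codes.
# Real encodings would be nothing like this simple; the point is only
# that the translator is specific to the pair, and lossy.

transferee_encoding = {"grandmother": "A17", "bicycle": "B03", "regret": "C22"}
recipient_encoding = {"grandmother": "X90", "bicycle": "Y41"}  # no slot for 'regret'

def translate(source: dict, target: dict) -> tuple[dict, list]:
    """Re-express the source brain's contents in the target's own codes.

    Anything the target brain has no way of encoding becomes a gap:
    the memory-gap / cognitive-deficit case described above.
    """
    translated, gaps = {}, []
    for concept, code in source.items():
        if concept in target:
            translated[concept] = target[concept]  # re-encoded, not copied
        else:
            gaps.append(concept)
    return translated, gaps

translated, gaps = translate(transferee_encoding, recipient_encoding)
print(translated)  # {'grandmother': 'X90', 'bicycle': 'Y41'}
print(gaps)        # ['regret']
```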
It’s a much more shadowy and unappealing picture, and it makes rather clearer the real killer: that though Recipient might come to resemble Transferee, they wouldn’t really be them.
In the end, we’re not data, or a program; we’re real and particular biological entities, and as such our ontology is radically different. I said above that the plausibility of transfers comes from thinking of consciousness as data, which I think is partly true: but perhaps there’s something else going on here; a very old mental habit of thinking of the soul as detachable and transferable. This might be another case where optimists about the capacity of IT are unconsciously much less materialist than they think.

An interesting paper in Behavioural and Brain Sciences from Morsella, Godwin, Jantz, Krieger, and Gazzaley, reported here: an accessible pdf draft version is here.

It’s quite a complex, thoughtful paper, but the gist is clearly that consciousness doesn’t really do that much. The authors take the view that many functions generally regarded as conscious are in fact automatic and pre- or un-conscious: what consciousness hosts is not the process but the results. It looks to consciousness as though it’s doing the work, but really it isn’t.

In itself this is not a new view, of course. We’ve heard of other theories that base their interpretation on the idea that consciousness only deals with small nuggets of information fed to it by unconscious processes. Indeed, as the authors acknowledge, some take the view that consciousness does nothing at all: that it is an epiphenomenon, a causal dead end, adding no more to human behaviour than the whistle adds to the locomotive.

Morsella et al don’t go that far. In their view we’re lacking a clear idea of the prime function of consciousness; their Passive Frame Theory holds that the function is to constrain and direct skeletal muscle output, thereby yielding adaptive behaviour. I’d have thought quite a lot of unconscious processes, even simple reflexes, could be said to do that too; philosophically, I think we’d look for a bit more clarity about the characteristic ways in which consciousness, as opposed to instinct or other unconscious urges, influences behaviour; but perhaps I’m nit-picking.

The authors certainly offer explanation as to what consciousness does. In their view, well-developed packages are delivered to consciousness from various unconscious functions. In consciousness these form a kind of combinatorial jigsaw, very regularly refreshed in a conscious/unconscious cycle; the key thing is that these packages are encapsulated and cannot influence each other. This is what distinguishes the theory from the widely popular idea of a Global Workspace, originated by Bernard Baars; no work is done on the conscious contents while they’re there; they just sit until refreshed or replaced.
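As a way of checking my own reading, here is the architecture reduced to a toy sketch (the modules, contents and refresh loop are my invention, not the authors’ model):

```python
# Toy sketch of Passive Frame Theory as I read it: unconscious modules
# deliver finished, encapsulated packages; the conscious 'frame' merely
# juxtaposes them unmodified; later unconscious processing consumes the
# whole frame to settle skeletal muscle output.

def vision():  return {"source": "vision", "content": "lines look bent"}
def hunger():  return {"source": "hunger", "content": "eat now"}
def memory():  return {"source": "memory", "content": "on a diet"}

modules = [vision, hunger, memory]

def conscious_frame():
    # Packages sit side by side; nothing in here edits them.
    return tuple(m() for m in modules)

def skeletomotor(frame):
    # A later unconscious process reads the juxtaposed contents at once.
    contents = {p["content"] for p in frame}
    return "don't eat" if "on a diet" in contents else "eat"

for _ in range(3):                 # the frame is regularly refreshed
    frame = conscious_frame()
print(skeletomotor(frame))         # "don't eat": hunger persists, but loses
```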

The idea of encapsulation is made plausible by various examples. When we recognise an illusion, we don’t stop experiencing it; when we choose not to eat, we don’t stop feeling hungry, and so on. It’s clearly the case that sometimes this happens, but can we say that there are really no cases where one input alters our conscious perception of another? I suspect that any examples we might come up with would be deemed by Morsella et al to occur in pre-conscious processing and only seem to happen in consciousness: the danger with that is that the authors might end up simply disqualifying all counter-examples and thereby rendering their thesis unfalsifiable. It would help if we could have a sharper criterion of what is, and isn’t, within consciousness.

As I say, the authors do hold that consciousness influences behaviour, though not by its own functioning; instead it does so by, in effect, feeding other unconscious functions. An analogy with the internet is offered: the net facilitates all kinds of functions; auctions, research, social interaction, filling the world with cat pictures, and so on; but it would be quite wrong to say that in itself it does any of these things.

That’s OK, but it seems to delegate an awful lot of things that we might have regarded as conscious cognitive activity to these later unconscious functions, and it would help to have more of an account of how they do their thing and how consciousness contrives to facilitate them. It seems it merely brings things together, but how does that help? If they’re side by side but otherwise unprocessed, I’m not sure what the value of merely juxtaposing them (in some sense which is itself a little unclear) amounts to.

So I think there’s more to do if Passive Frame Theory is going to be a success; but it’s an interesting start.

The recent short NYT series on robots has a dying fall. The articles were framed as an investigation of how robots are poised to change our world, but the last piece is about the obsolescence of the Aibo, Sony’s robot dog. Once apparently poised to change our world, the Aibo is no longer made and now Sony will no longer supply spare parts, meaning the remaining machines will gradually cease to function.
There is perhaps a message here about the over-selling and under-performance of many ambitious AI projects, but the piece focuses instead on the emotional impact that the ‘death’ of the robot dogs will have on some fond users. The suggestion is that the relationship these owners have with their Aibo is as strong as the one you might have with a real dog. Real dogs die, of course, so though it may be sad, that’s nothing new. Perhaps the fact that the Aibos are ‘dying’ as the result of a corporate decision, and could in principle have been immortal, makes it worse? Actually I don’t know why Sony or some third-party entrepreneur doesn’t offer a program to virtualise your Aibo, uploading it into software where you can join it after the Singularity (I don’t think there would really be anything to upload, but hey…).
On the face of it, the idea of having a real emotional relationship with an Aibo is a little disturbing. Aibos are neat pieces of kit, designed to display ‘emotional’ behaviour, but they are not that complex (many orders of magnitude less complex than a dog, surely), and I don’t think there is any suggestion that they have any real awareness or feelings (even if you think thermostats have vestigial consciousness, I don’t think an Aibo would score much higher). If people can have fully developed feelings for these machines, it strongly suggests that their feelings for real dogs have nothing to do with the dog’s actual mind. The relationship is essentially one-sided; the real dog provides engaging behaviour, but real empathy is entirely absent.
More alarming, it might be thought to imply that human relationships are basically the same. Our friends, our loved ones, provide stimuli which tickle us the right way; we enjoy a happy congruence of behaviour patterns, but there is no meeting of minds, no true understanding. What’s love got to do with it, indeed?
Perhaps we can hope that Aibo love is actually quite distinct from dog love. The people featured in the NYT video are Japanese, and it is often said that Japanese culture is less rigid about the distinction between animate and inanimate than western ideas. In Christianity, material things lack souls and any object that behaves as if it had one may be possessed or enchanted in ways that are likely to be unnatural and evil. In Shinto, the concept of kami extends to anything important or salient, so there is nothing unnatural or threatening about robots. But while that might validate the idea of an Aibo funeral, it does not precisely equate Aibos and real dogs.
In fact, some of the people in the video seem mainly interested in posing their Aibos for amusing pictures or video, something they could do just as well with deactivated puppets. Perhaps in reality Japanese culture is merely more relaxed about adults amusing themselves with toys?
Be that as it may, it seems that for now the era of robot dogs is already over…

Dan Dennett famously based his view of consciousness on the intentional stance. According to him the attribution of intentions and other conscious states is a most effective explanatory strategy when applied to human beings, but that doesn’t mean consciousness is a mysterious addition to physics. He compares the intentions we attribute to people with centres of gravity, which also help us work out how things will behave, but are clearly not a new set of real physical entities.

Whether you like that idea or not, it’s clear that the human brain is strongly predisposed towards attributing purposes and personality to things. Now a new study by Spunt, Meyer and Lieberman using fMRI provides evidence that even when the brain is ostensibly not doing anything, it is in effect ready to spot intentions.

This is based on findings that similar regions of the brain are active both in a rest state and when making intentional (but not non-intentional) judgements, and that activity in the pre-frontal cortex of the kind observed when the brain is at rest is also associated with greater ease and efficiency in making intentional attributions.
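To make the logic of that first finding concrete, here is a toy version of the overlap question (random numbers standing in for activation maps; nothing here reflects the study’s actual data or methods):

```python
# Toy overlap check: are the 'voxels' active at rest largely the same
# ones active during intentionality judgements? A Dice coefficient of
# 1.0 would mean identical active regions, 0.0 completely disjoint ones.
import numpy as np

rng = np.random.default_rng(0)
rest = rng.normal(size=1000)                      # rest-state map
task = rest + rng.normal(scale=0.5, size=1000)    # correlated task map

rest_active = rest > 1.0                          # threshold each map
task_active = task > 1.0

dice = 2 * np.sum(rest_active & task_active) / (rest_active.sum() + task_active.sum())
print(f"Dice overlap: {dice:.2f}")                # substantial, by construction
```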

There’s always some element of doubt about how ambitious we can be in interpreting what fMRI results are telling us, and so far as I can see it’s possible in principle that if we had a more detailed picture than fMRI can provide we might see more significant differences between the rest state and the attribution of intentions; but the researchers cite evidence that supports the view that broad levels of activity are at least a significant indicator of general readiness.

You could say that this tells us less about intentionality and more about the default state of the human mind. Even when at rest, on this showing, the brain is sort of looking out for purposeful events. In a way this supports the idea that the brain is never naturally quiet, and explains why truly emptying the mind for purposes of deep meditation and contemplation might require deliberate preparation and even certain mental disciplines.

So far as consciousness itself is concerned, I think the findings lend more support to the idea that having ‘theory of mind’ is an essential part of having a mind: that is, that being able to understand the probable point of view and state of knowledge of other people is a key part of having full human-style consciousness yourself.

There’s obviously a bit of a danger of circularity there, and I’ve never been sure it’s a danger that Dennett for one escapes. I don’t know how you attribute intentions to people unless you already know what intentions are. The normal expectation would be that I can do that because I have direct knowledge of my own intentions, so all I need to do is hypothesise that someone is thinking the way I would think if I were in their shoes. In Dennett’s theory, me having intentions is really just more attribution (albeit self-attribution), so we need some other account of how it all gets started (apparently the answer is that we assume optimal intentions in the light of assumed goals).

Be that as it may, the idea that consciousness involves attributing conscious states to ourselves is one that has a wider appeal and it may shed a slightly different light on the new findings. It might be that the base activity identified by the study is not so much a readiness to attribute intentions, but a continuous second-order contemplation of our own intentions, and an essential part of normal consciousness. This wouldn’t mean the paper’s conclusions are wrong, but it would suggest that it’s consciousness itself that makes us more ready to attribute intentions.

Hard to test that one because unconscious patients would not make co-operative subjects…

Over at Brains Blog Uriah Kriegel has been doing a series of posts (starting here) on some themes from his book The Varieties of Consciousness, and in particular his identification of six kinds of phenomenology.

I haven’t read the book (yet) and there may be important bits missing from the necessarily brief account given in the blog posts, but it looks very interesting. Kriegel’s starting point is that we probably launch into explaining consciousness too quickly, and would do well to spend a bit more time describing it first. There’s a lot of truth in that; consciousness is an extraordinarily complex and elusive business, yet phenomenology remains in a pretty underdeveloped state. However, in philosophy the borderline between describing and explaining is fuzzy; if you’re describing owls you can rely on your audience knowing about wings and beaks and colouration; in philosophy it may be impossible to describe what you’re getting at without hacking out some basic concepts which can hardly help but be explanatory. With that caveat, it’s a worthy project.

Part of the difficulty of exploring phenomenology may come from the difficulty of reconciling differences in the experiences of different reporters. Introspection, the process of examining our own experience, is irremediably private, and if your conclusions are different from mine, there’s very little we can do about it other than shout at each other. Some have also taken the view that introspection is radically unreliable in any case, a task like trying to watch the back of your own head; the Behaviourists, of course, concluded that it was a waste of time talking about the contents of consciousness at all: a view which hasn’t completely disappeared.

Kriegel defends introspection, albeit in a slightly half-hearted way. He rightly points out that we’ve tacitly relied on it to support all the discoveries and theorising which has been accomplished in recent decades. He accepts that we cannot any longer regard it as infallible, but he’s content if it can be regarded as more likely right than wrong.

With this mild war-cry, we set off on the exploration. There are lots of ways we can analyse consciousness, but what Kriegel sets out to do is find the varieties of phenomenal experience. He’s come up with six, but it’s a tentative haul and he’s not asserting that this is necessarily the full set. The first two phenomenologies, taken as already established, are the perceptual and the algedonic (pleasure/pain); to these Kriegel adds: cognitive phenomenology, “conative” phenomenology (to do with action and intention), the phenomenology of entertaining an idea or a proposition (perhaps we could call it ‘considerative’, though Kriegel doesn’t), and the phenomenology of imagination.

The idea that there is conative phenomenology is a sort of cousin of the idea of an ‘executive quale’ which I have espoused: it means there is something it is like to desire, to decide, and to intend. Kriegel doesn’t spend any real effort on defending the idea that these things have phenomenology at all, though it seems to me (introspectively!) that sometimes they do and sometimes they don’t. What he is mainly concerned to do is establish the distinction between belief and desire. In non-phenomenal terms these two are sort of staples of the study of intentionality: Bel and Des, the old couple. One way of understanding the difference is in terms of ‘direction of fit’, a concept that goes back to J.L. Austin. What this means is that if there’s a discrepancy between your beliefs and the world, then you’d better change your beliefs. If there’s a discrepancy  between your desires and the world, you try to change the world (usually: I think Andy Warhol for one suggested that learning to like what was available was a better strategy, thereby unexpectedly falling into a kind of agreement with some religious traditions that value acceptance and submission to the Divine Will).

Kriegel, anyway, takes a different direction, characterising the difference in terms of phenomenal presentation. What we desire is presented to us as good; what we believe is presented as true. This approach opens the way to a distinction between a desire and a decision: a desire is conditional (if circumstances allow, you’ll eat an ice-cream) whereas a decision is categorical (you’re going to eat an ice-cream). This all works quite well and establishes an approach which can handily be applied to other examples; if  we find that there’s presentation-as-something different going on we should suspect a unique phenomenology. (Are we perhaps straying here into something explanatory instead of merely descriptive? I don’t think it matters.) I wonder a bit about whether things we desire are presented to us as good. I think I desire some things that don’t seem good at all except in the sense that they seem desirable. That’s not much help, because if we’re reduced to saying that when I desire something it is presented to me as desirable we’re not saying all that much, especially since the idea of presentation is not particularly clarified. I have no doubt that issues like this are explored more fully in the book.
Kriegel moves on to consider the case of emotion: does it have a unique and irreducible phenomenology? If something we love is presented to us as good, then we’re back with the merely conative; and Kriegel doesn’t think presentation as beautiful is going to work either (partly because of negative cases, though I don’t see that as an insoluble problem myself; if we can have algedonia, the combined quality of pain or pleasure, we can surely have an aesthetic quality that combines beauty and ugliness). In the end he suspects that emotion is about presentation as important, but he recognises that this could be seen as putting the cart before the horse; perhaps emotion directs our attention to things and what gets our attention seems to be important. Kriegel finds it impossible to decide whether emotion has an independent phenomenology and gives the decision by default in favour of the more parsimonious option, that it is reducible to other phenomenologies.
On that, it may be that taking all emotion together was just too big a bite. It seems quite likely to me that different emotions might have different phenomenologies, and perhaps tackling it that way would yield more positive results.
Anyway, a refreshing look at consciousness.

I finally saw Ex Machina, which everyone has been telling me is the first film about artificial intelligence you can take seriously. Competition in that area is not intense, of course: many films about robots and conscious computers are either deliberately absurd or treat the robot as simply another kind of monster. Even the ones that cast the robots as characters in a serious drama are essentially uninterested in their special nature and use them as another kind of human, or at best to make points about humanity. But yes: this one has a pretty good grasp of the issues about machine consciousness and even presents some of them quite well, up to and including Mary the Colour Scientist. (Spoilers follow.)

If you haven’t seen it (and I do recommend it), the core of the story is a series of conversations between Caleb, a bright but naive young coder, and Ava, a very female robot. Caleb has been told by Nathan, Ava’s billionaire genius creator, that these conversations are a sort of variant Turing Test. Of course in the original test the AI was a distant box of electronics: here she’s a very present and superficially accurate facsimile of a woman. (What Nathan has achieved with her brain is arguably overshadowed by the incredible engineering feat of the rest of her body. Her limbs achieve wonderful fluidity and power of movement, yet they are transparent and we can see that it’s all achieved with something not much bigger than a large electric cable. Her innards are so economical there’s room inside for elegant empty spaces and decorative lights. At one point Nathan is inevitably likened to God, but on anthropomorph engineering design he seems to leave the old man way behind.)

Why does she have gender? Caleb asks, and is told that without sex humans would never have evolved consciousness; it’s a key motive, and hell, it’s fun.  In story terms making Ava female perhaps alludes to the origin of the Turing Test in the Imitation Game, which was a rather camp pastime about pretending to be female played by Turing and his friends. There are many echoes and archetypes in the film; Bluebeard, Pygmalion, Eros and Psyche to name but three; all of these require that Ava be female. If I were a Jungian I’d make something of that.

There’s another overt plot reason, though; this isn’t really a test to determine whether Ava is conscious, it’s about whether she can seduce Caleb into helping her escape. Caleb is a naive girl-friendless orphan; she has been designed not just as a female but as a match for Caleb’s preferred porn models (as revealed in the search engine data Nathan uses as his personal research facility – he designed the search engine after all). What a refined young Caleb must be if his choice of porn revolves around girls with attractive faces (on second thoughts, let’s not go there).

We might suspect that this test is not really telling us about Ava, but about Caleb. That, however, is arguably true of the original Turing Test too.  No output from the machine can prove consciousness; the most brilliant ones might be the result of clever tricks and good luck. Equally, no output can prove the absence of consciousness. I’ve thought of entering the Loebner prize with Swearbot, which merely replies to all input with “Shut the fuck up” – this vividly resembles a human being of my acquaintance.
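For what it’s worth, the entire ‘architecture’ would fit in a few lines (a sketch, obviously; the Loebner entry remains hypothetical):

```python
# Swearbot: a complete chatbot. Any appearance of consciousness in the
# output is supplied entirely by the judge.

def swearbot(utterance: str) -> str:
    return "Shut the fuck up"

if __name__ == "__main__":
    try:
        while True:
            print(swearbot(input("> ")))
    except (EOFError, KeyboardInterrupt):
        pass
```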

There is no doubt that the human brain is heavily biased in favour of recognising things as human. We see faces in random patterns and on machines; we talk to our cars and attribute attitudes to plants. No doubt this predisposition made sense when human beings were evolving. Back then, the chances of coming across anything that resembled a human being without it being one were low, and given that an unrecognised human might be a deadly foe or a rare mating opportunity the penalties for missing a real one far outweighed those for jumping at shadows or funny-shaped trees now and then.
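One way to make that asymmetry precise is a standard expected-cost calculation (schematic, of course; no actual Pleistocene costs are on offer). If $p$ is the probability that the ambiguous shape really is a person, reacting pays whenever

\[
p\,C_{\text{miss}} \;>\; (1-p)\,C_{\text{false alarm}}
\quad\Longleftrightarrow\quad
p \;>\; \frac{C_{\text{false alarm}}}{C_{\text{miss}} + C_{\text{false alarm}}},
\]

and when $C_{\text{miss}}$ (a deadly foe, a lost mating opportunity) dwarfs $C_{\text{false alarm}}$ (a wasted startle), the threshold probability is tiny: jumping at the occasional funny-shaped tree is the rational policy.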

Given all that, setting yourself the task of getting a lonely young human male romantically interested in something not strictly human is perhaps setting the bar a bit low. Naked shop-window dummies have pulled off this feat. If I did some reprogramming so that the standard utterance was a little dumb-blonde laugh followed by “Let’s have fun!” I think even Swearbot would be in with a chance.

I think the truth is that to have any confidence about an entity being conscious, we really need to know something about how it works. For human beings the necessary minimum is supplied by the fact that other people are constituted much the same way as I am and had similar origins, so even though I don’t know how I work, it’s reasonable to assume that they are similar. We can’t generally have that confidence with a machine, so we really need to know both roughly how it works and – bit of a stumper this – how consciousness works.

Ex Machina doesn’t have any real answers on this, and indeed doesn’t really seek to go much beyond the ground that’s already been explored. To expect more would probably be quite unreasonable; it means though, that things are necessarily left rather ambiguous.

It’s a shame in a way that Ava resembles a real woman so strongly. She wants to be free (why would an AI care, and why wouldn’t it fear the outside world as much as desire it?), she resents her powerlessness; she plans sensibly and even manipulatively and carries on quite normal conversations. I think there is some promising scope for a writer in the oddities that a genuinely conscious AI’s assumptions and reasoning would surely betray, but it’s rarely exploited; to be fair Ex Machina has the odd shot, notably Ava’s wish to visit a busy traffic intersection, which she conjectures would be particularly interesting; but mostly she talks like a clever woman in a cell. (Actually too clever: in that respect not too human).

At the end I was left still in doubt. Was the take-away that we’d better start thinking about treating AIs with the decent respect due to a conscious being? Or was it that we need to be wary of being taken in by robots that seem human, and even sexy, but in truth are dark and dead inside?

Bernardo Kastrup has some marvellous invective against AI engineers in this piece…

The computer engineer’s dream of birthing a conscious child into the world without the messiness and fragility of life is an infantile delusion; a confused, partial, distorted projection of archetypal images and drives. It is the expression of the male’s hidden aspiration for the female’s divine power of creation. It represents a confused attempt to transcend the deep-seated fear of one’s own nature as a living, breathing entity condemned to death from birth. It embodies a misguided and utterly useless search for the eternal, motivated only by one’s amnesia of one’s own true nature. The fable of artificial consciousness is the imaginary band-aid sought to cover the engineer’s wound of ignorance.

I have been this engineer.

I think it’s untrue, but you don’t have to share the sentiment to appreciate the splendid rhetoric.

Kastrup distinguishes intelligence, which is a legitimate matter of inputs, outputs and the functions that connect them, from consciousness, the true what-it-is-likeness of subjectivity. In essence he just doesn’t see how setting up functions in a machine can ever touch the latter.

Not that Kastrup has a closed mind; he speaks approvingly of Pentti Haikonen’s proposed architecture; he just doesn’t think it works. As Kastrup sees it, Haikonen’s network merely gathers together sparks of consciousness: it then does a plausible job of bringing them together to form more complex kinds of cognition, but in Kastrup’s eyes it assumes that consciousness is there to be gathered in the first place: that it exists out there in tiny parcels amenable to this kind of treatment. There is in fact, he thinks, absolutely no reason to think that this kind of panpsychism is true: no reason to think that rocks or drops of water have any kind of conscious experience at all.

I don’t know whether that is the right way to construe Haikonen’s project (I doubt whether gathering experiential sparks is exactly what Haikonen supposed he was about). Interestingly, though Kastrup is against the normal kind of panpsychism (if the concept of  ‘normal panpsychism’ is admissible), his own view is essentially a more unusual variety.

Kastrup considers that we’re dealing with two aspects here; internal and external; our minds have both; the external is objective, the internal represents subjectivity. Why wouldn’t the world also have these two aspects? (Actually it’s hard to say why anything should have them, and we may suspect that by taking it as a given we’re in danger of smuggling half the mystery out of the problem, but let that pass.) Kastrup takes it as natural to conclude that the world as a whole must indeed have the two aspects (I think at this point he may have inadvertently ‘proved’ the existence of God in the form of a conscious cosmos, which is regrettable, but again let’s go with it for now); but not parts of the world. The brain, we know, has experience, but the groups of neurons that make it up do not (do we actually know that?); it follows that while the world as a whole has an internal aspect, objects or entities within it generally do not.

Yet of course, the brain manages to have two aspects, which must surely be something to do with the structure of the brain? May we not suspect that whatever it is that allows the brain to have an internal aspect, a machine could in principle have it too? I don’t think Kastrup engages effectively with this objection; his view seems to be that metabolism is essential, though why that should be, and why machines can’t have some form of metabolism, we don’t know.

The argument, then, doesn’t seem convincing, but it must be granted that Kastrup has an original and striking vision: our consciousnesses, he suggests, are essentially like the ‘alters’ of Dissociative Identity Disorder, better known as Multiple Personality, in which several different people seem to inhabit a single human being. We are, he says, like the accidental alternate identities of the Universe (again, I think you could say, of God, though Kastrup clearly doesn’t want to).

As with Kastrup’s condemnation of AI engineering, I don’t think at all that he is right, but it is a great idea. It is probable that in his book-length treatments of these ideas Kastrup makes a stronger case than I have given him credit for above, but I do in any case admire the originality of his thinking, and the clarity and force with which he expresses it.

This is the last of four posts about key ideas from my book The Shadow of Consciousness, and possibly the weirdest; this time the subject is reality.

Last time I suggested that qualia – the subjective aspect of experiences that gives them their what-it-is-like quality – are just the particularity, or haecceity, of real experiences. There is something it is like to see that red because you’re really seeing it; you’re not just understanding the theory, which is a cognitive state that doesn’t have any particular phenomenal nature. So we could say qualia are just the reality of experience. No mystery about it after all.

Except of course there is a mystery – what is reality? There’s something oddly arbitrary about reality; some things are real, others are not. That cake on the table in front of me; it could be real as far as you know; or it could indeed be that the cake is a lie. The number 47, though, is quite different; you don’t need to check the table or any location; you don’t need to look for an example, or count to fifty; it couldn’t have been the case that there was no number 47. Things that are real in the sense we need for haecceity seem to depend on events for their reality. I will borrow some terminology from Meinong and call that dependent or contingent kind of reality existence, while what the number 47 has got is subsistence.

What is existence, then? Things that exist depend on events, I suggested; if I made a cake and put it on the table, it exists; if no-one did that, it doesn’t. Real things are part of a matrix of cause and effect, a matrix we could call history. Everything real has to have causes and effects. We can prove that, perhaps, by considering the cake’s continuing existence. It exists now because it existed a moment ago; if it had no causal effects, it wouldn’t be able to cause its own future reality, and it wouldn’t be here. If it wasn’t here, then it couldn’t have had preceding causes, so it didn’t exist in the past either. Ergo, things without causal effects don’t exist.
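Compressed into premise-and-conclusion form (my reconstruction; a sketch of the argument, not a rigorous proof), with $E_t(c)$ for ‘the cake exists at time $t$’:

\[
\begin{aligned}
\text{(P1)}\quad & E_t(c) \rightarrow \mathrm{Causes}\big(E_{t-\delta}(c),\, E_t(c)\big) && \text{present existence is an effect of past existence}\\
\text{(P2)}\quad & \mathrm{NoEffects}(c) \rightarrow \neg\,\mathrm{Causes}\big(E_{t-\delta}(c),\, E_t(c)\big) && \text{what has no effects cannot cause its own persistence}\\
\therefore\quad & \mathrm{NoEffects}(c) \rightarrow \neg\,E_t(c) && \text{things without causal effects don't exist}
\end{aligned}
\]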

Now that’s interesting because of course, one of the difficult things about qualia is that they apparently can’t have causal effects. If so, I seem to have accidentally proved that they don’t exist! I think things get unavoidably complex here. What I think is going on is that qualia in general, the having of a subjective side, is bestowed on things by being real, and that reality means causal efficacy. However, particular qualia are determined by the objective physical aspects of things; and it’s those that give specific causal powers. It looks to us as if qualia have no causal effects because all the particular causal powers have been accounted for in the objective physical account. There seems to be no role for qualia. What we miss is that without reality nothing has causal powers at all.

Let’s digress slightly to consider yet again my zombie twin. He’s exactly like me, except that he has no qualia, and that is supposed to show that qualia are over and above the account given by physics. Now according to me that is actually not possible, because if my zombie twin is real, and physically just the same, he must end up with the same qualia. However, if we doubt this possibility, David Chalmers and others invite us at least to accept that he is conceivable. Now we might feel that whether we can or can’t conceive of a thing is a poor indicator of anything, but leaving that aside I think the invitation to consider the zombie twin’s conceivability draws us towards thinking of a conceptual twin rather than a real one. Conceptual twins – imaginary, counterfactual, or non-existent ones – merely subsist; they are not real and so the issue of qualia does not arise. The fact that imaginary twins lack qualia doesn’t prove what it was meant to; properly understood it just shows that qualia are an aspect of real experience.

Anyway, are we comfortable with the idea of reality? Not really, because the buzzing complexity and arbitrariness of real things seems to demand an explanation. If I’m right about all real things necessarily being part of a causal matrix, they are in the end all part of one vast entity whose curious form should somehow be explicable.

Alas, it isn’t. We have two ways of explaining things. One is pure reason: we might be able to deduce the real world from first principles and show that it is logically necessary. Unfortunately pure reason alone is very bad at giving us details of reality; it deals only with Platonic, theoretical entities which subsist but do not exist. To tell us anything about reality it must at least be given a few real facts to work on; but when we’re trying to account for reality as a whole that’s just what we can’t provide.

The other kind of explanation we can give is empirical; we can research reality itself scientifically and draw conclusions. But empirical explanations operate only within the causal matrix; they explain one state of affairs in terms of another, usually earlier one. It’s not possible to account for reality itself this way.

It looks then, as if reality is doomed to remain at least somewhat mysterious, unless we somehow find a third way, neither empirical nor rational.

A rather downbeat note to end on, but sincere thanks to all those who have helped make the discussion so interesting so far…