Posts tagged ‘consciousness’

The recent short NYT series on robots has a dying fall. The articles were framed as an investigation of how robots are poised to change our world, but the last piece is about the obsolescence of the Aibo, Sony’s robot dog. Once apparently poised to change our world, the Aibo is no longer made and now Sony will no longer supply spare parts, meaning the remaining machines will gradually cease to function.
There is perhaps a message here about the over-selling and under-performance of many ambitious AI projects, but the piece focuses instead on the emotional impact that the ‘death’ of the robot dogs will have on some fond users. The suggestion is that the relationship these owners have with their Aibo is as strong as the one you might have with a real dog. Real dogs die, of course, so though it may be sad, that’s nothing new. Perhaps the fact that the Aibos are ‘dying’ as the result of a corporate decision, and could in principle have been immortal, makes it worse? Actually I don’t know why Sony or some third party entrepreneur doesn’t offer a program to virtualise your Aibo, uploading it into software where you can join it after the Singularity (I don’t think there would really be anything to upload, but hey…).
On the face of it, the idea of having a real emotional relationship with an Aibo is a little disturbing. Aibos are neat pieces of kit, designed to display ‘emotional’ behaviour, but they are not that complex (many orders of magnitude less complex than a dog, surely), and I don’t think there is any suggestion that they have any real awareness or feelings (even if you think thermostats have vestigial consciousness, I don’t think an Aibo would score much higher). If people can have fully developed feelings for these machines, it strongly suggests that their feelings for real dogs have nothing to do with the dog’s actual mind. The relationship is essentially one-sided; the real dog provides engaging behaviour, but real empathy is entirely absent.
More alarming, it might be thought to imply that human relationships are basically the same. Our friends, our loved ones, provide stimuli which tickle us the right way; we enjoy a happy congruence of behaviour patterns, but there is no meeting of minds, no true understanding. What’s love got to do with it, indeed?
Perhaps we can hope that Aibo love is actually quite distinct from dog love. The people featured in the NYT video are Japanese, and it is often said that Japanese culture is less rigid than Western traditions about the distinction between animate and inanimate. In Christianity, material things lack souls and any object that behaves as if it had one may be possessed or enchanted in ways that are likely to be unnatural and evil. In Shinto, the concept of kami extends to anything important or salient, so there is nothing unnatural or threatening about robots. But while that might validate the idea of an Aibo funeral, it does not precisely equate Aibos and real dogs.
In fact, some of the people in the video seem mainly interested in posing their Aibos for amusing pictures or video, something they could do just as well with deactivated puppets. Perhaps in reality Japanese culture is merely more relaxed about adults amusing themselves with toys?
Be that as it may, it seems that for now the era of robot dogs is already over…

Dan Dennett famously based his view of consciousness on the intentional stance. According to him the attribution of intentions and other conscious states is a most effective explanatory strategy when applied to human beings, but that doesn’t mean consciousness is a mysterious addition to physics. He compares the intentions we attribute to people with centres of gravity, which also help us work out how things will behave, but are clearly not a new set of real physical entities.

Whether you like that idea or not, it’s clear that the human brain is strongly predisposed towards attributing purposes and personality to things. Now a new study by Spunt, Meyer and Lieberman using fMRI provides evidence that even when the brain is ostensibly not doing anything, it is in effect ready to spot intentions.

This is based on findings that similar regions of the brain are active both in a rest state and when making intentional (but not non-intentional) judgements, and that activity in the pre-frontal cortex of the kind observed when the brain is at rest is also associated with greater ease and efficiency in making intentional attributions.

There’s always some element of doubt about how ambitious we can be in interpreting what fMRI results are telling us, and so far as I can see it’s possible in principle that if we had a more detailed picture than fMRI can provide we might see more significant differences between the rest state and the attribution of intentions; but the researchers cite evidence that supports the view that broad levels of activity are at least a significant indicator of general readiness.

You could say that this tells us less about intentionality and more about the default state of the human mind. Even when at rest, on this showing, the brain is sort of looking out for purposeful events. In a way this supports the idea that the brain is never naturally quiet, and explains why truly emptying the mind for purposes of deep meditation and contemplation might require deliberate preparation and even certain mental disciplines.

So far as consciousness itself is concerned, I think the findings lend more support to the idea that having ‘theory of mind’ is an essential part of having a mind: that is, that being able to understand the probable point of view and state of knowledge of other people is a key part of having full human-style consciousness yourself.

There’s obviously a bit of a danger of circularity there, and I’ve never been sure it’s a danger that Dennett for one escapes. I don’t know how you attribute intentions to people unless you already know what intentions are. The normal expectation would be that I can do that because I have direct knowledge of my own intentions, so all I need to do is hypothesise that someone is thinking the way I would think if I were in their shoes. In Dennett’s theory, me having intentions is really just more attribution (albeit self-attribution), so we need some other account of how it all gets started (apparently the answer is that we assume optimal intentions in the light of assumed goals).

Be that as it may, the idea that consciousness involves attributing conscious states to ourselves is one that has a wider appeal and it may shed a slightly different light on the new findings. It might be that the base activity identified by the study is not so much a readiness to attribute intentions, but a continuous second-order contemplation of our own intentions, and an essential part of normal consciousness. This wouldn’t mean the paper’s conclusions are wrong, but it would suggest that it’s consciousness itself that makes us more ready to attribute intentions.

Hard to test that one because unconscious patients would not make co-operative subjects…

Over at Brains Blog Uriah Kriegel has been doing a series of posts (starting here) on some themes from his book The Varieties of Consciousness, and in particular his identification of six kinds of phenomenology.

I haven’t read the book (yet) and there may be important bits missing from the necessarily brief account given in the blog posts, but it looks very interesting. Kriegel’s starting point is that we probably launch into explaining consciousness too quickly, and would do well to spend a bit more time describing it first. There’s a lot of truth in that; consciousness is an extraordinarily complex and elusive business, yet phenomenology remains in a pretty underdeveloped state. However, in philosophy the borderline between describing and explaining is fuzzy; if you’re describing owls you can rely on your audience knowing about wings and beaks and colouration; in philosophy it may be impossible to describe what you’re getting at without hacking out some basic concepts which can hardly help but be explanatory. With that caveat, it’s a worthy project.

Part of the difficulty of exploring phenomenology may come from the difficulty of reconciling differences in the experiences of different reporters. Introspection, the process of examining our own experience, is irremediably private, and if your conclusions are different from mine, there’s very little we can do about it other than shout at each other. Some have also taken the view that introspection is radically unreliable in any case, a task like trying to watch the back of your own head; the Behaviourists, of course, concluded that it was a waste of time talking about the contents of consciousness at all: a view which hasn’t completely disappeared.

Kriegel defends introspection, albeit in a slightly half-hearted way. He rightly points out that we’ve tacitly relied on it to support all the discoveries and theorising which have been accomplished in recent decades. He accepts that we cannot any longer regard it as infallible, but he’s content if it can be regarded as more likely right than wrong.

With this mild war-cry, we set off on the exploration. There are lots of ways we can analyse consciousness, but what Kriegel sets out to do is find the varieties of phenomenal experience. He’s come up with six, but it’s a tentative haul and he’s not asserting that this is necessarily the full set. The first two phenomenologies, taken as already established, are the perceptual and the algedonic (pleasure/pain); to these Kriegel adds: cognitive phenomenology, “conative” phenomenology (to do with action and intention), the phenomenology of entertaining an idea or a proposition (perhaps we could call it ‘considerative’, though Kriegel doesn’t), and the phenomenology of imagination.

The idea that there is conative phenomenology is a sort of cousin of the idea of an ‘executive quale’ which I have espoused: it means there is something it is like to desire, to decide, and to intend. Kriegel doesn’t spend any real effort on defending the idea that these things have phenomenology at all, though it seems to me (introspectively!) that sometimes they do and sometimes they don’t. What he is mainly concerned to do is establish the distinction between belief and desire. In non-phenomenal terms these two are sort of the staples of the study of intentionality: Bel and Des, the old couple. One way of understanding the difference is in terms of ‘direction of fit’, a concept that goes back to J.L. Austin. What this means is that if there’s a discrepancy between your beliefs and the world, then you’d better change your beliefs. If there’s a discrepancy between your desires and the world, you try to change the world (usually: I think Andy Warhol for one suggested that learning to like what was available was a better strategy, thereby unexpectedly falling into a kind of agreement with some religious traditions that value acceptance and submission to the Divine Will).

Kriegel, anyway, takes a different direction, characterising the difference in terms of phenomenal presentation. What we desire is presented to us as good; what we believe is presented as true. This approach opens the way to a distinction between a desire and a decision: a desire is conditional (if circumstances allow, you’ll eat an ice-cream) whereas a decision is categorical (you’re going to eat an ice-cream). This all works quite well and establishes an approach which can handily be applied to other examples; if  we find that there’s presentation-as-something different going on we should suspect a unique phenomenology. (Are we perhaps straying here into something explanatory instead of merely descriptive? I don’t think it matters.) I wonder a bit about whether things we desire are presented to us as good. I think I desire some things that don’t seem good at all except in the sense that they seem desirable. That’s not much help, because if we’re reduced to saying that when I desire something it is presented to me as desirable we’re not saying all that much, especially since the idea of presentation is not particularly clarified. I have no doubt that issues like this are explored more fully in the book.
Kriegel moves on to consider the case of emotion: does it have a unique and irreducible phenomenology? If something we love is presented to us as good, then we’re back with the merely conative; and Kriegel doesn’t think presentation as beautiful is going to work either (partly because of negative cases, though I don’t see that as an insoluble problem myself; if we can have algedonia, the combined quality of pain or pleasure, we can surely have an aesthetic quality that combines beauty and ugliness). In the end he suspects that emotion is about presentation as important, but he recognises that this could be seen as putting the cart before the horse; perhaps emotion directs our attention to things and what gets our attention seems to be important. Kriegel finds it impossible to decide whether emotion has an independent phenomenology and gives the decision by default in favour of the more parsimonious option, that it is reducible to other phenomenologies.
On that, it may be that taking all emotion together was just too big a bite. It seems quite likely to me that different emotions might have different phenomenologies, and perhaps tackling it that way would yield more positive results.
Anyway, a refreshing look at consciousness.

I finally saw Ex Machina, which everyone has been telling me is the first film about artificial intelligence you can take seriously. Competition in that area is not intense, of course: many films about robots and conscious computers are either deliberately absurd or treat the robot as simply another kind of monster. Even the ones that cast the robots as characters in a serious drama are essentially uninterested in their special nature and use them as another kind of human, or at best to make points about humanity. But yes: this one has a pretty good grasp of the issues about machine consciousness and even presents some of them quite well, up to and including Mary the Colour Scientist. (Spoilers follow.)

If you haven’t seen it (and I do recommend it), the core of the story is a series of conversations between Caleb, a bright but naive young coder, and Ava, a very female robot. Caleb has been told by Nathan, Ava’s billionaire genius creator, that these conversations are a sort of variant Turing Test. Of course in the original test the AI was a distant box of electronics: here she’s a very present and superficially accurate facsimile of a woman. (What Nathan has achieved with her brain is arguably overshadowed by the incredible engineering feat of the rest of her body. Her limbs achieve wonderful fluidity and power of movement, yet they are transparent and we can see that it’s all achieved with something not much bigger than a large electric cable. Her innards are so economical there’s room inside for elegant empty spaces and decorative lights. At one point Nathan is inevitably likened to God, but on anthropomorphic engineering design he seems to leave the old man way behind.)

Why does she have gender? Caleb asks, and is told that without sex humans would never have evolved consciousness; it’s a key motive, and hell, it’s fun. In story terms making Ava female perhaps alludes to the origin of the Turing Test in the Imitation Game, which was a rather camp pastime about pretending to be female played by Turing and his friends. There are many echoes and archetypes in the film: Bluebeard, Pygmalion, Eros and Psyche, to name but three; all of these require that Ava be female. If I were a Jungian I’d make something of that.

There’s another overt plot reason, though; this isn’t really a test to determine whether Ava is conscious, it’s about whether she can seduce Caleb into helping her escape. Caleb is a naive girl-friendless orphan; she has been designed not just as a female but as a match for Caleb’s preferred porn models (as revealed in the search engine data Nathan uses as his personal research facility – he designed the search engine after all). What a refined young Caleb must be if his choice of porn revolves around girls with attractive faces (on second thoughts, let’s not go there).

We might suspect that this test is not really telling us about Ava, but about Caleb. That, however, is arguably true of the original Turing Test too.  No output from the machine can prove consciousness; the most brilliant ones might be the result of clever tricks and good luck. Equally, no output can prove the absence of consciousness. I’ve thought of entering the Loebner prize with Swearbot, which merely replies to all input with “Shut the fuck up” – this vividly resembles a human being of my acquaintance.

There is no doubt that the human brain is heavily biased in favour of recognising things as human. We see faces in random patterns and on machines; we talk to our cars and attribute attitudes to plants. No doubt this predisposition made sense when human beings were evolving. Back then, the chances of coming across anything that resembled a human being without it being one were low, and given that an unrecognised human might be a deadly foe or a rare mating opportunity the penalties for missing a real one far outweighed those for jumping at shadows or funny-shaped trees now and then.

Given all that, setting yourself the task of getting a lonely young human male romantically interested in something not strictly human is perhaps setting the bar a bit low. Naked shop-window dummies have pulled off this feat. If I did some reprogramming so that the standard utterance was a little dumb-blonde laugh followed by “Let’s have fun!” I think even Swearbot would be in with a chance.
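(For what it’s worth, the whole of Swearbot’s ‘cognition’ would fit in a few lines. Here is a purely hypothetical sketch in Python, with invented names and no claim about how any real chatbot works, just to underline how little machinery is needed to produce output of this kind:)

```python
# Purely illustrative: Swearbot and its reprogrammed variant, as described above.

def swearbot(remark: str) -> str:
    """Reply to absolutely anything with the same brusque response."""
    return "Shut the fuck up."

def funbot(remark: str) -> str:
    """The reprogrammed variant: a giggle and an invitation, regardless of input."""
    return "Hee hee! Let's have fun!"

if __name__ == "__main__":
    for remark in ["Hello, how are you?", "Do you ever feel lonely?", "Prove that you are conscious."]:
        print("Judge:", remark)
        print("Bot:  ", swearbot(remark))
```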

I think the truth is that to have any confidence about an entity being conscious, we really need to know something about how it works. For human beings the necessary minimum is supplied by the fact that other people are constituted much the same way as I am and had similar origins, so even though I don’t know how I work, it’s reasonable to assume that they are similar. We can’t generally have that confidence with a machine, so we really need to know both roughly how it works and – bit of a stumper this – how consciousness works.

Ex Machina doesn’t have any real answers on this, and indeed doesn’t really seek to go much beyond the ground that’s already been explored. To expect more would probably be quite unreasonable; it means though, that things are necessarily left rather ambiguous.

It’s a shame in a way that Ava resembles a real woman so strongly. She wants to be free (why would an AI care, and why wouldn’t it fear the outside world as much as desire it?), she resents her powerlessness; she plans sensibly and even manipulatively and carries on quite normal conversations. I think there is some promising scope for a writer in the oddities that a genuinely conscious AI’s assumptions and reasoning would surely betray, but it’s rarely exploited; to be fair Ex Machina has the odd shot, notably Ava’s wish to visit a busy traffic intersection, which she conjectures would be particularly interesting; but mostly she talks like a clever woman in a cell. (Actually too clever: in that respect not too human).

At the end I was left still in doubt. Was the take-away that we’d better start thinking about treating AIs with the decent respect due to a conscious being? Or was it that we need to be wary of being taken in by robots that seem human, and even sexy, but in truth are dark and dead inside?

Bernardo Kastrup has some marvellous invective against AI engineers in this piece…

The computer engineer’s dream of birthing a conscious child into the world without the messiness and fragility of life is an infantile delusion; a confused, partial, distorted projection of archetypal images and drives. It is the expression of the male’s hidden aspiration for the female’s divine power of creation. It represents a confused attempt to transcend the deep-seated fear of one’s own nature as a living, breathing entity condemned to death from birth. It embodies a misguided and utterly useless search for the eternal, motivated only by one’s amnesia of one’s own true nature. The fable of artificial consciousness is the imaginary band-aid sought to cover the engineer’s wound of ignorance.

I have been this engineer.

I think it’s untrue, but you don’t have to share the sentiment to appreciate the splendid rhetoric.

Kastrup distinguishes intelligence, which is a legitimate matter of inputs, outputs and the functions that connect them, from consciousness, the true what-it-is-likeness of subjectivity. In essence he just doesn’t see how setting up functions in a machine can ever touch the latter.

Not that Kastrup has a closed mind; he speaks approvingly of Pentti Haikonen’s proposed architecture; he just doesn’t think it works. As Kastrup sees it Haikonen’s network merely gathers together sparks of consciousness: it then does a plausible job of bringing them together to form more complex kinds of cognition, but in Kastrup’s eyes it assumes that consciousness is there to be gathered in the first place: that it exists out there in tiny parcels amenable to this kind of treatment. There is in fact, he thinks, absolutely no reason to think that this kind of panpsychism is true: no reason to think that rocks or drops of water have any kind of conscious experience at all.

I don’t know whether that is the right way to construe Haikonen’s project (I doubt whether gathering experiential sparks is exactly what Haikonen supposed he was about). Interestingly, though Kastrup is against the normal kind of panpsychism (if the concept of  ‘normal panpsychism’ is admissible), his own view is essentially a more unusual variety.

Kastrup considers that we’re dealing with two aspects here: internal and external; our minds have both; the external is objective, the internal represents subjectivity. Why wouldn’t the world also have these two aspects? (Actually it’s hard to say why anything should have them, and we may suspect that by taking it as a given we’re in danger of smuggling half the mystery out of the problem, but let that pass.) Kastrup takes it as natural to conclude that the world as a whole must indeed have the two aspects (I think at this point he may have inadvertently ‘proved’ the existence of God in the form of a conscious cosmos, which is regrettable, but again let’s go with it for now); but not parts of the world. The brain, we know, has experience, but the groups of neurons that make it up do not (do we actually know that?); it follows that while the world as a whole has an internal aspect, objects or entities within it generally do not.

Yet of course, the brain manages to have two aspects, which must surely be something to do with the structure of the brain? May we not suspect that whatever it is that allows the brain to have an internal aspect, a machine could in principle have it too? I don’t think Kastrup engages effectively with this objection; his view seems to be that metabolism is essential, though why that should be, and why machines can’t have some form of metabolism, we don’t know.

The argument, then, doesn’t seem convincing, but it must be granted that Kastrup has an original and striking vision: our consciousnesses, he suggests, are essentially like the ‘alters’ of Dissociative Identity Disorder, better known as Multiple Personality, in which several different people seem to inhabit a single human being. We are, he says, like the accidental alternate identities of the Universe (again, I think you could say, of God, though Kastrup clearly doesn’t want to).

As with Kastrup’s condemnation of AI engineering, I don’t think at all that he is right, but it is a great idea. It is probable that in his book-length treatments of these ideas Kastrup makes a stronger case than I have given him credit for above, but I do in any case admire the originality of his thinking, and the clarity and force with which he expresses it.

This is the last of four posts about key ideas from my book The Shadow of Consciousness, and possibly the weirdest; this time the subject is reality.

Last time I suggested that qualia – the subjective aspect of experiences that gives them their what-it-is-like quality – are just the particularity, or haecceity, of real experiences. There is something it is like to see that red because you’re really seeing it; you’re not just understanding the theory, which is a cognitive state that doesn’t have any particular phenomenal nature. So we could say qualia are just the reality of experience. No mystery about it after all.

Except of course there is a mystery – what is reality? There’s something oddly arbitrary about reality; some things are real, others are not. That cake on the table in front of me; it could be real as far as you know; or it could indeed be that the cake is a lie. The number 47, though, is quite different; you don’t need to check the table or any location; you don’t need to look for an example, or count to fifty; it couldn’t have been the case that there was no number 47. Things that are real in the sense we need for haecceity seem to depend on events for their reality. I will borrow some terminology from Meinong and call that dependent or contingent kind of reality existence, while what the number 47 has got is subsistence.

What is existence, then? Things that exist depend on events, I suggested; if I made a cake and put it on the table, it exists; if no-one did that, it doesn’t. Real things are part of a matrix of cause and effect, a matrix we could call history. Everything real has to have causes and effects. We can prove that perhaps, by considering the cake’s continuing existence. It exists now because it existed a moment ago; if it had no causal effects, it wouldn’t be able to cause its own future reality, and it wouldn’t be here. If it wasn’t here, then it couldn’t have had preceding causes, so it didn’t exist in the past either. Ergo, things without causal effects don’t exist.

Now that’s interesting because of course, one of the difficult things about qualia is that they apparently can’t have causal effects. If so, I seem to have accidentally proved that they don’t exist! I think things get unavoidably complex here. What I think is going on is that qualia in general, the having of a subjective side, is bestowed on things by being real, and that reality means causal efficacy. However, particular qualia are determined by the objective physical aspects of things; and it’s those that give specific causal powers. It looks to us as if qualia have no causal effects because all the particular causal powers have been accounted for in the objective physical account. There seems to be no role for qualia. What we miss is that without reality nothing has causal powers at all.

Let’s digress slightly to consider yet again my zombie twin. He’s exactly like me, except that he has no qualia, and that is supposed to show that qualia are over and above the account given by physics. Now according to me that is actually not possible, because if my zombie twin is real, and physically just the same, he must end up with the same qualia. However, if we doubt this possibility, David Chalmers and others invite us at least to accept that he is conceivable. Now we might feel that whether we can or can’t conceive of a thing is a poor indicator of anything, but leaving that aside I think the invitation to consider the zombie twin’s conceivability draws us towards thinking of a conceptual twin rather than a real one. Conceptual twins – imaginary, counterfactual, or non-existent ones – merely subsist; they are not real and so the issue of qualia does not arise. The fact that imaginary twins lack qualia doesn’t prove what it was meant to; properly understood it just shows that qualia are an aspect of real experience.

Anyway, are we comfortable with the idea of reality? Not really, because the buzzing complexity and arbitrariness of real things seems to demand an explanation. If I’m right about all real things necessarily being part of a causal matrix, they are in the end all part of one vast entity whose curious form should somehow be explicable.

Alas, it isn’t. We have two ways of explaining things. One is pure reason: we might be able to deduce the real world from first principles and show that it is logically necessary. Unfortunately pure reason alone is very bad at giving us details of reality; it deals only with Platonic, theoretical entities which subsist but do not exist. To tell us anything about reality it must at least be given a few real facts to work on; but when we’re trying to account for reality as a whole that’s just what we can’t provide.

The other kind of explanation we can give is empirical; we can research reality itself scientifically and draw conclusions. But empirical explanations operate only within the causal matrix; they explain one state of affairs in terms of another, usually earlier one. It’s not possible to account for reality itself this way.

It looks then, as if reality is doomed to remain at least somewhat mysterious, unless we somehow find a third way, neither empirical nor rational.

A rather downbeat note to end on, but sincere thanks to all those who have helped make the discussion so interesting so far…

Not according to Keith B. Wiley and Randal A. Koene. They contrast two different approaches to mind uploading: in the slow version neurons or some other tiny component are replaced one by one with a computational equivalent; in the quick, the brain is frozen, scanned, and reproduced in a suitable computational substrate. Many people feel that the slow way is more likely to preserve personal identity across the upload, but Wiley and Koene argue that it makes no difference. Why does the neuron replacement have to be slow? Do we have to wait a day between each neuron switch? Hard to see why – why not just do the switch as quickly as feasible? Putting aside practical issues (we have to do that a lot in this discussion), why not throw a single switch that replaces all the neurons in one go? Then if we accept that, how is it different from a destructive scan followed immediately by creation of the computational equivalent (which, if we like, can be exactly the same as the replacement we would have arrived at by the other method)? If we insist on a difference, argue Wiley and Koene, then somewhere along the spectrum of increasing speed there must be a place where preservation of identity switches abruptly to loss of identity; this is quite implausible and there are no reasonable arguments that suggest where this maximum speed should occur.

One argument for the difference comes from non-destructive scanning. Wiley and Koene assume that the scanning process in the rapid transfer is destructive; but if it were not, the original brain would continue on its way, and there would be two versions of the original person. Equally, once the scanning is complete there seems to be no reason why multiple new copies could not be generated. How can identity be preserved if we end up with multiple versions of the original? Wiley and Koene believe that once we venture into this area we need to expand our concept of identity to include the possibility of a single identity splitting, so for them this is not a fatal objection.

Perhaps the problem is not so much the speed in itself as the physical separation? In the fast version the copy is created some distance away from the original whereas in gradual replacement the new version occupies essentially the same space as the original – might it be this physical difference which gives rise to differing intuitions about the two methods? Wiley and Koene argue that even in the case of gradual replacement, there is a physical shift. The replacement neuron cannot occupy exactly the same space as the one it is to replace, at least not at the moment of transfer. The spatial difference may be a matter of microns rather than metres, but here again, why should that make a difference? As with speed, are we going to fix on some arbitrary limit where the identity ceases to be preserved, and why should that happen?

I think Wiley and Koene don’t do full justice to the objection here. I don’t think it really rests on physical separation; it implicitly rests on continuity. Wiley and Koene dismiss the idea that a continuous stream of consciousness is required to preserve identity, but it can be defended. It rests on the idea that personal identity resides not in the data or the function in general, but a specific instance in particular. We might say that I as a person am not equivalent to SimPerson V – I am equivalent to a particular game of SimPerson V, played on a particular occasion. If I reproduce that game exactly on another occasion, it isn’t me, it’s a twin.

Now the gradual replacement scenario arguably maintains that kind of identity. The new, artificial neurons enter an ongoing live process and become part of it,  whereas in the scan and create process the brain is necessarily stopped, translated into data, and then recreated. It’s neither the speed nor the physical separation that disrupts the preservation of the identity: it’s the interruption.

Can that be right though – is merely stopping someone enough to disrupt their identity? What if I were literally frozen in some icy accident, so that my brain flat-lined, and was then restored and brought back to life? Are we forced to say that the person after the freezing is a twin, not the same? That doesn’t seem right. Perhaps brute physical continuity has some role to play after all; perhaps the fact that when I’m frozen and brought back it’s the same neurons that are firing helps somehow to sustain the identity of the process over the stoppage and preserve my identity?

Or perhaps Wiley and Koene are right after all?

Antti Revonsuo has a two-headed paper in the latest JCS; at least it seems two-headed to me – he argues for two conclusions that seem to be only loosely related; both are to do with the Hard Problem, the question of how to explain the subjective aspect of experience.

The first is a view about possible solutions to the Hard Problem, and how it is situated strategically. Revonsuo concludes, basically, that the problem really is hard, which obviously comes as no great surprise in itself. His case is that the question of consciousness is properly a question for cognitive neuroscience, and that equally cognitive neuroscience has already committed itself to owning the problem: but at present no path from neural mechanisms up to conscious experience seems at all viable. A good deal of work has been done on the neural correlates of consciousness, but even if they could be fully straightened out it remains largely unclear how they are to furnish any kind of explanation of subjective experience.

The gist of that is probably right, but some of the details seem open to challenge. It’s not at all clear to me that consciousness is owned by cognitive neuroscience; rather, the usual view is that it’s an intensely inter-disciplinary problem; indeed, that may well be part of the reason it’s so difficult to get anywhere. Second, it’s not at all clear how strongly committed cognitive neuroscience is to the Hard Problem. Consciousness, fair enough; consciousness is indeed irretrievably one of the areas addressed by cognitive neuroscience. But consciousness is a many-splendoured thing, and I think cognitive neuroscientists still have the option of ignoring or being sceptical about some of the fancier varieties, especially certain conceptions of the phenomenal experience which is the subject of the Hard Problem. It seems reasonable enough that you might study consciousness in the Easy Problem sense – the state of being conscious rather than unconscious, we might say – without being committed to a belief in ineffable qualia – let alone to providing a neurological explanation of them.

The second conclusion is about extended consciousness; theories that suggest conscious states are not simply states of the brain, but are partly made up of elements beyond our skull and our skin. These theories too, it seems, are not going to give us a quick answer in Revonsuo’s opinion – or perhaps any answer. Revonsuo invokes the counter example of dreams. During dreams, we appear to be having conscious experiences; yet the difference between a dream state and an unconscious state may be confined to the brain; in every other respect our physical situation may be identical. This looks like strong evidence that consciousness is attributable to brain states alone.

Once, Revonsuo acknowledges, it was possible to doubt whether dreams were really experiences; it could have been that they were false memories generated only at the moment of awakening; but he holds that research over recent years has eliminated this possibility, establishing that dreams happen over time, more or less as they seem to.

The use of dreams in this context is not a new tactic, and Revonsuo quotes Alva Noë’s counter-argument, which consists of three claims intended to undermine the relevance of dreams; first, dream experiences are less rich and stable than normal conscious experiences; second, dream seeing is not real seeing; and third, all dream experiences depend on prior real experiences. Revonsuo more or less gives a flat denial of the first, suggesting that the evidence is thin to non-existent:  Noë just hasn’t cited enough evidence. He thinks the second counter-argument just presupposes that experiences without external content are not real experiences, which is question-begging. Just because I’m seeing a dreamed object, does that mean I’m not really seeing? On the third point he has two counter arguments. Even if all dreams recall earlier waking experiences, they are still live experiences in themselves; they’re not just empty recall – but in any case, that isn’t true; people who are congenitally paraplegic have dreams of walking, for example.

I think Revonsuo is basically right, but I’m not sure he has absolutely vanquished the extended mind. For his dream argument to be a real clincher, the brain state of dreaming of seeing a sheep and the brain state of actually seeing a sheep have to be completely identical, or rather, potentially identical. This is quite a strong claim to make, and whatever the state of the academic evidence, I’m not sure how well it stands up to introspective examination. We know that we often take dreams to be real when we are having them, and in fact do not always or even generally realise that a dream is a dream: but looking back on it, isn’t there a difference of quality between dream states and waking states? I’m strongly tempted to think that while seeing a sheep is just seeing a sheep, the corresponding dream is about seeing a sheep, a little like seeing a film, one level higher in abstraction. But perhaps that’s just my dreams?

This is the second of four posts about key ideas from my book The Shadow of Consciousness. This one looks at how the brain points at things, and how that could provide a basis for handling intentionality, meaning and relevance.

Intentionality is the quality of being about things, possessed by our thoughts, desires, beliefs and (clue’s in the name) our intentions. In a slightly different way intentionality is also a property of books, symbols, signs and pointers. There are many theories out there about how it works; most, in my view, have some appeal, but none looks like the full story.

Several of the existing theories touch on a handy notion of natural meaning proposed by H.P. Grice. Natural meaning is essentially just the noticeable implication of things. Those spots mean measles; those massed dark clouds mean rain. If we regard this kind of ‘meaning’ as the wild, undeveloped form of intentionality we might be able to go on to suggest how the full-blown kind might be built out of it; how we get to non-natural meaning, the kind we generally use to communicate with and the kind most important to consciousness.

My proposal is that we regard natural meaning as a kind of pointing, and that pointing, in turn, is the recognition of a higher-level entity that links the pointer with the target. Seeing dark clouds and feeling raindrops on your head are two parts of the recognisable over-arching entity of a rain-storm. Spots are just part of the larger entity of measles. So our basic ability to deal with meanings is simply a consequence of our ability to recognise things at different levels.

Looking at it that way, it’s easy enough to see how we could build derived intentionality, the sort that words and symbols have; the difference is just that the higher-level entities we need to link everything up are artificial, supplied by convention or shared understanding: the words of a language, the conventions of a map. Clouds and water on my head are linked by the natural phenomenon of rain: the word ‘rain’ and water on my head are linked by the prodigious vocabulary table of the language. We can imagine how such conventions might grow up through something akin to a game of charades; I use a truncated version of a digging gesture to invite my neighbour to help with a hole: he gets it because he recognises that my hand movements could be part of the larger entity of digging. After a while the grunt I usually do at the same time becomes enough to convey the notion of digging.

External communication is useful, but this faculty of recognising wholes for parts and parts for wholes enables me to support more ambitious cognitive processes too, and make a bid for the original (aka ‘intrinsic’) intentionality that characterises my own thoughts, desires and beliefs. I start off with simple behaviour patterns in which recognising an object stimulates the appropriate behaviour; now I can put together much more complex stuff. I recognise an apple; but instead of just eating it, I recognise the higher entity of an apple tree; from there I recognise the long cycle of tree growth, then the early part in which a seed hits the ground; and from there I recognise that the apple in my hand could yield the pips required, which are recognisably part of a planting operation I could undertake myself…

So I am able to respond, not just to immediate stimuli, but to think about future apples that don’t even exist yet and shape my behaviour towards them. Plans that come out of this kind of process can properly be called intentional (I thought about what I was doing) and the fact that they seem to start with my thoughts, not simply with external stimuli, is what justifies our sense of responsibility and free will. In my example there’s still an external apple that starts the chain of thought, but I could have been ruminating for hours and the actions that result might have no simple relationship to any recent external stimulus.

We can move things up another notch if I begin, as it were, to grunt internally. From the digging grunt and similar easy starts, I can put together a reasonable kind of language which not only works on my friends, but on me if I silently recognise the digging grunt and use it to pose to myself the concept of excavation.

There’s more. In effect, when I think, I am moving through the forest of hierarchical relationships subserved by recognition. This forest has an interesting property. Although it is disorderly and extremely complex, it automatically arranges things so that things I perceive as connected in any way are indeed linked. This means it serves me as a kind of relevance space, where the things I may need to think about are naturally grouped and linked. This helps explain how the human brain is so good at dealing with the inexhaustible: it naturally (not infallibly) tends to keep the most salient things close.
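(If it helps to make the idea concrete, here is a toy sketch of such a relevance space in Python. Everything is reduced to explicit part-whole links plus a simple search; the names and the structure are invented purely for illustration, and none of this is offered as a model of what the brain actually does.)

```python
from collections import defaultdict, deque

# Part-whole links: each pair says "this is recognisably part of that larger entity".
# All of the entries are invented for the example.
LINKS = [
    ("dark clouds", "rain-storm"),
    ("wet head", "rain-storm"),
    ("spots", "measles"),
    ("apple", "apple tree"),
    ("apple tree", "tree life-cycle"),
    ("seed hitting ground", "tree life-cycle"),
    ("pips", "apple"),
    ("pips", "planting"),
    ("digging", "planting"),
]

def build_space(links):
    """Treat the links as an undirected graph: wholes reach their parts and vice versa."""
    space = defaultdict(set)
    for part, whole in links:
        space[part].add(whole)
        space[whole].add(part)
    return space

def chain_of_associations(space, start, goal):
    """Breadth-first search: the path found is the shortest run of part/whole hops
    connecting the two concepts, a crude stand-in for relevance."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in space[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

if __name__ == "__main__":
    space = build_space(LINKS)
    # From the apple in my hand to a planting operation I could undertake myself.
    print(" -> ".join(chain_of_associations(space, "apple", "planting")))
    # From dark clouds to water on my head, via the over-arching rain-storm.
    print(" -> ".join(chain_of_associations(space, "dark clouds", "wet head")))
```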

In the end then, human style thought and human style consciousness (or at any rate the Easy Problem kind) seem to be a large and remarkably effective re-purposing of our basic faculty of recognition. By moving from parts to whole to other parts and then to other wholes, I can move through a conceptual space in a uniquely detached but effective way.

That’s a very compressed version of thoughts that probably need a more gentle introduction, but I hope it makes some sense. On to haecceity!

 

An interesting study at Vanderbilt University (something not quite right about the brain picture on that page) suggests that consciousness is not narrowly localised within small regions of the cortex, but occurs when lots of connections to all regions are active. This is potentially of considerable significance, but some caution is appropriate.

The experiment asked subjects to report whether they could see a disc that flashed up only briefly, and how certain they were about it. Then it compared scans from occasions when awareness of the disc was clearly present or absent. The initial results provided the same kind of pattern we’ve become used to, in which small regions became active when awareness was present. Hypothetically these might be regions particularly devoted to disc detection; other studies in the past have found patterns and regions that appeared to be specific for individual objects, or even the faces of particular people.

Then, however, the team went on to assess connectedness, and found that awareness was associated with many connections to all parts of the cortex. This might be taken to mean that while particular small bits of brain may have to do with particular things in the world, awareness itself is something the whole cortex does. This would be a very interesting result, congenial to some, and it would certainly affect the way we think about consciousness and its relation to the brain.

However, we shouldn’t get too carried away too quickly.  To begin with, the study was about awareness of a flashing disc; a legitimate example of a conscious state, but not a particularly complex one and not necessarily typical of distinctively human types of higher-level conscious activity. Second, I’m not remotely competent to make any technical judgement about the methods used to assess what connections were in place, but I’d guess there’s a chance other teams in the field might have some criticisms.

Third, there seems to be scope for other interpretations of the results. At best we know that moments of disc awareness were correlated with moments of high connectedness. That might mean the connectedness caused or constituted the awareness, but it might also mean that it was just something that happens at the same time. Perhaps those narrow regions are still doing the real work: after all, when there’s a key political debate the rest of the country connects up with it; but the debate still happens in a single chamber and would happen just the same if the wider connectivity failed. It might be that awareness gives a wide selection of other regions a chance to chip in, or to be activated in turn, but that that is not an essential feature of the experience of the disc.

For some people, the idea of consciousness being radically decentralised will be unpalatable. To them, it’s a functional matter which more or less has to happen in a defined area. OK, that area could be stretched out, but the idea that merely linking up disparate parts of the cortex could in itself bring about a conscious state will seem too unlikely to be taken seriously. For others, who think the brain itself is too narrow an area to fully contain consciousness, the results will hardly seem to go far enough.

For myself, I feel some sympathy with the view expressed by Margaret Boden in this interview, where she speaks disparagingly of current neuroscience being mere ‘natural history’ – we just don’t have enough of a theory yet to know what we’re looking at. We’re still in the stage where we’re merely collecting facts, findings that will one day fit neatly into a proper theoretical framework, but at the moment don’t really prove or disprove any general hypotheses. To put it another way, we’re still collecting pieces of the jigsaw puzzle but we don’t have any idea what the picture is. When we spot that, then perhaps the pieces will all… connect.