Archive for May, 2009

Picture: star. The JCS has devoted its latest issue to definitions of consciousness. I thought I’d done reasonably well by quoting seventeen different views, but Ram L. P. Vimal lists forty, in what he acknowledges is not a comprehensive list. There is much to be said about all this – and Bill Faw promises a book-length treatment of the thoughts offered in his paper – but much of the ground has been trodden before.

A notable exception is David Skrbina’s panpsychist view. I have been accused in the past of being unfair to panpsychism, the belief that everything has some mental or experiential properties, and I remain unconvinced, but I was genuinely interested to hear how a panpsychist would define consciousness. I think panpsychists, who believe awareness of some kind is a fundamental property of everything, face a particular challenge in defining exactly what consciousness is. For one thing, they don’t enjoy the advantage the rest of us have of being able to contrast the mindless stuff around us with mindful brains – for panpsychists there is no mindless stuff. But sometimes it’s coming at a problem from a strange new angle that yields useful insights.

Skrbina very briefly puts a case for panpsychism by noting that even rocks maintain their own existence with a degree of success and respond to the impacts and changes of their environment.  This amounts, he suggests, to at least a simple form of experience, and hence of mind. But mind, he says,  has two aspects: the inner phenomenal experience and an outward-facing intentional/relational aspect. Both of these are characteristic of the mental life of all things; he acknowledges at least a prima facie difficulty over what counts as a ‘thing’ here, but it includes such entities as atoms, rocks, tables, chairs, human beings, planets, and stars.  In a footnote, Skrbina cites Plato and Aristotle as allies in thinking that stars might have a mental life, together with JBS Haldane’s view that the interior of stars might shelter minds superior to our own (perhaps not quite the same view – the existence of minds within stars doesn’t imply that the stars themselves have minds any more than the existence of minds in France suggests that France has its own mentality) and Roger Penrose who apparently has speculated that neutron stars may sustain large quantum superpositions and thus conceivably a high intensity of consciousness.

Skrbina does not, of course, believe that rocks have minds exactly like our own, and suggests that material complexity corresponds with mental complexity, so that there is a spectrum of mental life from the feeble, unremembered glimmerings experienced by rocks all the way up to the fantastically elaborate and persistent mental evolutions hosted by human beings. This is convenient, since it allows Skrbina to find a place for subconscious and unconscious mental activity, which can be regarded as merely low-wattage mentality, whereas on the face of it panpsychism seems to make unconsciousness impossible. But, he says, there is a fundamental continuity, and this applies to consciousness as well as general mentality. Consciousness, he suggests, is the border, the interface between the inward and outward aspects of mentality, and since everything possesses both of those, everything must have at least a simple analogue of consciousness. It might be better, he suggests, if we could find a new word for this common property of consciousness and reserve the term itself for the human-style variety, since that would accord better with normal usage, but we are nevertheless talking about a spectrum of complexity, not two different things.

Skrbina’s exposition is brief, and he only claims to be providing a pointer toward a promising line of investigation. The idea of consciousness as the linkage or interface between inner and outer mentality does have some appeal. Skrbina’s distinction between inner and outer corresponds approximately to a widely popular view that there are two basic kinds of consciousness: the phenomenal, experiential variety and the rest. Famously this kind of distinction is embodied in David Chalmers’ hard/easy problem distinction and Ned Block’s a-consciousness and p-consciousness, to name only two examples; the pieces in the JCS provide other variations. Why not regard consciousness as the thing that brings them together, even if you’re not attracted by panpsychism?

Well, I don’t know. For one thing I think the non-phenomenal half of the mind is usually short-changed.  Besides phenomenal awareness, we ought also to distinguish between agency, intentionality, and understanding, all large mysteries which really deserve better than being smooshed together. We could still see consciousness as the thing that brings it all together, perhaps, but that doesn’t exactly appeal either: it seems too much like saying that the human body is the thing that holds our bones and muscles together; better to say it’s the thing they help to make up.

I must confess – and this perhaps is unfair – to being put off by Skrbina’s description of consciousness as the luminous upper layer of the mind. Apart from the slightly confusing geometry (it’s the upper layer of the mind, but between the inner and outer parts), I don’t see why it’s luminous, and that sounds a bit like the resort to poetry sometimes adopted by theologians who have run out of cogent points to make. Still, he deserves at least a couple of cheers for offering a new approach, something he rightly advocates.

Picture: walker. I was agreeably surprised by Andy Clark’s ‘Supersizing the Mind’.  I had assumed it would be a fuller treatment of the themes set out in ‘The Extended Mind’, the paper he wrote with David Chalmers, and which is included in the book as an Appendix. In fact, it ranges more widely and has a number of interesting points to make on the general significance of embodiment and mind extension.  Various flavours of externalism, the doctrine that the mind ain’t in the head, seem to be popular at the moment, but Clark’s philosophical views are clearly just part of a coherent general outlook on cognition.

Early on, Clark contrasts different approaches to the problem of robotic walking. Asimo, Honda’s famous robot, does a pretty impressive job of walking, but achieves it through an awful lot of very carefully controlled mechanical kit. The humanoid shape of the body, apart from giving the robot its anthropomorphic appeal, is treated as more or less an incidental feature. By contrast, Clark cites Collins, Wisse and Ruina on passive dynamic walkers, whose design gives them a natural tendency to waddle along with no control system at all. These machines are far less complex (one walking toy patented by Fallis in 1888 consists entirely of two pieces of cleverly bent wire – I’m going to try making my own later), but they embody a different kind of sophistication. As always, evolution got there first: the relatively good energy efficiency of the ambulant human body, in contrast to the energy-guzzling needs of an Asimo-style approach, shows that although human walking involves more control than two pieces of wire, much of it has been cunningly built into the design of the body.

All this serves to introduce the idea that form has more importance than we may think, that legs are not just tools strapped on to the bottom of a standard unit.  The example of feeling with a stick is often quoted; when we use a stick to probe the texture and shape of an object, it doesn’t feel as if we’re registering the motion of the stick against our hand, but rather as if we were really feeling with the stick, as though the stick itself had for the moment become a sensory extension, with feeling at the end, not in the handle.

Clark wants to say that this feeling is not an illusion, that the stick, and other extensions we may use, really are incorporated into us temporarily. He quotes a dystopian vision of Bruce Sterling’s in which powerful machines are driven by increasingly feeble and senile humans; why shouldn’t the combination of feeble human and carefully designed robotics amount instead to a newly invigorated and intelligent person?

Among the most powerful extensions available for human beings is, of course, language. Words are often considered solely as a means of communication, but he makes a convincing case for the conclusion that they actually add massively to our cognitive abilities. He rightly says that the complexity generated by our ability to think about the way we think, or to think of new worlds from which other new worlds can be found, is staggering.

One of the ways our extended cognition helps us to extend ourselves further is by the construction of niches – re-engineering parts of the world to enhance our own performance. Clark here gives a fascinating account of how Elizabethan actors coped with the impossible load imposed on their memories – in those days it was apparently customary for a company to play six different plays a week with few repetitions and entirely new plays arriving at regular intervals.

Key factors were the standard layout of the stage and documents known as ‘plots’, a kind of high-level map of who comes in when and where and does what. These would enable actors to get a quick overall grasp of a new play; with a script which gave only their own lines, and an over-learned command of the dramatic conventions of the day which gave them an in-built knowledge of the sort of thing to expect, they were able to pick up plays and switch from one to another almost on the fly. This is just a particularly striking example of the way we often make things easier for our brains by reducing the epistemic complexity of the environment. Putting that piece of IKEA furniture together may be tough, but if we take out all the components and lay them out neatly the way they’re shown in the diagram, it actually becomes easier.

Clark draws a useful distinction, in many ways a key point, between three grades of embodiment. In mere embodiment, the body and senses are simply pieces of equipment with which things can be done; in basic embodiment their own features and dynamics become resources in themselves which the entity in question can exploit. Finally, in the profound embodiment which humans and some other animals, especially primates, have developed, new parts of the world and new uses of parts of the body are constantly discovered and recruited to serve previously unimagined purposes. This capacity to pick up things and make them part of your own mental apparatus might be a defining feature of human cognition and human consciousness.

All this is fine, but we may ask whether it is enough to make us think cognitive extension is anything more than some fancy metaphors about plain old tool use. Clark now goes on to tackle some objections, first those made by Adams and Aizawa, who in a nutshell say that the kind of processes Clark is talking about are just not the right kind of thing to be considered cognitive. Various grounds are advanced, but the strongest part of the negative case is perhaps the claim that cognitive processes are distinguished by their involving non-derived representations. In other words, real minds deal in stuff which is inherently meaningful, and the kinds of things Clark is on about, if they deal in meaning at all, do so only because a real brain interprets them as doing so. This claim clearly rebuts the example of Otto, used by Clark and Chalmers: Otto, briefly, uses a notebook as a supplementary memory. It’s not memory, or memory-like, Adams and Aizawa would say, because the information in Otto’s brain is live, it means something all on its own, whereas his notebook only means something through the convention of writing and when brains write or read in it.

Clark suggests that meaningfulness, non-derived representations, can perhaps be found independently of brains in some cases. This is highly contentious territory where I think it might actually be more prudent not to go. There’s no need in any case, since he also has the argument that brains frequently use arbitrary, conventional symbols in internal cognitive activities – whenever we use words or numbers to think with. If those processes are cognitive, surely similar ones using external symbols deserve the same recognition. Is there any real difference between someone who visualises a map in memory in order to work out where to go, and someone who looks at the paper equivalent, such that we can dismiss one as not cognitive?

A different challenge comes from Robert Rupert, who proposes that embedded cognition is enough: we can say that cognitive processes are heavily dependent on external props and prompts without having to make the commitments required by extended cognition. The props and prompts may be essential, but that doesn’t mean we have to bring them within the pale of cognition itself – and in fact if we do we may risk losing sight of the thing we set out to study in the first place. If the mind includes all the teeming series of objects we use to help us think clearly, then the science of the mind is going to be more intractable than ever.

This latter objection is easily dealt with, since Clark is not concerned to deny that there is a recognisable, continuing core of cognition in the brain. On the other point, I’m not sure that choosing embedding over extension has any real advantage here: embedding might give us a cleanly delineated target for the study of the mind, but it explicitly does so by narrowing the scope of that study to exclude all the allegedly intractable swarm of prompts and props which help us out. But aren’t the props and prompts an interesting and essential area of study, even if we deny them the status of being truly cognitive? Clark seems to accept that there is no knock-out case for extension rather than embedding, but he claims a number of advantages for the former, notably that it steers us safely away from ‘magic dust’ and the seductive old idea of the inner homunculus, the little man in our head who controls everything.

In a final section, Clark considers two ‘Limits of Embodiment’. The first has to do with Noë’s argument that perception is active; that seeing is like painting. This Strong Sensorimotor model has many virtues in Clark’s eyes: in particular it moves attention away from the vexed issue of qualia and on to skills. In fact, it is associated with the claim that experience is just constituted by the activity of perception, which leaves no ambiguous ineffable inner reality to trouble us, dismissing the menacing hordes of philosophical zombies that qualophiles would unleash on us. The snag, in Clark’s eyes, is ‘hypersensitivity’ – if we take this line at face value we find ourselves saying that only identical people could have identical experiences, and therefore that no two actual people see the same things.

The other limit has to do with an argument that if embodiment is true, functionalism must be false. It is a basic claim of functionalism, after all, that the substrate doesn’t matter; that in principle you could make a brain out of anything so long as it had the right functional properties at a high level. But embodiment says that the nature of the substrate is crucial – so if one is true the other must be false, right? Put as baldly as that, I think the weakness of the argument is apparent; why shouldn’t it be the case that the things which the theory of embodiment says are crucial are in fact, in the final analysis,  functional properties?

The book is a fascinating read, and full of stuff which this short sketch can’t really do any justice to. I don’t know whether I’ve become a convert to mind extension, though. The problem for me, at a philosophical level, is that none of this really worries me. There are a number of issues to do with consciousness and the mind where my inability to see the answers is, in a quiet but deep way, quite troubling to me. Cognitive extension, somehow not so much.

I suppose part of the reason is that lurking at the back of my mind is the feeling that the location of the mind is a false issue. Where is the mind? In some Platonic realm, or zone of abstraction – or perhaps the question is meaningless. When we ask, is the mind within the brain or in the external world, the answer is in fact ‘No’. Of course I understand that Clark is not operating at this naive level, but it leaves a lingering doubt somewhere at the back of my mind about how much the answer really matters, which is reinforced by the apparent softness of the issue. Clark himself seems to concede that it’s not exactly a question of absolute right and wrong, more of whether this is a useful and enlightening way to look at things. On that level, I certainly found his account convincing.

Perhaps I’ve ended up becoming one of those frivolous people who always want a theory to boggle their minds a bit.

Picture: heraldic whale. … it’s AGI now. I was interested to hear via Robots.net that Artificial General Intelligence had enjoyed a successful second conference recently.

In recent years there seems to have been a general trend in AI research towards more narrow and perhaps more realistic sets of goals; towards achieving particular skills and designing particular modules tied to specific tasks rather than confronting the grand problem of consciousness itself. The proponents of AGI feel that this has gone so far that the terms ‘artificial intelligence’ and AI no longer really designate the topic they’re interested in, the topic of real thinking machines.  ‘An AI’ these days is more likely to refer to the bits of code which direct the hostile goons in a first-person shooter game than to anything with aspirations to real awareness, or even real intelligence.

The mention of  ‘real intelligence’ of course, reminds us that plenty of other terms have been knocked out of shape over the years in this field. It is an old complaint from AI sceptics that roboteers keep grabbing items of psychological vocabulary and redefining them as something simpler and more computable. The claim that machines can learn, for example, remains controversial to some, who would insist that real learning involves understanding, while others don’t see how else you would describe the behaviour of a machine that gathers data and modifies its own behaviour as a result.

I think there is a kind of continuum here, from claims it seems hard to reject to those it seems bonkers to accept, rather like this…

Claim: machines add numbers. Objection: really the ‘numbers’ are a human interpretation of meaningless switching operations.
Claim: machines control factory machines. Objection: control implies foresight and intentions, whereas machines just follow a set of instructions.
Claim: machines play chess. Objection: playing a game involves expectations and social interaction, which machines don’t really have.
Claim: machines hold conversations. Objection: chat-bots merely reshuffle set phrases to give the impression of understanding.
Claim: machines react emotionally. Objection: there may be machines that display smiley faces or even operate in different ‘emotional’ modes, but none of that touches the real business of emotions.

Readers will probably find it easy to improve on this list, but you get the gist. Although there’s something in even the first objection, it seems pointless to me to deny that machines can do addition – and equally pointless to claim that any existing machine experiences emotions – although I don’t rule even that idea out of consideration forever.

I think the most natural reaction is to conclude that in all such cases, but especially in the middling ones, there are two different senses – there’s playing chess and really playing chess. What annoys the sceptics is their perception that AIers have often stolen terms for the easy computable sense when the normal reading is the difficult one laden with understanding, intentionality and affect.

But is this phenomenon not simply an example of the redefinition of terms which science has always introduced? We no longer call whales fish, because biologists decided it made sense to make fish and mammals exclusive categories – although people had been calling whales fish on and off for a long time before that. Aren’t the sceptics on this like diehard whalefishers? Hey, they say, you claimed to be elucidating the nature of fish, but all you’ve done is make it easy for yourself by making the word apply just to piscine fish, the easy ones to deal with. The difficult problem of elucidating the deeper fishiness remains untouched!

The analogy is debatable, but it could be claimed that redefinitions of  ‘intelligence’ and ‘learning’ have actually helped to clarify important distinctions in broadly the way that excluding the whales helped with biological taxonomy. However, I think it’s hard to deny that there has also at times been a certain dilution going on. This kind of thing is not unique to consciousness – look what happened to ‘virtual reality’, which started out as quite a demanding concept, and was soon being used as a marketing term for any program with slight pretensions to 3D graphics.

Anyway, given all that background it would be understandable if the sceptical camp took some pleasure in the idea that the AI people have finally been hoist with their own petard, and that just as the sceptics, over the years, have been forced to talk about ‘real intelligence’ and ‘human-level awareness’, the robot builders now have to talk about ‘artificial general intelligence’.

But you can’t help warming to people who want to take on the big challenge. It was the bold advent of the original AI project which really brought consciousness back on to the agenda of all the other disciplines, and the challenge of computer thought which injected a new burst of creative energy into the philosophy of mind, to take just one example. I think even the sceptics might tacitly feel that things would be a little quiet without the ‘rude mechanicals’: if AGI means they’re back and spoiling for a fight, who could forbear to cheer?