Posts tagged ‘consciousness’

Charles Babbage was not the only Victorian to devise a thinking machine.

He is, of course, considered the father, or perhaps the grandfather, of digital computing. He devised two remarkable calculating machines: the Difference Engine was meant to produce error-free mathematical tables for navigation and other uses; the Analytical Engine, an extraordinary leap of the imagination, would have been the first true general-purpose computer. Although Babbage failed to complete the building of the first, and the second never got beyond the conceptual stage, his achievement is rightly regarded as a landmark, and the Analytical Engine routines that Lady Lovelace published in 1843, with a translation of Menabrea’s description of the Engine, have gained her recognition as the world’s first computer programmer.

The digital computer, alas, went no further until Turing a hundred years later; but in 1851 Alfred Smee published The Process of Thought adapted to Words and Language together with a description of the Relational and Differential Machines – two more designs for cognitive mechanisms.

Smee held the unusual post of Surgeon to the Bank of England – in practice he acted as a general scientific and technical adviser. His father had been Chief Accountant to the Bank and little Alfred had literally grown up in the Bank, living inside its City complex. Apparently, once the Bank’s doors had shut for the night, the family rarely went to the trouble of getting them unlocked again to venture out; it must have been a strangely cloistered life. Like Babbage and other Victorians involved in London’s lively intellectual life, Smee took an interest in a wide range of topics in science and engineering, with his work in electro-metallurgy leading to the invention of a successful battery; he was a leading ophthalmologist and among many other projects he also wrote a popular book about the garden he created in Wallington, south of London – perhaps compensating for his stonily citified childhood?

Smee was a Fellow of the Royal Society, as was Babbage, and the two men were certainly acquainted (Babbage, a sociable man, knew everyone anyway and was on friendly terms with all the leading scientists of the day; he even managed to get Darwin, who hated socialising and suffered persistent stomach problems, out to some of his parties). However, it doesn’t seem the two ever discussed computing, and Smee’s book never mentions Babbage.

That might be in part because Smee came at the idea of a robot mind from a different, biological angle. As a surgeon he was interested in the nervous system and was a proponent of ‘electro-biology’, advocating the modern view that the mind depends on the activity of the brain. At public lectures he exhibited his own ‘injections’ of the brain, revealing the complexity of its structure; but Golgi’s groundbreaking methods of staining neural tissue were still in the future, and Smee therefore knew nothing about neurons.

Smee nevertheless had a relatively modern conception of the nervous system. He conducted many experiments himself (he used so many stray cats that a local lady was moved to write him a letter warning him to keep clear of hers) and convinced himself that photovoltaic effects in the eye generated small currents which were transmitted bio-electrically along the nerves and processed in the brain. Activity in particular combinations of ‘nervous fibrils’ gave rise to awareness of particular objects. The gist is perhaps conveyed in the definition of consciousness he offered in an earlier work, Principles of the Human Mind Deduced from Physical Laws:

When an image is produced by an action upon the external senses, the actions on the organs of sense concur with the actions in the brain; and the image is then a Reality.
When an image occurs to the mind without a corresponding simultaneous action of the body, it is called a Thought.
The power to distinguish between a thought and a reality, is called Consciousness.

This is not very different in broad terms from a lot of current thinking.

In The Process of Thought Smee takes much of this for granted and moves on to consider how the brain deals with language. The key idea is that the brain encodes things into a pyramidal classification hierarchy. Smee begins with an analysis of grammar, faintly Chomskyan in spirit if not in content or level of innovation. He then moves on rapidly to the construction of words. If his pyramidal structure is symbolically populated with the alphabet, different combinations of nervous activity will trigger different combinations of letters and so produce words and sentences. This seems to miss out some essential linguistic level, leaving the impression that all language is spelled out alphabetically, which can hardly be what Smee believed.

When not dealing specifically with language the letters in Smee’s system correspond to qualities and this pyramid stands for a universal categorisation of things in which any object can be represented as a combination of properties. (This rather recalls Bishop Wilkins’ proposed universal language, in which each successive letter of each noun identifies a position in an hierarchical classification, so that the name is an encoded description of the thing named.)
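Purely by way of illustration, here is how a Wilkins-style scheme might look in miniature; this is my own toy reconstruction in Python, and the categories and code-letters are invented, not Smee’s or Wilkins’ actual scheme. Each successive letter of a name selects a branch at the next level of the pyramid, so the name doubles as a description of the thing named:

```python
# Toy model of a Wilkins-style classification pyramid: each successive
# letter of a word selects a branch at the next level, so a name
# encodes a path through the hierarchy. All categories here are
# invented for illustration.

PYRAMID = {
    "z": ("animal", {
        "i": ("quadruped", {
            "t": ("domestic", {}),
            "d": ("wild", {}),
        }),
        "a": ("bird", {}),
    }),
    "d": ("plant", {
        "e": ("tree", {}),
        "o": ("herb", {}),
    }),
}

def describe(word):
    """Decode a name into the chain of categories its letters select."""
    level, path = PYRAMID, []
    for letter in word:
        if letter not in level:
            break                # remaining letters carry no code
        label, level = level[letter]
        path.append(label)
    return path

print(describe("zit"))   # ['animal', 'quadruped', 'domestic']
print(describe("do"))    # ['plant', 'herb']
```

On a scheme of this sort any object is represented by the combination of properties its name selects, which seems to be roughly what Smee’s pyramid of qualities amounts to.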

At least, I think that’s the way it works. The book goes on to give an account of induction, deduction, and the laws of thought; alas, Smee seems unaware of the problem described by Hume and does not address it. Instead, in essence, he just describes the processes; although he frames the discussion in terms of his pyramidal classification his account of induction (he suggests six different kinds) comes down to saying that if we observe two characteristics constantly together we assume a link. Why do we do that – and why is it we actually often don’t? Worse than that, he mentions simple arithmetic (one plus one equals two, two times two is four) and says:

These instances are so familiar we are apt to forget that they are inductions…

Alas, they’re not inductions. (You could arrive at them by induction, but no-one ever actually does and our belief in them does not rest on induction.)

I’m afraid Smee’s laws of thought also stand on a false premise; he says that the number of ideas denoted by symbols is finite, though too large for a man to comprehend. This is false. He might have been prompted to avoid the error if he had used numbers instead of letters for his pyramid, because each integer represents an idea; the list of integers goes on forever, yet our numbering system provides a unique symbol for every one. So neither the list of ideas nor the list of symbols can be finite. Of course that barely scratches the surface of the infinitude of ideas and symbols, but it helps suggest just how unmanageable a categorisation of every possible idea really is.

But now we come to the machines designed to implement these processes. Smee believed that his pyramidal structure could be implemented in a hinged physical mechanism where opening would mean the presence or existence of the entity or quality and closing would mean its absence. One of these structures provides the Relational Machine. It can test membership of categories, or the possession of a particular attribute, and can encode an assertion, allowing us to test consistency of that assertion with a new datum. I have to confess to having only a vague understanding of how this would really work. He allows for partial closing and I think the idea is that something like predicate calculus could be worked out this way. He says at one point that arithmetic could be done with this mechanism and that anyone who understands logarithms will readily see how; I’m afraid I can only guess what he had in mind.

It isn’t necessary to have two Relational Machines to deal with multiple assertions, because we can take them in sequence; the Differential Machine, however, provides the capacity to compare directly, so that we can load into one side all the laws and principles that should guide a case while loading the facts into the other.
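For what it’s worth, here is one possible reading of the two machines as a toy program. This is my reconstruction, not Smee’s specification: his device was a hinged mechanism, and he allowed partial closing, which this all-or-nothing sketch ignores.

```python
# One reading of Smee's machines as a toy program: an open hinge means
# a quality is present, a closed one means it is absent. A speculative
# reconstruction; Smee also allowed partial closing, ignored here.

from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    present: frozenset   # qualities asserted to be open (present)
    absent: frozenset    # qualities asserted to be closed (absent)

def relational_test(assertion, datum):
    """Relational Machine: is a new datum consistent with an assertion?"""
    return assertion.present <= datum and not (assertion.absent & datum)

def differential_compare(laws, facts):
    """Differential Machine: load the laws and principles into one side,
    the facts into the other, and read off which laws are violated."""
    return [law for law in laws if not relational_test(law, facts)]

# A toy legal case: a valid contract requires 'signed' and forbids 'coerced'.
law = Assertion(frozenset({"signed"}), frozenset({"coerced"}))
print(relational_test(law, {"signed", "witnessed"}))        # True: consistent
print(differential_compare([law], {"signed", "coerced"}))   # [law]: a quibble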

Smee had a number of different ideas about how the machines could be implemented, and says he had part-completed prototypes of partial examples on his workbench. Unlike Babbage’s designs, though, his were never meant to be capable of full realisation; although he thinks it finite, he says a complete Relational Machine would cover London, and the mechanical stresses would destroy it immediately if it were ever used; moreover, using it to crank out elementary deductions would be so slow and tedious that people would soon revert to using their wonderfully compact and efficient brains instead. But partial examples will helpfully illustrate the process of thought and help eliminate mistakes and ‘quibbles’. Later chapters of the book explore things that can go wrong in legal cases, and describe a lot of the quibbles Smee presumably hopes his work might banish.

I think part of the reason Smee’s account isn’t clearer (to me, anyway) is that his ideas were never critiqued by colleagues and he never got near enough to a working prototype to experience the practical issues sufficiently. He must have been a somewhat lonely innovator in his lab in the Bank and in fairness the general modernity of his outlook makes us forget how far ahead of his time he was. When he published his description of his machines, Wundt, generally regarded as the founder of scientific psychology, was still an undergraduate. To a first approximation, nobody knew anything about psychology or neurology. Logic was still essentially in the long Aristotelian twilight – and of course we know where computing stood. It is genuinely remarkable that Smee managed, over a hundred and fifty years ago, to achieve a proto-modern, if flawed, understanding of the brain and how it thinks. Optimists will think that shows how clever he was; pessimists will think it shows how little our basic conceptual thinking has been updated by the progress of cognitive science.

Consciousness is not a problem, says Michael Graziano in an Atlantic piece that is short and combative. (Also, I’m afraid, pretty sketchy in places. Space constraints might be partly to blame for that, but can’t altogether excuse some sweeping assertions made with the broadest of brushes.)

Graziano begins by drawing an analogy with Newton and his theory of light. The earlier view, he says, was that white light was pure, and colour happened when it was ‘dirtied’ by contact with the surfaces of coloured objects. The detail of exactly how this happened was a metaphysical ‘hard problem’. Newton dismissed all that by showing first, that white light is in fact a mixture of all colours, and second, that our vision produces only an inaccurate and simplified model of the reality, with only three different colour receptors.
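That second point can be given a toy numerical illustration (mine, not Graziano’s; the ‘receptor sensitivities’ below are random stand-ins, not real cone curves): with only three receptor types, physically different spectra can produce exactly the same responses, so colour vision is a drastic simplification of the light itself.

```python
# Toy illustration of trichromatic compression: two physically different
# spectra that excite three (made-up) receptor types identically.
import numpy as np

rng = np.random.default_rng(0)
n_bands = 31                            # coarse spectral samples
S = rng.random((3, n_bands))            # invented receptor sensitivities

spectrum_a = rng.random(n_bands)

# Add a component from the null space of S: it changes the light
# physically but is invisible to all three receptors. (A toy: we
# ignore the fact that real spectra must be non-negative.)
_, _, vt = np.linalg.svd(S)
spectrum_b = spectrum_a + 0.1 * vt[-1]

print(np.allclose(S @ spectrum_a, S @ spectrum_b))   # True: same 'colour'
print(np.allclose(spectrum_a, spectrum_b))           # False: different light
```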

Consciousness itself, Graziano says, is also a misleading model in a somewhat similar way, generated when the brain represents its own activity to itself. In fact, to be clear, consciousness as represented doesn’t happen; it is a mistaken construct, the result of the good-enough but far from perfect apparatus bequeathed to us by evolution (this sounds sort of familiar).

We should be clear that it is really Hard Problem consciousness that is the target here, the consciousness of subjective experience and of qualia. Not that the other sort is OK: Graziano dismisses the Easy Problem kind of consciousness, more or less in passing, as being no problem at all…

These days it’s not hard to understand how the brain can process information about the world, how it can store and recall memories, how it can construct self knowledge including even very complex self knowledge about one’s personhood and mortality. That’s the content of consciousness, and it’s no longer a fundamental mystery. It’s information, and we know how to build computers that process information.

Amazingly, that’s it. Graziano writes in an impatient tone; I have to confess to a slight ruffling of my own patience here; memory is not hard to understand? I had the impression that there were quite a number of unimpeachably respectable scientists working on the neurology of memory, but maybe they’re just doing trivial detail, the equivalent of butterfly collecting, or who knows, philosophy? …we know how to build computers… You know it’s not the 1980s any more? Yet apparently there are still clever people who think you can just say that the brain is a computer and that’s not only straightforwardly true, but pretty much a full explanation? I mean, the brain is also meat, and we know how to build tools that process meat; shall we stop there and declare the rest to be useless metaphysics?

‘Information’, as we’ve often noted before, is a treacherous, ambiguous word. If we mean something akin to data, then yes, computers can handle it; if we mean something akin to understanding, they’re no better than meat cleavers. Nothing means anything to a computer, while human consciousness reads and attributes meanings with prodigal generosity, arguably as its most essential, characteristic activity. No computer was ever morally responsible for anything, while our society is built around the idea that human beings have responsibilities, rights, and property. Perhaps Graziano has debunking arguments for all this that he hasn’t leisure to tell us about; the idea that they are all null issues with nothing worthwhile to be said about them just doesn’t fly.

Anyway, perhaps I should keep calm because that’s not even what Graziano is mainly talking about. He is really after qualia, and in that area I have some moderate sympathy with him; I think it’s true that the problem of subjective experience is most often misconceived, and it is quite plausible that the limitations of our sensory apparatus and our colour vision in particular contribute to the confusion. There is a sophisticated argument to be made along these lines: unfortunately Graziano’s isn’t it; he merely dismisses the issue: our brain plays us false and that’s it. You could perhaps get away with that if the problem were simply about our belief that we have qualia; it could be that the sensory system is just misinforming us, the way it does in the case of optical illusions. But the core problem is about people’s actual direct experience of qualia. A belief can be wrong, but an experience is still an experience even if it’s a misleading one, and the existence of any kind of subjective experience is the real core of the matter. Yes, we can still deny there is any such thing, and some people do so quite cogently, but to say that what I’m having now is not an experience but the mere belief that I’m having an experience is hard and, well, you know, actually rather metaphysical…

On examination I don’t think Graziano’s analogy with Newton works well. It’s not clear to me why the ‘older’ view is to be characterised as metaphysical (or why that would mean it was worthless). Shorn of the emotive words about dirt, the view that white light picks up colour from contact with coloured things, the way white paper picks up colour from contact with coloured crayons, seems a reasonable enough scientific hypothesis to have started with. It was wrong, but if anything it seems simpler and less abstract than the correct view. Newton himself would not have recognised any clear line between science and philosophy, and in some respects he left the true nature of light a more complicated matter, not fully resolved. His choice of particles over waves has proved to be an over-simplification and remains the subject of some cloudy ontology to this day.

Worse yet, if you think about it, it was Newton who first separated the two realms: colour as it is in the world and colour as we experience it. This is the crucial distinction that opened up the problem of qualia, first recognisably stated by Locke, a fervent admirer of Newton, some years after Newton’s work. You could argue therefore, that if the subject of qualia is a mess, it is a mess introduced by Newton himself – and scientists shouldn’t castigate philosophers for trying to clear it up.

Physical determinism is implausible, according to Richard Swinburne in the latest JCS; he cunningly attacks via epiphenomenalism.

Swinburne defines physical events as public and mental ones as private – we could argue about that, but as a bold, broad view it seems fair enough. Mental events may be phenomenal or intentional, but for current purposes the distinction isn’t important. Physical determinism is defined as the view that each physical event is caused solely by other physical events; here again we might quibble, but the idea seems basically OK to be going on with.

Epiphenomenalism, then, is the view that while physical events may cause mental ones, mental ones never cause physical ones. Mental events are just, as they say, the whistle on the locomotive (though the much-quoted analogy is not exact: prolonged blowing of the whistle on a steam locomotive can adversely affect pressure and performance). Swinburne rightly describes epiphenomenalism as an implausible view (in my view, anyway – many people would disagree), but for him it is entailed by physical determinism, because physical events are only ever caused by other physical events. In his eyes, then, if he can prove that epiphenomenalism is wrong, he has also shown that physical determinism is ruled out. This is an unusual, perhaps even idiosyncratic perspective, but not illogical.

Swinburne offers some reasonable views about scientific justification, but what it comes down to is this: to know that epiphenomenalism is true we have to show that mental events cause no physical events; but that very fact would mean we could never register when they had occurred – so how would we prove it? In order to prove epiphenomenalism true, we must assume that what it says is false!

Swinburne takes it that epiphenomenalism means we could never speak of our private mental events – because our words would have to have been caused by the mental events, and ex hypothesi they don’t cause physical events like speech. This isn’t clearly the case – as I’ve mentioned before, we manage to speak of imaginary and non-existent things which clearly have no causal powers. Intentionality – meaning – is weirder and more powerful than Swinburne supposes.

He goes on to discuss the famous findings of Benjamin Libet, which seem to show that decisions are detectable in the brain before we are aware of having made them. These results point towards epiphenomenalism being true after all. Swinburne is not impressed; he sees no basic causal problem in the idea that a brain event precedes the mental event of the decision, which in turn precedes action. Here he seems to me to miss the point a bit, which is that if Libet is right, the mental experience of making a decision has no actual effect, since the action is already determined.

The big problem, though, is that Swinburne never engages with the normal view, ie that in one way or another mental events have two aspects. A single brain event is at the same time a physical event which is part of the standard physical story, and a mental event in another explanatory realm. In one way this is unproblematic; we know that a mass of molecules may also be a glob of biological structure, and an organism; we know that a pile of paper, a magnetised disc, or a reel of film may all also be “A Christmas Carol”. As Scrooge almost puts it, Marley’s ghost may be undigested gravy as well as a vision of the grave.

It would be useless to pretend there is no residual mystery about this, but it’s overwhelmingly how most people reconcile physical determinism with the mental world, so for Swinburne to ignore it is a serious weakness.

Are ideas conscious at all? Neuroscience of Consciousness is a promising new journal from OUP, introduced by the editor Anil Seth here. It has an interesting opinion piece from David Kemmerer which asks – are we ever aware of concepts, or is conscious experience restricted to sensory, motor and affective states?

On the face of it a rather strange question? According to Kemmerer there are basically two positions. The ‘liberal’ one says yes, we can be aware of concepts in pretty much the same kind of way we’re aware of anything. Just as there is a subjective experience when we see a red rose, there is another kind of subjective experience when we simply think of the concept of roses. There are qualia that relate to concepts just as there are qualia that relate to colours or smells, and there is something it is like to think of an idea. Kemmerer identifies an august history for this kind of thinking stretching back to Descartes.

The conservative position denies that concepts enter our awareness. While our behaviour may be influenced by concepts, they actually operate below the level of conscious experience. While we may have the strong impression that we are aware of concepts, this is really a mistake based on awareness of the relevant words, symbols, or images. The intellectual tradition behind this line of thought is apparently a little less stellar – Kemmerer can only push it back as far as Wundt – but it is the view he leans towards himself.

So far so good – an interesting philosophical/psychological issue. What’s special here is that in line with the new journal’s orientation Kemmerer is concerned with the neurological implications of the debate and looks for empirical evidence. This is an unexpected but surely commendable project.

To do it he addresses three particular theories. Representing the liberal side he looks at Global Neuronal Workspace Theory (GNWT) as set out by Dehaene, and Tononi’s Integrated Information Theory (IIT); on the conservative side he picks the Attended Intermediate-Level Representation Theory (AIRT) of Prinz. He finds that none of the three is fully in harmony with the neurological evidence, but contends that the conservative view has distinct advantages.

Dehaene points to research that identified specific neurons in a subject’s anterior temporal lobes that fire when the subject is shown a picture of, say, Jennifer Aniston (mentioned on CE – rather vaguely). The same neuron fires when shown photographs, drawings, or other images, and even when the subject reports having seen a picture of Aniston. Surely, then, the neuron in some sense represents not an image but the concept of Jennifer Aniston? In defence of the conservative view, Kemmerer argues that while a concept may be at work, imagery is always present in the conscious mind; indeed, he contends, you cannot think of ‘Anistonicity’ in itself without a particular image of Aniston coming to mind. (Similarly, according to Tononi we can be conscious of the idea of a triangle; but how can we think of a triangle without supposing it to be equilateral, isosceles, or scalene?) Secondly, he quotes further research which shows that deterioration of this portion of the brain impairs our ability to recognise, but not to see, faces. This, he contends, is good evidence that while these neurons are indeed dealing with general concepts at some level, they are contributing nothing to conscious awareness, reinforcing the idea that concepts operate outside awareness.

Turning to the conservative view, Kemmerer notes that AIRT puts awareness at a middle level, between the jumble of impressions delivered by raw sensory input on the one hand, and the invariant concepts which appear at the high level on the other. Conscious information must be accessible but need not always be accessed, and is implemented as gamma vector waves. This is apparently easier to square with the empirical data than the global workspace, which implies that conscious attention would involve a shift into the processing system in the lateral prefrontal cortex, where there is access to working memory – something not actually observed in practice. Unfortunately, although AIRT has a good deal of data on its side, the observed gamma responses don’t in fact line up with reported experience in the way you would expect if it were correct.

I think the discussion is slightly hampered by the way Kemmerer uses ‘awareness’ and ‘consciousness’ as synonyms. I’d be tempted to reserve awareness for what he is talking about, and allow that concepts could enter consciousness without our being (subjectively) aware of them. I do think there’s a third possibility being overlooked in his discussion – that concepts are indeed in our easy-problem consciousness while lacking the hard-problem qualia that go with phenomenal experience. Kemmerer alludes to this possibility at one point when he raises Ned Block’s distinction between access and phenomenal consciousness (a- and p-consciousness), but doesn’t make much of it.

Whatever you think of Kemmerer’s ambivalently conservative conclusion, I think the way the paper seeks to create a bridge between the philosophical and the neurological is really welcome and, to a degree, surprisingly successful. If the new journal is going to give us more like that it will definitely be a publication to look forward to.


A better neurophenomenology, the answer to the Hard Problem? Kirchhoff and Hutto propose a slightly different way forward.

The Hard Problem, of course, is about reconciling the physical description of a conscious event with the way it feels from inside. This is the ‘explanatory gap’. Most of us these days are monists of one kind or another; we believe the world ultimately consists of one kind of thing, usually matter, without a second realm of spirits or other metaphysical entities on top. Some people would accordingly seek to reduce the mental to the physical, perhaps even eliminating the mental so that our monism can be tidy (I’m a messy monist myself). Neurophenomenology, as formulated by Varela and briefly described in Kirchhoff and Hutto’s paper, does not look for a reduction, merely an explanation.
It does this by putting aside any idea of representations or computations; instead it proposes a practical research programme in which introspective reports of experience are matched with scans or other physical investigations. By elucidating the structure of both experience and physical event, the project aims to show how the two sides of experience constrain each other.

This, though, doesn’t seem enough for Kirchhoff and Hutto. Researching the two sides of the matter together is fine, but how will it ever show constraints, or generate an explanation? It seems it will be doomed to merely exhibiting correlation. Moreover, rather than resolving the explanatory gap, this approach seems to consolidate it.
These are reasonable objections, but I don’t think it’s quite as hopeless as that. The aspiration must surely be that the exploration comes together by exhibiting, not just correlation, but an underlying identity of structure? We might hope that the physical structure of the visual cortex tells us something about our colour space and visual experience that matches the structure of our direct experience of colour, for example, in such a way that the mysterious quality of that experience is attenuated and eventually even dispelled. Other kinds of explanation might emerge. When I take off my glasses and look at the surface of a brightly lit swimming pool, I see a host of white circles, all the same size and filled with the suggestion of a moiré pattern, bobbing daintily about. In a pre-scientific era, this would have been hard to account for, but now I know it is entirely the result of some facts about the shape of my eyes and the lenses in them, and phenomenological worries don’t even get started. It could be that neurophenomenology can succeed in offering explanations good enough to remove the worries that currently exist. The great thing about it, of course, is that even if that hope is philosophically misplaced, elucidating the structure of experience from both ends is a very worthwhile project anyway, one that can surely only yield valuable new understanding.

However, what Kirchhoff and Hutto propose is that we go a little further and abolish the gap. Instead of affirming the separateness of the physical and the phenomenal, they suggest, we should recognise that they represent two different descriptions of a single thing.

That might seem a modest adjustment, but they also assert that the phenomenal character of experience actually arises not from the mere physics, but from the situation of that experience, taking place in an enactive, embodied context. So if we hold a book, we can see it; if we shut our eyes, we continue to feel it; but we also have a more complex engagement with it from our efforts to hold up what we know is a book, the feel of pages, and so on. There’s all sorts of stuff going on that isn’t the mere physical contact, and that’s what yields the character of the experience.

I see that, I think, but it’s a little odd. If we imagine floating in a sensory deprivation tank and gazing at a smooth, uniform red wall, we seem to be free of a lot of the context we’d normally have, and on this view it’s a bit hard to see where the phenomenal intensity would be coming from (perhaps from the remembered significance of red?). We might suspect that Kirchhoff and Hutto are getting their phenomenal content smuggled in with the more complex phenomenal experience that they implicitly demand by requiring context, an illicit supplement that remains unexplained.

On this, why not let a thousand flowers grow; go ahead and develop explanations according to any exploratory project you prefer, and then we’ll have a look. Some of them might be good even if your underlying theory is wrong.
I think it is, incidentally. For me the explanatory gap is always misconstrued; the real gap is not between physics and phenomenology, it’s between theory and actuality, something that shouldn’t puzzle us, or at least not in the way it always does.

The Stanford Encyclopaedia of Philosophy is twenty years old. It gives me surprisingly warm feelings towards Stanford that this excellent free resource exists. It’s written by experts, continuously updated, and amazingly extensive. Long may it grow and flourish!

Writing an encyclopaedia is challenging, but an encyclopaedia of philosophy must take the biscuit. For a good encyclopaedia you need a robust analysis of the topics in the field so that they can be dealt with systematically, comprehensively, and proportionately. In philosophy there is never a consensus, even about how to frame the questions, never mind about what kind of answers might be useful. This must make it very difficult: do you try to cover the most popular schools of thought in an area? All the logically possible positions one might take up?  A purely historical survey? Or summarise what the landscape is really like, inevitably importing your own preconceptions?

I’ve seen people complain that the SEP is not very accessible to newcomers, and I think the problem is partly that the subject is so protean. If you read an article in the SEP, you’ll get a good view and some thought-provoking ideas; but what a noob looks for are a few pointers and landmarks. If I read a biography I want to know quickly about the subject’s  main works, their personal life, their situation in relation to other people in the field, the name of their theory or school, and so on.  Most SEP subject articles cannot give you this kind of standard information in relation to philosophical problems. There is a real chance that if you read up a SEP article and then go and talk to professionals, they won’t really get what you’re talking about. They’ll look at you blankly and then say something like:

“Oh, yes, I see where you’re coming from, but you know, I don’t really think of it that way…”

It’s not because the article you read was bad, it’s because everyone has a unique perspective on what the problem even is.

Let’s look at Consciousness. The content page has:

consciousness (Robert Van Gulick)

  • animal (Colin Allen and Michael Trestman)
  • higher-order theories (Peter Carruthers)
  • and intentionality (Charles Siewert)
  • representational theories of (William Lycan)
  • seventeenth-century theories of (Larry M. Jorgensen)
  • temporal (Barry Dainton)
  • unity of (Andrew Brook and Paul Raymont)

All interesting articles, but clearly not a systematic treatment based on a prior analysis. It looks more like the set of articles that just happened to get written with consciousness as part of the subject. Animal consciousness, but no robot consciousness? Temporal consciousness, but no qualia or phenomenal consciousness? But I’m probably looking in the wrong place.

In Robert Van Gulick’s main article we have something that looks much more like a decent shot at a comprehensive overview, but though he’s done a good job it won’t be a recognisable structure to anyone who hasn’t read this specific article. I really like the neat division into descriptive, explanatory, and functional questions; it’s quite helpful and illuminating: but you can’t rely on anyone recognising it (Next time you meet a professor of philosophy ask him: if we divide the problems of consciousness into three, and the first two are descriptive and explanatory, what would the third be? Maybe he’ll say  ‘Functional’, but maybe he’ll say ‘Reductive’ or something else – ‘Intentional’ or ‘Experiential’; I’m pretty sure he’ll need to think about it). Under ‘Concepts of Consciousness’ Van Gulick has ‘Creature Consciousness’: our noob would probably go away imagining that this is a well-known topic which can be mentioned in confident expectation of the implications being understood. Alas, no: I’ve read quite a few books about consciousness and can’t immediately call to mind any other substantial reference to ‘Creature Consciousness’: I’m pretty sure that unless you went on to explain that you were differentiating it from ‘State Consciousness’ and ‘Consciousness as an Entity’, you might be misunderstood.

None of this is meant as a criticism of the piece: Van Gulick has done a great job on most counts (the one thing I would really fault is that the influence of AI in reviving the topic and promoting functionalist views is, I think, seriously underplayed). If you read the piece you  will get about as good a view of the topic as that many words could give you, and if you’re new to it you will run across some stimulating ideas (and some that will strike you as ridiculous). But when you next read a paper on philosophy of mind, you’ll still have to work out from scratch how the problem is being interpreted. That’s just the way it is.

Does that mean philosophy of mind never gets anywhere? No, I really don’t think so, though it’s outstandingly hard to provide proof of progress. In science we hope to boil down all the hypotheses to a single correct theory: in philosophy perhaps we have to be happy that we now have more answers (and more problems) than ever before.

And the SEP has got most of them! Happy Birthday!

The new film Self/less is based around the transfer of consciousness. An old man buys a new body to transfer into, and then finds that, contrary to what he was told, it wasn’t grown specially: there was an original tenant who, moreover, isn’t really gone. I understand that this is not a film that probes the metaphysics of the issue very deeply; it’s more about fight scenes; but the interesting thing is how readily we accept the idea of transferred consciousness.
In fact, it’s not at all a new idea; if memory serves, H. G. Wells wrote a short story on a similar theme: a fit young man with no family is approached by a rich old man in poor health who apparently wants to leave him all his fortune; then he finds himself transferred unwillingly to the collapsing ancient body, with the old man making off in his fresh young one. In Wells’ version the twist is that the old man gets killed in a chance traffic accident, thereby dying before his old body does anyway.
The thing is, how could a transfer possibly work? In Wells’ story it’s apparently done with drugs, which is mysterious; more normally there’s some kind of brain-transfer helmet thing. It’s pretty much as though all you needed to do was run an EEG and then reverse the polarity. That makes no sense. I mean, scanning the brain in sufficient detail is mind-boggling to begin with, but the idea that you could then use much the same equipment to change the content of the mind is in another league of bogglement. Weather satellites record the meteorology of the world, but you cannot use them to reset it. This is why uploading your mind to a computer, while highly problematic, is far easier to entertain than transferring it to another biological body.
The big problem is that part of the content of the brain is, in effect, structural. It depends on which neurons are attached to which (and for that matter, which neurons there are), and on the strength and nature of that linkage. It’s true that neural activity is important too, and we can stimulate that electrically, even with induction gear that resembles the SF cliché; but we can’t actually restructure the brain that way.
The intuition that transfers should be possible perhaps rests on an idea that the brain is, as it were, basically hardware, and consciousness is essentially software; but it isn’t really like that. You can’t run one person’s mind on another’s brain.
There is in fact no reason to suppose that there’s much of a read-across between brains: they may all be intractably unique. We know that there tends to be a similar regionalisation of functions in the brain, but there’s no guarantee that your neural encoding of ‘grandmother’ resembles mine or is similarly placed. Worse, it’s entirely possible that the ‘meaning’ of neural assemblages differs according to context and which other structures are connected, so that even if I could identify my ‘grandmother’ neurons, and graft them in in place of yours, they would have a different significance, or none.
Perhaps we need a more sophisticated and bespoke approach. First we thoroughly decipher both brains, and learn how each one’s idiosyncratic encoding works. Then we work out a translator. This is a task of unimaginable complexity and particularity, but it’s not obviously impossible in principle. I think it’s likely that for each pair of brains you would need a unique translator: a universal one seems such an heroic aspiration that I really doubt its viability; a universal mind map would be an achievement of such interest and power that merely transferring minds would seem like time-wasting games by comparison.
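Purely to show the shape of the problem (every name and code below is invented), a per-pair translator would amount to composing one brain’s decoding with the other’s encoding, with gaps wherever the recipient has no equivalent slot:

```python
# A cartoon of a per-pair mind translator: decode from brain A's
# idiosyncratic code, re-encode into brain B's. All names invented.

brain_a = {"assembly_17": "grandmother", "assembly_42": "smell of rain"}
brain_b_encoder = {"grandmother": "cluster_903"}   # B has no rain-smell slot

def translate(a_state):
    """Map A's active assemblies onto B's, reporting what wouldn't fit."""
    b_state, untranslatable = {}, []
    for assembly, meaning in a_state.items():
        if meaning in brain_b_encoder:
            b_state[brain_b_encoder[meaning]] = meaning
        else:
            untranslatable.append(meaning)   # no structural home in B
    return b_state, untranslatable

print(translate(brain_a))
# ({'cluster_903': 'grandmother'}, ['smell of rain'])
```

The untranslatable remainder is where the memory gaps and artefacts would come from.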
I imagine that even once a translator had been achieved, it would normally only achieve partial success. There would be a limit to how far you could go with nano-bot microsurgery, and there might be certain inherent limitations: certain ideas, certain memories, might just be impossible to accommodate in the new brain because they clashed with structural or conceptual features too deeply embedded to change. The task you were undertaking would be like the job of turning Pride and Prejudice into Don Quixote simply by moving the words around, and perhaps in a few key cases allowing yourself one or two anagrams: the result might be recognisable, but it wouldn’t be perfect. The transfer recipient would believe themselves to be Transferee, but they would have strange memory gaps and certain cognitive deficits, perhaps not unlike Alzheimer’s, as well as artefacts: little beliefs or tendencies that existed neither in Transferee nor Recipient, but were generated incidentally through the process of reconstruction.
It’s a much more shadowy and unappealing picture, and it makes rather clearer the real killer: that though Recipient might come to resemble Transferee, they wouldn’t really be them.
In the end, we’re not data, or a program; we’re real and particular biological entities, and as such our ontology is radically different. I said above that the plausibility of transfers comes from thinking of consciousness as data, which I think is partly true: but perhaps there’s something else going on here; a very old mental habit of thinking of the soul as detachable and transferable. This might be another case where optimists about the capacity of IT are unconsciously much less materialist than they think.

An interesting paper in Behavioural and Brain Sciences from Morsella, Godwin, Jantz, Krieger, and Gazzaley, reported here; an accessible pdf draft version is here.

It’s quite a complex, thoughtful paper, but the gist is clearly that consciousness doesn’t really do that much. The authors take the view that many functions generally regarded as conscious are in fact automatic and pre- or un-conscious: what consciousness hosts is not the process but the results. It looks to consciousness as though it’s doing the work, but really it isn’t.

In itself this is not a new view, of course. We’ve heard of other theories that base their interpretation on the idea that consciousness only deals with small nuggets of information fed to it by unconscious processes. Indeed, as the authors acknowledge, some take the view that consciousness does nothing at all: that it is an epiphenomenon, a causal dead end, adding no more to human behaviour than the whistle adds to the locomotive.

Morsella et al don’t go that far. In their view we’re lacking a clear idea of the prime function of consciousness; their Passive Frame Theory holds that the function is to constrain and direct skeletal muscle output, thereby yielding adaptive behaviour. I’d have thought quite a lot of unconscious processes, even simple reflexes, could be said to do that too; philosophically, I think we’d look for a bit more clarity about the characteristic ways in which consciousness, as opposed to instinct or other unconscious urges, influences behaviour; but perhaps I’m nit-picking.

The authors certainly offer an explanation of what consciousness does. In their view, well-developed packages are delivered to consciousness from various unconscious functions. In consciousness these form a kind of combinatorial jigsaw, very regularly refreshed in a conscious/unconscious cycle; the key thing is that these packages are encapsulated and cannot influence each other. This is what distinguishes the theory from the widely popular idea of a Global Workspace, originated by Bernard Baars: no work is done on the conscious contents while they’re there; they just sit until refreshed or replaced.
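As a toy rendering of my reading of the theory (this is my sketch, not the authors’ own formalism), the conscious field is just a frame of encapsulated packages, periodically refreshed, with all the actual work done upstream and downstream:

```python
# Toy rendering of one reading of Passive Frame Theory: consciousness
# holds encapsulated packages side by side and never processes them.
from dataclasses import dataclass

@dataclass(frozen=True)           # frozen: packages cannot be modified
class Package:
    source: str                   # which unconscious system produced it
    content: str                  # a percept, an urge, etc.

class ConsciousFrame:
    def __init__(self):
        self.packages = []

    def refresh(self, new_packages):
        # Contents are replaced wholesale each cycle; packages are
        # merely juxtaposed and never influence one another here.
        self.packages = list(new_packages)

frame = ConsciousFrame()
frame.refresh([Package("vision", "stick looks bent"),
               Package("hunger", "urge to eat")])

# Downstream unconscious skeletomotor systems read the frame and act;
# the frame itself does no work on its contents.
for p in frame.packages:
    print(p.source, "->", p.content)
```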

The idea of encapsulation is made plausible by various examples. When we recognise an illusion, we don’t stop experiencing it; when we choose not to eat, we don’t stop feeling hungry, and so on. It’s clearly the case that sometimes this happens, but can we say that there are really no cases where one input alters our conscious perception of another? I suspect that any examples we might come up with would be deemed by Morsella et al to occur in pre-conscious processing and only seem to happen in consciousness: the danger with that is that the authors might end up simply disqualifying all counter-examples and thereby rendering their thesis unfalsifiable. It would help if we could have a sharper criterion of what is, and isn’t, within consciousness.

As I say, the authors do hold that consciousness influences behaviour, though not by its own functioning; instead it does so by, in effect, feeding other unconscious functions. An analogy with the internet is offered: the net facilitates all kinds of functions – auctions, research, social interaction, filling the world with cat pictures, and so on – but it would be quite wrong to say that in itself it does any of these things.

That’s OK, but it seems to delegate an awful lot of what we might have regarded as conscious cognitive activity to these later unconscious functions, and it would help to have more of an account of how they do their thing and how consciousness contrives to facilitate it. It seems it merely brings things together, but how does that help? If they’re side by side but otherwise unprocessed, I’m not sure what the value of merely juxtaposing them (in some sense which is itself a little unclear) amounts to.

So I think there’s more to do if Passive Frame Theory is going to be a success; but it’s an interesting start.

The recent short NYT series on robots has a dying fall. The articles were framed as an investigation of how robots are poised to change our world, but the last piece is about the obsolescence of the Aibo, Sony’s robot dog. Once apparently poised to change our world, the Aibo is no longer made, and now Sony will no longer supply spare parts, meaning the remaining machines will gradually cease to function.
There is perhaps a message here about the over-selling and under-performance of many ambitious AI projects, but the piece focuses instead on the emotional impact that the ‘death’ of the robot dogs will have on some fond users. The suggestion is that the relationship these owners have with their Aibo is as strong as the one you might have with a real dog. Real dogs die, of course, so though it may be sad, that’s nothing new. Perhaps the fact that the Aibos are ‘dying’ as the result of a corporate decision, and could in principle have been immortal makes it worse? Actually I don’t know why Sony or some third party entrepreneur doesn’t offer a program to virtualise your Aibo, uploading it into software where you can join it after the Singularity (I don’t think there would really be anything to upload, but hey…).
On the face of it, the idea of having a real emotional relationship with an Aibo is a little disturbing. Aibos are neat pieces of kit, designed to display ‘emotional’ behaviour, but they are not that complex (many orders of magnitude less complex than a dog, surely), and I don’t think there is any suggestion that they have any real awareness or feelings (even if you think thermostats have vestigial consciousness, I don’t think an Aibo would score much higher). If people can have fully developed feelings for these machines, it strongly suggests that their feelings for real dogs have nothing to do with the dog’s actual mind. The relationship is essentially one-sided; the real dog provides engaging behaviour, but real empathy is entirely absent.
More alarming, it might be thought to imply that human relationships are basically the same. Our friends, our loved ones, provide stimuli which tickle us the right way; we enjoy a happy congruence of behaviour patterns, but there is no meeting of minds, no true understanding. What’s love got to do with it, indeed?
Perhaps we can hope that Aibo love is actually quite distinct from dog love. The people featured in the NYT video are Japanese, and it is often said that Japanese culture is less rigid about the distinction between animate and inanimate than western ideas. In Christianity, material things lack souls and any object that behaves as if it had one may be possessed or enchanted in ways that are likely to be unnatural and evil. In Shinto, the concept of kami extends to anything important or salient, so there is nothing unnatural or threatening about robots. But while that might validate the idea of an Aibo funeral, it does not precisely equate Aibos and real dogs.
In fact, some of the people in the video seem mainly interested in posing their Aibos for amusing pictures or video, something they could do just as well with deactivated puppets. Perhaps in reality Japanese culture is merely more relaxed about adults amusing themselves with toys?
Be that as it may, it seems that for now the era of robot dogs is already over…

Dan Dennett famously based his view of consciousness on the intentional stance. According to him the attribution of intentions and other conscious states is a most effective explanatory strategy when applied to human beings, but that doesn’t mean consciousness is a mysterious addition to physics. He compares the intentions we attribute to people with centres of gravity, which also help us work out how things will behave, but are clearly not a new set of real physical entities.

Whether you like that idea or not, it’s clear that the human brain is strongly predisposed towards attributing purposes and personality to things. Now a new study by Spunt, Meyer and Lieberman using fMRI provides evidence that even when the brain is ostensibly not doing anything, it is in effect ready to spot intentions.

This is based on findings that similar regions of the brain are active both in a rest state and when making intentional (but not non-intentional) judgements, and that activity in the pre-frontal cortex of the kind observed when the brain is at rest is also associated with greater ease and efficiency in making intentional attributions.

There’s always some element of doubt about how ambitious we can be in interpreting what fMRI results are telling us, and so far as I can see it’s possible in principle that, if we had a more detailed picture than fMRI can provide, we might see more significant differences between the rest state and the attribution of intentions; but the researchers cite evidence that supports the view that broad levels of activity are at least a significant indicator of general readiness.

You could say that this tells us less about intentionality and more about the default state of the human mind. Even when at rest, on this showing, the brain is sort of looking out for purposeful events. In a way this supports the idea that the brain is never naturally quiet, and explains why truly emptying the mind for purposes of deep meditation and contemplation might require deliberate preparation and even certain mental disciplines.

So far as consciousness itself is concerned, I think the findings lend more support to the idea that having ‘theory of mind’ is an essential part of having a mind: that is, that being able to understand the probable point of view and state of knowledge of other people is a key part of having full human-style consciousness yourself.

There’s obviously a bit of a danger of circularity there, and I’ve never been sure it’s a danger that Dennett for one escapes. I don’t know how you attribute intentions to people unless you already know what intentions are. The normal expectation would be that I can do that because I have direct knowledge of my own intentions, so all I need to do is hypothesise that someone is thinking the way I would think if I were in their shoes. In Dennett’s theory, me having intentions is really just more attribution (albeit self-attribution), so we need some other account of how it all gets started (apparently the answer is that we assume optimal intentions in the light of assumed goals).

Be that as it may, the idea that consciousness involves attributing conscious states to ourselves is one that has a wider appeal and it may shed a slightly different light on the new findings. It might be that the base activity identified by the study is not so much a readiness to attribute intentions, but a continuous second-order contemplation of our own intentions, and an essential part of normal consciousness. This wouldn’t mean the paper’s conclusions are wrong, but it would suggest that it’s consciousness itself that makes us more ready to attribute intentions.

Hard to test that one because unconscious patients would not make co-operative subjects…