Footnote Consciousness

Picture: lol. I see that while I was away the Internet has been getting a certain amount of stick over the way it allegedly alters our mental processes for the worse. Some of this dialogue apparently stems from a two-year-old piece by Nicholas Carr, now developed into a book.  Most of the criticisms seem to be from people who have experienced two main problems: they’re finding that they have a reduced attention span, and they’re also suffering from a failing memory.  They attribute these problems to Internet use – but I wonder whether they have made sufficient allowance for the fact that both can also be the result of simple ageing.

I think it’s true that if you don’t use your memory, it gets worse, so it’s superficially plausible that relying on the Internet could have a bad effect: but I don’t think I find myself using the Internet for things I would otherwise have learnt by heart, while I certainly have begun forgetting things I knew quite well before the Internet was invented.  So far as my attention span is concerned, it has certainly waned steadily over the whole course of my life: when I was four or five I could spend a long time just examining the patterns made by the grain in a piece of wood (mind you, in those days, we had interesting wood, not like the bland stuff they produce these days…).  I regret this to some extent, but in another way I don’t regret it at all, because I think it is partly a result of mental improvement born of accumulated experience. I can tell more quickly now when something is not going to be worth pursuing, and I am less bothered about dropping it quickly. Nowadays, I don’t feel at all guilty about dropping a book after chapter one if reading it looks like a mistake, whereas twenty years ago I would no more have stopped reading a book once started than I would have got up and left a dinner party half-way through. I know now that life is too short.

But there are deeper criticisms of the malign effects of the Internet.  Jaron Lanier, in an NYT piece (he too has written a book about it; it’s interesting that both he and Carr, in spite of the alleged waning of attention spans, still thought this quixotic ‘book’ business worthwhile, rather than just tweeting their thoughts), suggests that we are increasingly deferring to computers, encouraged by inflated claims made for various pieces of software. This has a serious moral dimension – if we see computers as people, we may be led to see people as mere machines – but it also undermines original creativity, a point developed more fully in the book (OK, I skimmed a few summaries). We start to regard mashups and compilations as more valuable than new work generated from scratch. Perhaps worst of all, we may end up letting stupid algorithms make actual decisions for us.

The first of these points is one that has been made before, and I believe it underlies many people’s aversion to the whole idea of AI.  I think it’s undeniable that software producers are gravely inclined to overstate what their programs do, speaking of relatively simple data manipulation as though it involved genuine understanding and originality. But I don’t think that has really devalued our conception of humanity – not yet, anyway. Unless and until someone produces a machine which they claim is a conscious being, that remains a danger rather than a current problem. I don’t think we’re really in danger of delegating important decisions either; letting a computer suggest a track or a book is akin to random browsing of shelves; Lanier himself notes that even the advocates of the computers don’t allow the machines to design their products or run their companies.

There’s certainly something in the point about creativity. Hypertext encourages quotation, and I suspect that this has had an influence:  an apposite quote is a frequent and respected way of contributing to discussions on popular forums and blogs, to an extent that would seem almost donnish if the quotes weren’t typically from Star Wars or The Simpsons rather than Shakespeare. It must surely be the case that sometimes on the Web people use text quotes, photoshopped images and so on when otherwise they would have chosen their own words or drawn their own pictures; but mostly the copied stuff is surely extra. It’s a bit like photography: once people could take photographs, they made fewer engravings and oil paintings, but mainly they made many more pictures (and let’s be honest – some of those uncreated paintings and engravings were no loss).

There are deeper issues still: has the Web influenced the way we perceive the world? I strongly suspect that films have to some degree influenced the way I see the world and represent my own life to myself. I can’t be the only person who has sometimes felt an irresistible urge to do a reaction shot for the benefit of a non-existent audience (one day it may exist if CCTV continues to spread). In one way I think the Internet may have a more pervasive effect.  I remarked that the Internet is quotation-driven: but it doesn’t just quote, it comments. You could say, I think, that the essence of Web culture is to display something (text, picture, video) and provide comment in parallel (I suppose I’m exemplifying this as I describe it). I suspect that as time goes on reality will come to seem to us like the thing presented and our thoughts like the comments.  Our consciousness may end up seeming like a set of lengthy footnotes. Perhaps David Foster Wallace was way ahead of us.

Headless Consciousness

Picture: Alva Noë. There have been a number of attempts recently to externalise consciousness, or at least extend it beyond the skull. In Out of Our Heads, Alva Noë launches a very broad-based attack on the idea that it’s all about the brain, drawing in a wide range of interesting research – mostly relatively well-known stuff, but expounded in a style that is clear and very readable. Unfortunately I don’t find the arguments at all convincing; I’m not unsympathetic to extended-mind ideas, but Noë’s clear and thorough treatment tended if anything to remind me of reasons why the assumption that consciousness happens in the brain looks so attractive.

I’m happy to go along with Noë on some points: in his first chapter he launches a bit of a side-swipe at scanning technology, fMRI and PET, pugnaciously asking whether it is ‘the new phrenology’ and deriding its limitations: this seems a useful corrective to me. But in chapter two we are brought up short by the assertion that bacteria are agents.  They have interests, and pursue them, says Noë; they’re not just bags of chemicals responding to the presence of sugar.  Within their limits we ought to accord them some sort of agency.

To me, proper agency requires an awareness of what acts one is performing, and the idea that bacteria could have it at any level seems absurd. How did we get here? Noë’s case seems to be that the problem of other minds is effectively insoluble on rational empirical grounds: we can never have really solid reasons for believing anyone else, or any other entity, is conscious; yet we find ourselves unable to entertain seriously the idea that our fellow-humans might be zombies. This, he thinks, is because we have a kind of built-in engagement, almost a kind of moral commitment. He wants to extend this to life fairly widely, and of course if brainless bacteria can have agency, it tends to show that the brain is a bit over-rated.  I think he’s unnecessarily pessimistic about the evidence for other conscious minds; as a matter of fact, books like his are pretty spectacular evidence: how and why would human beings produce such volumes, examining the inner workings of consciousness in minute detail, if they didn’t have it? But bacteria have yet to produce such evidence in their own favour.

Noë rests a fair amount of weight on experiments which show the remarkable plasticity of the brain: notably he quotes experiments on ferrets by Mriganka Sur. New-born ferrets had their brains rewired in such a way that the eyes fed into the auditory, rather than the visual, cortex; yet they grew up able to use their eyes perfectly well.  This shows, Noë suggests, that no particular part of the brain is required for vision. That might be so, but in itself it does not show that no brain at all will do the job; obviously it won’t: if the ferrets’ optic nerves had been linked with their teeth, or left dangling unattached, they would surely have been unambiguously blind. The belief that consciousness is sustained by the brain does not commit us to the view that only one specific set of neurons can do the job.  Noë explains, quoting an experiment with a rubber hand, how our sense of ourselves and where we are can be moved around in a remarkably vivid way. For him, this shows that where the brain is doesn’t matter; but for others it seems equally likely to suggest that what the brain thinks is crucial and where our real hand is doesn’t matter at all.

Noë wants to claim that the frequently quoted thought-experiment of a brain in a vat, extracted from the body but still living and thinking, is impossible in principle (we know it’s impossible in practice, at least currently). He suggests that even if we did manage to sustain a brain artificially, the supporting vat, providing it with oxygenated blood and all the other complex kinds of support it would need, would actually amount to a new body. This nifty bit of redefinition is meant to show that the idea of a brain without a body will not fly. But the real point here is surely missed.  OK, let’s accept that the vat is a new body: that still means we can swap bodies while maintaining an individual consciousness. But if we keep the body and swap brains… it just seems impossible to believe that the consciousness wouldn’t go with the brain.

This perception seems to me to be the unshiftable bedrock of the discussion. Noë expounds effectively the case for regarding tools and even parts of the environment as parts of our mental apparatus; and he brings in Putnam’s argument that ‘meaning isn’t in the head’. But these arguments only serve to expand our conception of the brain-based mind, not undermine it.

My sympathy for Noë’s case returned to some degree when he discussed language. He notes that for Chomsky and others language seemed a miraculous accomplishment because they misconstrued it as an exercise in the formal decoding of a vast array of symbols. In fact, language is an activity rather than a purely intellectual exercise, and develops in a context, pragmatically.  I’d go along with that to a great extent, but while Noë sees it as proving that our decoding brain isn’t the crux of the matter after all, it seems to me it proves that decoding isn’t really what our brains are doing when they process (a word Noë would object to) language.

I sympathised even more with Noë when he attacked the idea that reality is, essentially, an illusion. If this were the case, the brain would be the all-powerful arbiter of reality (although it might seem that if the world is an illusion, the brain must be one too, and we should be dealing with a mind whose actual nature need not be pinkish biological glop). But he seemed to be back on weak ground when he concluded by taking on dreams. Dreams, after all, seem like the perfect evidence that the brain can produce conscious experience without calling on the senses or the body.  Noë argues that dreams are more limited than we think, that not all waking experiences can be reproduced in dreams, which are always shifting and inconstant. This might be true, but so what? If the brain can produce conscious experiences on its own – any conscious experiences – that seems to show that, with all caveats duly entered, the brain is still where it really happens.

It’s a well-written book, and for someone new to consciousness it would provide many excellent short sketches of thought-provoking experiments and arguments. But I’m staying in my head.

Old Skool Consciousness

Picture: Hobbes and Descartes. There’s an illuminating new piece in the SEP about 17th century theories of consciousness. Your first reaction might be ‘what 17th century theories of consciousness?’; the discussion in those days was framed rather differently and it typically requires a degree of interpretation to work out what philosophers of the period actually thought about ‘consciousness’.  In fact, according to Larry M. Jorgensen, who wrote the entry, the 17th century saw the first emergence of the concept of consciousness as distinct from conscience: in many languages the same word is still used for both.

Hobbes apparently sets this out quite explicitly (somehow this interesting bit must have passed me by when I read Leviathan, because it left no impression on my memory); he has ‘conscience’ originally referring to something which two people knew about (‘knew together’), and then being used metaphorically for knowledge of one’s own secret facts and secret thoughts. Jorgensen tells us that the Cambridge Platonists had a role in developing the modern usage in English, whereby ‘conscience’ refers to knowledge of one’s own moral nature while ‘consciousness’ means simply knowledge of one’s own mental content.

That idea, of having knowledge of one’s own mental content, seems to have a reflexive element – we know about what we know; and this was an issue for philosophers of the period, notably Descartes. For Descartes it was essential that my having a thought involved my knowing that I had a thought; but for some this seemed to suggest a second-order theory in which a thought becomes conscious only when accompanied by another thought about the first.  Descartes could not accept this: for one thing, if knowledge of my own thoughts is not direct, the cogito, Descartes’ most famous argument, is threatened. The cogito claims that I cannot possibly be wrong about the fact that I am thinking, but if the knowledge of my thought is separate from the thought itself this no longer seems unassailably true.

It seems that while Descartes accepted that awareness of our own thought required some sort of reflection, he denied that the reflection was separate from the thought. He said that ‘[t]he initial thought by means of which we become aware of something does not differ from the second thought by means of which we become aware that we were aware of it’.

This can’t help but seem a little like cheating: sneaking in an extra thought for nothing.  I think the best way to imagine it might be through analogy with a searchlight. We can swing the light around, illuminating here a building, there a tree, just as we can direct our conscious awareness towards different objects. Then Descartes might ask: do we need a second light in order to see the first light? No, of course not, because the light is itself luminous; if it lights up other objects it must itself be lit up (if perhaps in not quite the same way).

A surprising amount of Jorgensen’s exposition seems to be relevant to current discussions, and not solely because he is, necessarily, reinterpreting it in terms of modern concerns. In some ways I’m afraid we haven’t moved on all that much.