Posts tagged ‘consciousness’

A digital afterlife is likely to be available one day, according to Michael Graziano, albeit not for some time; his piece re-examines the possibility of uploading consciousness, and your own personality, into a computer. I think he does a good job of briefly sketching the formidable difficulties involved in scanning your brain precisely enough for your individual selfhood to be captured. In fact, he does it so well that I don’t really understand where his ultimate optimism comes from.

To my way of thinking, ‘scan and build’ isn’t even the most promising way of duplicating your brain. One more plausible way would be some kind of future bio-engineering where your brain just grows and divides, rather in the way that single cells do. A neater way would be some sort of hyper-path through space that split you along the fourth spatial dimension and returned both slices to our normal plane. Neither of these options is exactly a feasible working project, but to me they seem closer to being practical than a total scan. Of course neither of them offers the prospect of an afterlife the way scanning does, so they’re not really relevant for Graziano here. He seems to think we don’t need to go down to an atom by atom scan, but I’m not sure why not. Granted, the loss of one atom in the middle of my brain would not destroy my identity, but not scanning to an atomic level generally seems a scarily approximate and slapdash approach to me given the relevance of certain key molecules in the neural process –  something Graziano fully recognises.

If we’re talking about actual personal identity I don’t think it really matters though, because the objection I consider strongest applies even to perfect copies. In thought experiments we can do anything, so let’s just specify that by pure chance there’s another brain nearby that is in every minute detail the same as mine. It still isn’t me, for the banal commonsensical reason that copies are not the original. Leibniz’s Law tells us that if B has exactly the same properties as A, then it is A: but among the properties of a brain is its physical location, so a brain over there is not the same as one in my skull (so in fact I cheated by saying the second brain was the same in every detail but nevertheless ‘nearby’).

Now most philosophers would say that Leibniz’s Law is far too strong a criterion of identity when it comes to persons. There have been hundreds of years of discussion of personal identity, and people generally espouse much looser criteria for a person than they would for a stone – from identity of memories to various kinds of physical, functional, or psychological continuity. After all, people are constantly changing: I am not perfectly identical in physical terms to the person who was sitting here an hour ago, but I am still that person. Graziano evidently holds that personal identity must reside in functional or informational qualities of the kind that could well be transferred into a digital form, and he speaks disparagingly of ‘mystical’ theories that see problems with the transfer of consciousness. I don’t know about that; if anyone is hanging on to residual spiritual thinking, isn’t it the people who think we can be ‘taken out of’ our bodies and live forever? The least mystical stance is surely the one that says I am a physical object, and with some allowance for change and my complex properties, my identity works the same as that of any other physical object. I’m a one-off, particular thing and copies would just be copies.

What if we only want a twin, or a conscious being somewhat like ourselves? That might still be an attractive option after all. OK, it’s not immortality, but I think, without being rampant egotists, most of us probably feel the world could stand a few more people like ourselves around, and we might like to have a twin continuing our good work once we’re gone.

That less demanding goal changes things. If that’s all we’re going for, then yes, we don’t need to reproduce a real brain with atomic fidelity. We’re talking about a digital simulation, and as we know, simulations do not reproduce all the features of the thing being simulated – only those that are relevant for the current purpose. There is obviously some problem about saying what the relevant properties are when it comes to consciousness; but if passing the Turing Test is any kind of standard then delivering good outputs for conversational inputs is a fair guide and that looks like the kind of thing where informational and functional properties are very much to the fore.

The problem, I think, is again with particularity. Conscious experience is a one-off thing while data structures are abstract and generic. If I have a particular experience of a beautiful sunset, and then (thought experiments again) I have an entirely identical one a year later, they are not the same experience, even though the content is exactly the same. Data about a sunset, on the other hand, is the same data whenever I read or display it.

We said that a simulation needs to reproduce the relevant aspects of the thing simulated; but in a brain simulation the processes are only represented symbolically, while one of the crucial aspects we need for real experience is particular reality.

Maybe though, we go one level further; instead of simulating the firing of neurons and the functional operation of the brain, we actually extract the program being run by those neurons and then transfer that. Here there are new difficulties; scanning the physical structure of the brain is one thing; working out its function and content is another thing altogether; we must not confuse information about the brain with the information in the brain. Also, of course, extracting the program assumes that the brain is running a program in the first place and not doing something altogether less scrutable and explicit.

Interestingly, Graziano goes on to touch on some practical issues; in particular he wonders how the resources to maintain all the servers are going to be found when we’re all living in computers. He suspects that as always, the rich might end up privileged.

This seems a strange failure of his technical optimism. Aren’t computers going to go on getting more powerful, and cheaper? Surely the machines of the twenty-second century will laugh at this kind of challenge (perhaps literally). If there is a capacity problem, moreover, we can all be made intermittent; if I get stopped for a thousand years and then resume, I won’t even notice. Chances are that my simulation will be able to run at blistering speed, far faster than real time, so I can probably experience a thousand years of life in a few computed minutes. If we get quantum computers, all of us will be able to have indefinitely long lives with no trouble at all, even if our simulated lives include having digital children or generating millions of digital alternates of ourselves, thereby adding to the population. Graziano, optimism kicking back in, suggests that we can grow in understanding and come to see our fleshly life as a mere larval stage before we enter on our true existence. Maybe, or perhaps we’ll find that human minds, after ten billion years (maybe less), exhaust their potential and ultimately settle into a final state; in which case we can just get the computers to calculate that and then we’ll all be finalised, like solved problems. Won’t that be great?

I think that speculations of this kind eventually expose the contrast between the abstraction of data and the reality of an actual life, and dramatise the fact, perhaps regrettable, perhaps not, that you can’t translate one into the other.

 

Where do thoughts come from? Alva Noë provides a nice commentary here on an interesting paper by Melissa Ellamil et al. The paper reports on research into the origin of spontaneous thoughts.

The research used subjects trained in Mahasi Vipassana mindfulness techniques. They were asked to report the occurrence of thoughts during sessions when they were either left alone or provided with verbal stimuli. As well as reporting the occurrence of a thought, they were asked to categorise it as image, narrative, emotion or bodily sensation (seems a little restrictive to me – I can imagine having two at once or a thought that doesn’t fit any of the categories). At the same time brain activity was measured by fMRI scan.

Overall the study found many regions implicated in the generation of spontaneous thought; the researchers point to the hippocampus as a region of particular interest, but there were plenty of other areas involved. A common view is that when our attention is not actively engaged with tasks or challenges in the external world the brain operates the Default Mode Network (DMN), a set of neuronal areas which appear to produce detached thought (we touched on this a while ago); the new research complicates this picture somewhat, or at least suggests that the DMN is not the unique source of spontaneous thoughts. Even when we’re disengaged from real events we may be engaged with the outside world via memory or in other ways.

Noë’s short commentary rightly points to the problem involved in using specially trained subjects. Normal subjects find it difficult to report their thoughts accurately; the Vipassana techniques provide practice in being aware of what’s going on in the mind, and this is meant to enhance the accuracy of the results. However, as Noë says, there’s no objective way to be sure that these reports are really more accurate. The trained subjects feel more confidence in their reports, but there’s no way to confirm that the confidence is justified. In fact we could go further and suggest that the special training they have undertaken may even make their experience particularly unrepresentative of most minds; it might be systematically changing their experience. These problems echo the methodological ones faced by early psychologists such as Wundt and Titchener with trained subjects. I suppose Ellamil et al might retort that mindfulness is unlikely to have changed the fundamental neural architecture of the brain and that their choice of subject most likely just provided greater consistency.

Where do ‘spontaneous’ thoughts come from? First we should be clear what we mean by a spontaneous thought. There are several kinds of thought we would probably want to exclude. Sometimes our thoughts are consciously directed; if for example we have set ourselves to solve a problem we may choose to follow a particular strategy or procedure. There are lots of different ways to do this, which I won’t attempt to explore in detail: we might hold different aspects of the problem in mind in sequence; if we’re making a plan we might work through imagined events; or we might even follow a formal procedure of some kind. We could argue that even in these cases what we usually control is the focus of attention, rather than the actual generation of thoughts, but it seems clear enough that this kind of thinking is not ‘spontaneous’ in the expected sense. It is interesting to note in passing that this ability to control our own thoughts implies an ability to divide our minds into controller and executor, or at least to quickly alternate those roles.

Also to be excluded are thoughts provoked directly by outside events. A match is struck in a dark theatre; everyone’s eyes saccade involuntarily to the point of light. Less automatically a whole variety of events can take hold of our attention and send our thoughts in a new direction. As well as purely external events, the sources in such cases might include interventions from non-mental parts of our own bodies; a pain in the foot, an empty stomach.

Third, we should exclude thoughts that are part of a coherent ongoing chain of conscious cogitation. These ‘normal’ thoughts are not being directed like our problem-solving efforts, but they follow a thread of relevance; by some connection one follows on from the next.

What we’re after then is thoughts that appear unbidden, unprompted, and with no perceivable connection with the thoughts that recently preceded them. Where do they come from? It could be that mere random neuronal noise sometimes generates new thoughts, but to me it seems unlikely to be a major contributor: such thoughts would be likely to resemble random nonsense, and most of our spontaneous thoughts seem to make a little more sense than that.

We noticed above that when directing our thoughts we seem to be able to split ourselves into controller and controlled. As well as passing control up to a super-controller we sometimes pass it down, for example to the part of our mind that gets on with the details of driving along a route while the surface of our mind is engaged with other things. Clearly some part of our mind goes on thinking about which turnings to take; is it possible that one or more parts of our mind similarly goes on thinking about other topics but then at some trigger moment inserts a significant thought back into the main conscious stream? A ‘silent’ thinking part of us like this might be a permanent feature, a regular sub- or unconscious mind; or it might be that we occasionally drop threads of thought that descend out of the light of attention for a while but continue unheard before popping back up and terminating. We might perhaps have several such threads ruminating away in the background; ordinary conscious thought often seems rather multi-threaded. Perhaps we keep dreaming while awake and just don’t know it?

There’s a basic problem here in that our knowledge of these processes, and hence all our reports, rely on memory. We cannot report instantaneously; if we think a thought was spontaneous it’s because we don’t remember any relevant antecedents; but how can we exclude the possibility that we merely forgot them? I think this problem radically undermines our certainty about spontaneous thoughts. Things get worse when we remember the possibility that instead of two separate thought processes, we have one that alternates roles. Maybe when driving we do give conscious attention to all our decisions; but our mind switches back and forth between that and other matters that are more memorable; after the journey we find we have instantly forgotten all the boring stuff about navigating the route and are surprised that we seem to have done it thoughtlessly. Why should it not be the same with other thoughts? Perhaps we have a nagging worry about X which we keep spending a few moments’ thought on between episodes of more structured and memorable thought about something else; then everything but our final alarming conclusion about X gets forgotten and the conclusion seems to have popped out of nowhere.

We can’t, in short, be sure that we ever have any spontaneous thoughts: moreover, we can’t be sure that there are any subconscious thoughts. We can never tell the difference, from the inside, between a thought presented by our subconscious, and one we worked up entirely in intermittent and instantly-forgotten conscious mode. Perhaps whole areas of our thought never get connected to memory at all.

That does suggest that using fMRI was a good idea; if the problem is insoluble in first-person terms maybe we have to address it on a third-person basis. It’s likely that we might pick up some neuronal indications of switching if thought really alternated the way I’ve suggested. Likely but not guaranteed; after all a novel manages to switch back and forth between topics and points of view without moving to different pages. One thing is definitely clear; when Noë pointed out that this is more difficult than it may appear he was absolutely right.

Wrong again: just last week I was saying that Roger Penrose’s arguments seemed to have drifted off the radar a bit. Immediately, along comes this terrific post from Scott Aaronson about a discussion with Penrose.

In fact it’s not entirely about Penrose; Aaronson’s main aim was to present an interesting theory of his own as to why a computer can’t be conscious, which relies on non-copyability. He begins by suggesting that the onus is on those who think a computer can’t be conscious to show exactly why. He congratulates Penrose on doing this properly, in contrast to say, John Searle who merely offers hand-wavy stuff about unknown biological properties. I’m not really sure that Searle’s honest confession of ignorance isn’t better than Penrose’s implausible speculations about unknown quantum mechanics, but we’ll let that pass.

Aaronson rests his own case not on subjectivity and qualia but on identity. He mentions several examples where the limitless copyability of a program seems at odds with the strong sense of a unique identity we have of ourselves – including Star Trek style teleportation and the fact that a program exists in some Platonic sense forever, whereas we only have one particular existence. He notes that at the moment one of the main differences between brain and computer is our ability to download, amend and/or re-run programs exactly; we can’t do that at all with the brain. He therefore looks for reasons why brain states might be uncopyable. The question is, how much detail do we need before making a ‘good enough’ copy? If it turns out that we have to go down to the quantum level we run into the ‘no-cloning’ theorem; the price of transferring the quantum state of your brain is the destruction of the original. Aaronson makes a good case for the resulting view of our probable uniqueness being an intuitively comfortable one, in tune with our intuitions about our own nature. It also offers incidentally a sort of reconciliation between the Everett many-worlds view and the Copenhagen interpretation of quantum physics: from a God’s eye point of view we can see the world as branching, while from the point of view of any conscious entity (did I just accidentally call God unconscious?) the relevant measurements are irreversible and unrealised branches can be ‘lopped off’. Aaronson reports, amusingly, that Penrose absolutely accepts that the Everett view follows from our current understanding of quantum physics; he just regards that as a reductio ad absurdum – ie, the Everett view is so absurd that its following from our current understanding shows there must be something wrong with that understanding.

What about Penrose? According to Aaronson he now prefers to rest his case on evolutionary factors and downplay his logical argument based on Gödel. That’s a shame in my view. The argument goes something like this (if I garble it someone will perhaps offer a better version).

First we set up a formal system for ourselves. We can just use the letters of the alphabet, normal numbers, and normal symbols of formal logic, with all the usual rules about how they can be put together. Then we make a list consisting of all the valid statements that can be made in this system. By ‘valid’, we don’t mean they’re true, just that they comply with the rules about how we put characters together (eg, if we use an opening bracket, there must be a closing one in an appropriate place). The list of valid statements will go on forever, of course, but we can put them in alphabetical order and number them. The list obviously includes everything that can be said in the system.

Some of the statements, by pure chance, will be proofs of other statements in the list. Equally, somewhere in our list will be statements that tell us that the list includes no proof of statement x. Somewhere else will be another statement – let’s call this the ‘key statement’ – that says this about itself. Instead of x, the number of that very statement itself appears. So this one says, there is no proof in this system of this statement.

Is the key statement correct – is there no proof of the key statement in the system? Well, we could look through the list, but as we know it goes on indefinitely; so if there really is no proof there we’d simply be looking forever. So we need to take a different tack. Could the key statement be false? Well, if it is false, then what it says is wrong, and there is a proof somewhere in the list. But that can’t be, because if there’s a proof of the key statement anywhere, the key statement must be true! Assuming the key statement is false leads us unavoidably to the conclusion that it is true, in the light of what it actually says. We cannot have a contradiction, so the key statement must be true.

So by looking at what the key statement says, we can establish that it is true; but we also establish that there is no proof of it in the list. If there is no proof in the list, there is no possible proof in our system, because we know that the list contains everything that can be said within our system; there is therefore a true statement in our system that is not provable within it. We have something that cannot be proved in an arbitrary formal system, but which human reasoning can show to be true; ergo, human reasoning is not operating within any such formal system. All computers work in a formal system, so it follows that human reasoning is not computational.
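For anyone who wants it compressed, the standard shape of the argument can be put in symbols (my own gloss, not Penrose’s notation; Prov_F is the provability predicate for the formal system F, and ⌜G⌝ is the number coding the key statement):

```latex
% The 'key statement' G asserts its own unprovability in the formal system F:
\[
  G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
\]
% If F proved G, then Prov_F(<G>) would hold and G would be false; but a sound
% system proves only truths, so that cannot happen. Hence F does not prove G,
% which is exactly what G asserts:
\[
  F \nvdash G \quad \text{and yet } G \text{ is true.}
\]
```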

As Aaronson says, this argument was discussed to the point of exhaustion when it first came out, which is probably why Penrose prefers other arguments now. Aaronson rejects it, pointing out that he himself has no magic ability to see “from the outside” whether a given formal system is consistent; why should an AI do any better – he suggests Turing made a similar argument. Penrose apparently responded that this misses the point, which is not about a mystical ability to perceive consistency but the human ability to transcend any given formal system and move up to an expanded one.

I’ll leave that for readers to resolve to their own satisfaction. Let’s go back to Aaronson’s suggestion that the burden of proof lies on those who argue for the non-computability of consciousness. What an odd idea that is! How would that play at the Patent Office?

“So this is your consciousness machine, Mr A? It looks like a computer. How does it work?”

“All I’ll tell you is that it is a computer. Then it’s up to you to prove to me that it doesn’t work – otherwise you have to give me rights over consciousness! Bwah ha ha!”

Still, I’ll go along with it. What have I got? To begin with I would timidly offer my own argument that consciousness is really a massive development of recognition, and that recognition itself cannot be algorithmic.

Intuitively it seems clear to me that the recognition of linkages and underlying entities is what powers most of our thought processes. More formally, both of the main methods of reasoning rely on recognition; induction because it relies on recognising a real link (eg a causal link) between thing a and thing b; deduction because it reduces to the recognition of consistent truth values across certain formal transformations. But recognition itself cannot operate according to rules. In a program you just hand the computer the entities to be processed; in real world situations they have to be recognised. But if recognition used rules and rules relied on recognising the entities to which the rules applied, we’d be caught in a vicious circularity. It follows that this kind of recognition cannot be delivered by algorithms.

The more general case rests on, as it were, the non-universality of computation. It’s argued that computation can run any algorithm and simulate, to any required degree of accuracy, any set of physical states of affairs. The problem is that many significant kinds of states of affairs are not describable in purely physical or algorithmic terms. You cannot list the physical states of affairs that correspond to a project, a game, or a misunderstanding. You can fake it by generating only sets of states of affairs that are already known to correspond with examples of these things, but that approach misses the point. Consciousness absolutely depends on intentional states that can’t be properly specified except in intentional terms. That doesn’t contradict physics or even add to it the way new quantum mechanics might; it’s just that the important aspects of reality are not exhausted by physics or by computation.

The thing is, I think long exposure to programmable environments and interesting physical explanations for complex phenomena has turned us all increasingly into flatlanders who miss a dimension; who naturally suppose that one level of explanation is enough, or rather who naturally never even notice the possibility of other levels; but there are more things in heaven and earth than are dreamt of in that philosophy.

I liked this account by Bobby Azarian of why digital computation can’t do consciousness. It has several virtues; it’s clear, identifies the right issues and is honest about what we don’t know (rather than passing off the author’s own speculations as the obvious truth or the emerging orthodoxy). Also, remarkably, I almost completely agree with it.

Azarian starts off well by suggesting that lack of intentionality is a key issue. Computers don’t have intentions and don’t deal in meanings, though some put up a good pretence in special conditions. Azarian takes a Searlian line by relating the lack of intentionality to the maxim that you can’t get meaning-related semantics from mere rule-bound syntax. Shuffling digital data is all computers do, and that can never lead to semantics (or any other form of meaning or intentionality). He cites Searle’s celebrated Chinese Room argument (actually a thought experiment) in which a man given a set of rules that allow him to provide answers to questions in Chinese does not thereby come to understand Chinese. But, the argument goes, if the man, by following rules, cannot gain understanding, then a computer can’t either. Azarian mentions one of the objections Searle himself first named, the ‘systems reply’: this says that the man doesn’t understand, but a system composed of him and his apparatus does. Searle really only offered rhetoric against this objection, and in my view the objection is essentially correct. The answers the Chinese Room gives are not answers from the man, so why should his lack of understanding show anything?

Still, although I think the Chinese Room fails, I think the conclusion it was meant to establish – no semantics from syntax – turns out to be correct, so I’m still with Azarian. He moves on to make another Searlian point: simulation is not duplication. Searle pointed out that nobody gets wet from digitally simulated rain, and hence simulating a brain on a computer should not be expected to produce consciousness. Azarian gives some good examples.

The underlying point here, I would say, is that a simulation always seeks to reproduce some properties of the thing simulated, and drops others which are not relevant for the purposes of the simulation. Simulations are selective and ontologically smaller than the thing simulated – which, by the way, is why Nick Bostrom’s idea of indefinitely nested world simulations doesn’t work. The same thing can, however, be simulated in different ways depending on what the simulation is for. If I get a computer to simulate me doing arithmetic by calculating, then I get the correct result. If it simulates me doing arithmetic by operating a humanoid robot that writes random characters on a board with chalk, it doesn’t – although the latter kind of simulation might be best if I were putting on a play. It follows that Searle isn’t necessarily exactly right, even about the rain. If my rain simulation program turns on sprinklers at the right stage of a dramatic performance, then that kind of simulation will certainly make people wet.

Searle’s real point, of course, is that the properties a computer has in itself, of running sets of rules, are not the relevant ones for consciousness, and Searle hypothesises that the required properties are biological ones we have yet to identify. This general view, endorsed by Azarian, is roughly correct, I think. But it’s still plausibly deniable. What kind of properties does a conscious mind need? Alright, we don’t know, but might not information processing be relevant? It looks to a lot of people as if it might be, in which case that’s what we should need for consciousness in an effective brain simulator. And what properties does a digital computer have in itself – the property of doing information processing? Booyah! So maybe we even need to look again at whether we can get semantics from syntax. Maybe in some sense syntactic operations can underpin processes which transcend mere syntax?

Unless you accept Roger Penrose’s proof that human thinking is not algorithmic (it seems to have drifted off the radar in recent years), this means we’re still really left with a contest of intuitions, at least until we find out for sure what the magic missing ingredient for consciousness is. My intuitions are with Azarian, partly because the history of failure with strong AI looks to me very like a history of running up against the inadequacy of algorithms. But I reckon I can go further and say what the missing element is. The point is that consciousness is not computation, it’s recognition. Humans have taken recognition to a new level where we recognise not just items of food or danger, but general entities, concepts, processes, future contingencies, logical connections, and even philosophical ontologies. The process of moving from recognised entity to recognised entity by recognising the links between them is exactly the process of thought. But recognition, in us, does not work by comparing items with an existing list, as an algorithm might do; it works by throwing a mass of potential patterns at reality and seeing what sticks. Until something works, we can’t even tell what counts as a pattern; the locks create their own keys.

It follows that consciousness is not essentially computational (I still wonder whether computation might not subserve the process at some level). But now I’m doing what I praised Azarian for avoiding, and presenting my own speculations…

Insects are conscious: in fact they were the first conscious entities. At least, Barron and Klein think so. The gist of the argument, which draws on the theories of Bjorn Merker, is based on the idea that subjective consciousness arises from certain brain systems that create a model of the organism in the world. The authors suggest that the key part of the vertebrate brain for these purposes is the midbrain; insects do not, in fact, have a direct structural analogue, but the authors argue that they have others that evidently generate the same kind of unified model; it should therefore be presumed that they have consciousness.

Of course, it’s usually the cortex that gets credit for the ‘higher’ forms of cognition, and it does seem to be responsible for a lot of the fancier stuff. Barron and Klein, however, argue that damage to the midbrain tends to be fatal to consciousness, while damage to the cortex can leave it impaired in content but essentially intact. They propose that the midbrain integrates two different sets of inputs: external sensory ones make their way down via the colliculus while internal messages about the state of the organism come up via the hypothalamus; nuclei in the middle bring them together in a model of the world around the organism which guides its behaviour. It’s that centralised model that produces subjective consciousness. Organisms that respond directly to stimuli in a decentralised way may still produce complex behaviour but they lack consciousness, as do those that centralise the processing but lack the required model.

Traditionally it has often been assumed that the insect nervous system is decentralised; but Barron and Klein say this view is outdated and they present evidence that although the structures are different, the central complex of the insect system integrates external and internal data, forming a model which is used to control behaviour in very much the same kind of process seen in vertebrates. This seems convincing enough to me; interestingly the recruitment of insects means that the nature of the argument changes into something more abstract and functional.

Does it work, though? Why would a model with this kind of functional property give rise to consciousness – and what kind of consciousness are we talking about? The authors make it clear that they are not concerned with reflective consciousness or any variety of higher-order consciousness, where we know that we know and are aware of our awareness. They say what they’re after is basic subjective consciousness and they speak of there being ‘something it is like’, the phrase used by Nagel which has come to define qualia, the subjective items of experience. However, Barron and Klein cannot be describing qualia-style consciousness. To see why, consider two of the thought-experiments defining qualia. Chalmers’s zombie twin is physically exactly like Chalmers, yet lacks qualia. Mary the colour scientist knows all the science about colour vision there could ever be, but she doesn’t know qualia. It follows rather strongly that no anatomical evidence can ever show whether or not any creature has qualia. If possession of a human brain doesn’t clinch the case for the zombie, broadly similar structures in other organisms can hardly do so; if science doesn’t tell Mary about qualia it can’t tell us either.

It seems possible that Barron and Klein are actually hunting a non-qualic kind of subjective consciousness, which would be a perfectly respectable project; but the fact that their consciousness arises out of a model which helps determine behaviour suggests to me that they are really in pursuit of what Ned Block characterised as access consciousness; the sort that actually gets decisions made rather than the sort that gives rise to ineffable feels.

It does make sense that a model might be essential to that; by setting up a model the brain has sort of created a world of its own, which sounds sort of like what consciousness does.

Is it enough, though? Suppose we talk about robots for a moment; if we had a machine that created a basic model of the world and used it to govern its progress through the world, would we say it was conscious? I rather doubt it; such robots are not unknown and sometimes they are relatively simple. It might do no more than scan the position of some blocks and calculate a path between them; perhaps we should call that rudimentary consciousness, but it doesn’t seem persuasive.

Briefly, I suspect there is a missing ingredient. It may well be true that a unified model of the world is necessary for consciousness, but I doubt that it’s sufficient. My guess is that one or both of the following is also necessary: first, the right kind of complexity in the processing of the model; second, the right kind of relations between the model and the world – in particular, I’d suggest there has to be intentionality. Barron and Klein might contend that the kind of model they have in mind delivers that, or that another system can do so, but I think there are some important further things to be clarified before I welcome insects into the family of the conscious.

People who cannot form mental images? ‘Aphantasia’ is an extraordinary new discovery; Carl Zimmer and Adam Zeman seem between them to have uncovered a fascinating and previously unknown mental deficit (although there is a suggestion that Galton and others may have been aware of it earlier).

What is this aphantasia? In essence, no pictures in the head. Aphantasics cannot ‘see’ mental images of things that are not actually present in front of their eyes. Once the possibility received publicity Zimmer and Zeman began to hear from a stream of people who believe they have this condition. It seems people manage quite well with it and few had ever noticed anything wrong – there’s an interesting cri de coeur from one such sufferer here. Such people assume that talk of mental images is metaphorical or figurative and that others, like them, really only deal in colourless facts. It was the discovery of a man who had lost the visualising ability through injury that first brought it to notice: a minority of people who read about his problem thought it was more remarkable that he had ever been able to form mental images than that he now could not.

Some caution is surely in order. When a new disease or disability comes along there are usually people who sincerely convince themselves that they are sufferers without really having the condition. Some might be mistaken. Moreover, the phenomenology of vision has never been adequately clarified, and I strongly suspect it is more complex than we realise. There are, I think, several different senses in which you can form a mental image; those images may vary in how visually explicit they are, and it could well be that not all aphantasics are suffering the same deficits.

However that may be, it seems truly remarkable that such a significant problem could have passed unnoticed for so long. Spatial visualisation is hardly a recondite capacity; it is often subject to testing. One kind of widely used test presents the subject with a drawing of a 3D shape and a selection of others that resemble it. One is a perfect rotated copy of the original shape, and subjects are asked to pick it out. There is very good evidence that people solve these problems by mentally rotating an image of the target shape; shapes rotated 180 degrees regularly take twice as long to spot as ones that have been rotated 90; moreover the speed of mental rotation appears to be surprisingly constant between subjects. How do aphantasics cope with these tests at all? One would think that the presence of a significantly handicapped minority would have become unmissably evident by now.
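To make the quantitative claim concrete, here is a toy linear model of the classic mental-rotation result (purely illustrative; the baseline and rotation rate below are made-up numbers, not figures from any of the studies mentioned):

```python
# Toy model of mental rotation: response time grows roughly linearly with the
# angular disparity between the target shape and the rotated candidate,
# consistent with subjects rotating an internal image at a fairly steady rate.

BASELINE_MS = 1000.0              # hypothetical encode/compare/respond time
ROTATION_RATE_DEG_PER_SEC = 60.0  # hypothetical, roughly constant across subjects

def predicted_rt_ms(angle_deg: float) -> float:
    """Predicted response time (ms) for matching a shape rotated by angle_deg."""
    return BASELINE_MS + (angle_deg / ROTATION_RATE_DEG_PER_SEC) * 1000.0

if __name__ == "__main__":
    for angle in (0, 90, 180):
        print(f"{angle:3d} degrees -> {predicted_rt_ms(angle):.0f} ms")
    # The rotation component at 180 degrees is twice the one at 90 degrees,
    # which is the regularity described in the text.
```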

One extraordinary possibility, I think, is that aphantasia is in reality a kind of mental blindsight. Subjects with blindsight are genuinely unable to see things consciously, but respond to visual tasks with a success rate far better than chance. It seems that while they can’t see consciously, by some other route their unconscious mind still can. It seems tantalisingly possible to me that aphantasics have an equivalent problem with mental images; they do form mental images but are never aware of them. Some might feel that suggestion is nonsensical; doesn’t the very idea of a mental image imply its presence in consciousness? Well, perhaps not: perhaps our subconscious has a much more developed phenomenal life than we have so far realised?

At any rate, expect to hear much more about this…

Smooth or chunky? Like peanut butter, experience could have different granularities; in practice it seems the answer might be ‘both’. Herzog, Kammer and Scharnowski here propose a novel two-level model in which initial processing is done on a regular stream of fine-grained percepts. At this first level things get ‘labelled’ with initial colours, durations, and so on, but relatively little of this processing ever becomes conscious. Instead the results lurch into conscious awareness in irregular chunks of up to 400 milliseconds in duration. The result is nevertheless an apparently smooth and seamless flow of experience – the processing edits everything into coherence.

Why adopt such a complex model? What’s wrong with just supposing that percepts roll straight from the senses into the mind, in a continuous sequence? That is after all how things look. The two-level system is designed to resolve a conflict between two clear findings. On the one hand we do have quite fine-grained perception; we can certainly be aware of things that are much shorter than 400ms in duration. On the other, certain interesting effects very strongly suggest that some experiences only enter consciousness after 400ms.

If, for example, we display a red circle and then a green one a short distance away, with a delay of 400ms, we do not experience two separate circles, but one that moves and changes colour. In the middle of the move the colour suddenly switches between red and green (see the animation – does that work for you?). But our brain could not have known the colour of the second circle until after it appeared, and so it could not have known half-way through that the circle needed to change. The experience can only have been fed to consciousness after the 400ms was up.

A comparable result is obtained with the intermittent presentation of verniers. These are pairs of lines offset laterally to the right or left. If two different verniers are rapidly alternated, we don’t see both, but a combined version in which the offset is the average of those in the two separate verniers. This effect persists for alternations up to 400ms. Again, since the brain cannot know the second offset until it has appeared, it cannot know what average version to present half-way through; ergo, the experience only becomes conscious after a delay of 400ms.
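A minimal sketch of how such a two-level scheme might be mocked up (my own toy illustration, not the authors’ model): percepts are registered and ‘labelled’ continuously, but only resolved into a conscious chunk at the end of each integration window, at which point conflicting labels can be merged – here by averaging vernier offsets.

```python
from dataclasses import dataclass
from typing import List

CHUNK_MS = 400  # length of the hypothesised unconscious integration window

@dataclass
class Percept:
    t_ms: int       # time at which the stimulus was registered
    offset: float   # vernier offset label (negative = left, positive = right)

def conscious_chunks(stream: List[Percept]) -> List[float]:
    """Resolve a continuous stream of labelled percepts into discrete conscious
    contents: one merged (averaged) offset per 400 ms window."""
    chunks: List[float] = []
    window: List[Percept] = []
    window_start = None
    for p in sorted(stream, key=lambda q: q.t_ms):
        if window_start is None:
            window_start = p.t_ms
        if p.t_ms - window_start >= CHUNK_MS:
            chunks.append(sum(q.offset for q in window) / len(window))
            window, window_start = [], p.t_ms
        window.append(p)
    if window:
        chunks.append(sum(q.offset for q in window) / len(window))
    return chunks

# Two alternating verniers (offsets +1 and -1) shown within one window are
# experienced as a single averaged vernier (offset 0), as in the effect described.
print(conscious_chunks([Percept(0, +1.0), Percept(200, -1.0)]))  # -> [0.0]
```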

It seems that even verbal experience works the same way, with a word at the end of a sentence able to smoothly condition our understanding of an ambiguous word (‘mouse’ – rodent or computer peripheral?) if the delay is within 400ms; and there are other examples.

Curiously, the authors make no reference to the famous finding of Libet that our awareness of a decision occurs up to 500ms after it is really made. Libet’s research was about internal perception rather than percepts of external reality, but the similarity of the delay seems striking and surely strengthens the case for the two-level model; it also helps to suggest that we are dealing with an effect which arises from the construction of consciousness, not from the sensory organs or very early processes in the retina or elsewhere.

In general I think the case for a two-level process of some kind is clear and strong, and well set out here. We may reasonably be a little more doubtful about the details of the suggested labelling process; at one point the authors refer to percepts being assigned ‘numbers’; hang on to those quote marks would be my advice.

The authors are quite open about their uncertainty around consciousness itself. They think that the products of initial processing may enter consciousness when they arrive at attractor states, but the details of why and how are not really clear; nor is it clear whether we should think of the products being passed to consciousness (or relabelled as conscious?) when they hit attractor states or becoming conscious simply by virtue of being in an attractor state. We might go so far as to suppose that the second level, consciousness, has no actual location or consistent physical equivalent, merely being the sum of all resolved perceptual states in the brain at any one time.

That points to the wider issue of the Frame Problem, which the paper implicitly raises but does not quite tackle head on. The brain gets fed a very variable set of sensory inputs and manages to craft a beautifully smooth experience out of them (mostly); it looks as if an important part of this must be taking place in the first level processing, but it is a non-trivial task which goes a long way beyond interpolating colours and positions.

The authors do mention the Abhidharma Buddhist view of experience as a series of discrete moments within a flow; we’ve touched on this before in discussions of findings by Varela and others that the flow of consciousness seems to have a regular pulse; it would be intriguing and satisfactory if that pulse could be related to the first level of processing hypothesised here; we’re apparently talking about something in the 100ms range, which seems a little on the long side for the time slices proposed; but perhaps a kind of synthesis is possible…?

Why are we evil? This short piece asks how the “Dark Tetrad” of behaviours could have evolved.

The Dark Tetrad is an extended version of the Dark Triad of three negative personality traits/behaviours (test yourself here – I scored ‘infrequently vile’). The original three are ‘Machiavellianism’ – selfishly deceptive, manipulative behaviour; Psychopathy – indifference or failure to perceive the feelings of others; and Narcissism – vain self-obsession. Clearly there’s some overlap and it may not seem clear that these are anything but minor variants on selfishness, but research does suggest that they are distinct. Machiavellians, for example, do not over-rate themselves and don’t need to be admired; narcissists aren’t necessarily liars or deceivers; psychopaths are manipulative but don’t really get people.

These three traits account for a good deal of bad behaviour, but it has been suggested that they don’t explain everything; we also need a fourth kind of behaviour, and the leading candidate is ‘Everyday Sadism’: simple pleasure in the suffering of others, regardless of whether it brings any other advantage for oneself. Whether this is ultimately the correct analysis of ‘evil’ behaviour or not, all four types are readily observable in varying degrees. Socially they are all negative, so how could they have evolved?

There doesn’t seem to me to be much mystery about why ‘Machiavellian’ behaviour would evolve (I should acknowledge at this point that using Machiavelli as a synonym for manipulativeness actually understates the subtlety and complexity of his philosophy). Deceiving others in one’s own interests has obvious advantages which are only negated if one is caught. Most of us practice some mild cunning now and then, and the same sort of behaviour is observable in animals, notably our cousins the chimps.

Psychopathy is a little more surprising. Understanding other people, often referred to as ‘theory of mind’, is a key human achievement, though it seems to be shared by some other animals to a degree. However, psychopaths are not left puzzled by their fellow human beings; it’s more that they lack empathy and see others as simply machines whose buttons can freely be pushed. This can be a successful attitude and we are told that somewhat psychopathic traits are commonly found in the successful leaders of large organisations. That raises the question of why we aren’t all psychopaths; my guess is that psychopathic behaviour pays off best in a society where most people are normal; if the proportion grows above a certain small level, the damage done by competition between psychopaths starts to outweigh the benefits and the numbers adjust.
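That guess is essentially frequency-dependent selection: exploitation pays while exploiters are rare, and stops paying once they are common. Purely to illustrate the shape of the idea (the payoffs below are invented, and have nothing to do with any actual behavioural data), a toy calculation:

```python
# Toy frequency-dependent payoffs: an 'exploiter' does well against trusting
# cooperators but very badly against other exploiters, so its advantage
# disappears above a small equilibrium frequency.

def exploiter_payoff(p: float) -> float:
    # p = fraction of exploiters in the population (all numbers hypothetical)
    return 4.0 * (1 - p) + (-10.0) * p   # thrives on cooperators; mutual exploitation is costly

def cooperator_payoff(p: float) -> float:
    return 2.0 * (1 - p) + 1.0 * p       # steady payoff, loses a little to exploiters

for p in (0.0, 0.05, 0.15, 0.3, 0.5):
    print(f"exploiter frequency {p:.2f}: "
          f"exploiter {exploiter_payoff(p):+.2f} vs cooperator {cooperator_payoff(p):+.2f}")

# Payoffs are equal where 4 - 14p = 2 - p, i.e. p = 2/13 (about 15%): below that
# exploiters do better and spread, above it they do worse and decline, so the
# population settles at a small stable proportion of exploiters.
```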

Narcissism is puzzling because narcissists are less self-sufficient than the rest of us and also have deluded ideas about what they can accomplish; neither of these are positive traits in evolutionary terms. One positive side is that narcissists expect a lot from themselves and in the right circumstances they will work hard and behave well in order to protect their own self-image. It may be that in the right context these tendencies win esteem and occasional conspicuous success, and that this offsets the disadvantages.

Finally, sadism. It’s hard to see what benefits accrue to anyone from simply causing pain, detached from any material advantage. Sadism clearly requires theory of mind – if you didn’t realise other people were suffering, there would be no point in hurting them. It’s difficult to know whether there are genuine animal examples. Cats seem to torture mice they have caught, letting them go and instantly catching them again, but to me the behaviour seems automatic or curious, not motivated by any idea that the mice experience pain. Similarly in other cases it generally seems possible to find an alternative motivation.

What evolutionary advantage could sadism confer? Perhaps it makes you more frightening to rivals – but it may also make and motivate enemies. I think in this case we must assume that rather than being a trait with some downsides but some compensating value, it is a negative feature that just comes along unavoidably with a large free-running brain. The benefit of consciousness is that it takes us out of the matrix of instinctive and inherited patterns of behaviour and allows detached thought and completely novel responses. In a way Nature took a gamble with consciousness, like a good manager recognising that the good staff might do better if left without specific instructions. On the whole, the bet has paid off handsomely, but it means that the chance of strange and unfavourable behaviour in some cases or on some occasions just has to be accepted. In the case of everyday sadism, the sophisticated theory of mind which human beings have is put to distorted and unhelpful use.

Maybe then, sadism is the most uniquely human kind of evil?

…for two theories?

Ihtio kindly drew my attention to an interesting paper which sets integrated information theory (IIT) against its authors’ own preferred set of ideas – semantic pointer competition (SPC). I’m not quite sure where this ‘one on one’ approach to theoretical discussion comes from. Perhaps the authors see IIT as gaining ground to the extent that any other theory must now take it on directly. The effect is rather of a single bout from some giant knock-out tournament of theories of consciousness (I would totally go for that, incidentally; set it up, somebody!).

We sort of know about IIT by now, but what is SPC? The authors of the paper, Paul Thagard and Terrence C Stewart, suggest that:

consciousness is a neural process resulting from three mechanisms: representation by firing patterns in neural populations, binding of representations into more complex representations called semantic pointers, and competition among semantic pointers to capture the most important aspects of an organism’s current state.

I like the sound of this, and from the start it looks like a contender. My main problem with IIT is that, as was suggested last time, it seems easy enough to imagine that a whole lot of information could be integrated but remain unilluminated by consciousness; it feels as if there needs to be some other functional element; but if we supply that element it looks as if it will end up doing most of the interesting work and relegate the integration process to something secondary or even less important. SPC looks to be foregrounding the kind of process we really need.

The authors provide three basic hypotheses on which SPC rests:

H1. Consciousness is a brain process resulting from neural mechanisms.
H2. The crucial mechanisms for consciousness are: representation by patterns of firing in neural populations, binding of these representations into semantic pointers, and competition among semantic pointers.
H3. Qualitative experiences result from the competition won by semantic pointers that unpack into neural representations of sensory, motor, emotional, and verbal activity.

The particular mention of the brain in H1 is no accident. The authors stress that they are offering a theory of how brains work. Perhaps one day we’ll find aliens or robots who manage some form of consciousness without needing brains, but for now we’re just doing the stuff we know about. “…a theory of consciousness should not be expected to apply to all possible conscious entities.”

Well, actually, I’d sort of like it to – otherwise it raises questions about whether it really is consciousness itself we’re explaining. The real point here, I think, is meant to be a criticism of IIT, namely that it is so entirely substrate-neutral that it happily assigns consciousness to anything that is sufficiently filled with integrated information. Thagard and Stewart want to distance themselves from that, claiming it as a merit of their theory that it only offers consciousness to brains. I sympathise with that to a degree, but if it were me I’d take a slightly different line, resting on the actual functional features they describe rather than simple braininess. The substrate does have to be capable of doing certain things, but there’s no need to assume that only neurons could conceivably do them.

The idea of binding representations into ‘semantic pointers’ is intriguing and seems like the right kind of way to be going; what bothers me most here is how we get the representations in the first place. Not much attention is given to this in the current paper: Thagard and Stewart say neurons that interact with the world and with each other become “tuned” to regularities in the environment. That’s OK, but not really enough. It can’t be that mere interaction is enough, or everything would be a prolific representation of everything around it; but picking out the right “regularities” is a non-trivial task, arguably the real essence of representation.
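For a flavour of what ‘binding representations into semantic pointers’ usually means in this literature, here is a bare-bones sketch of the standard mechanism, circular convolution over high-dimensional vectors (in the style of Plate’s holographic reduced representations and Eliasmith’s Semantic Pointer Architecture; a toy illustration under those assumptions, not code from Thagard and Stewart’s paper):

```python
import numpy as np

D = 512  # dimensionality; binding and unbinding get cleaner as D grows
rng = np.random.default_rng(0)

def random_pointer() -> np.ndarray:
    """A random unit vector standing in for a neural firing-pattern representation."""
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Bind two representations into one via circular convolution (FFT trick)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def approx_inverse(a: np.ndarray) -> np.ndarray:
    """Approximate inverse used for unbinding: reverse all elements but the first."""
    return np.concatenate(([a[0]], a[:0:-1]))

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

colour, red, shape, circle = (random_pointer() for _ in range(4))

# A single 'semantic pointer' encoding {colour: red, shape: circle}
pointer = bind(colour, red) + bind(shape, circle)

# Unbinding recovers a noisy version of the bound filler, recognisable by similarity
recovered = bind(pointer, approx_inverse(colour))
print(similarity(recovered, red), similarity(recovered, circle))  # high vs near zero
```

The point of the unbinding check is just that a single compressed vector can still be queried for its parts, which is what makes ‘semantic pointers’ more than a metaphor for bound representations.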

Competition is the way particular pointers get selected to enter consciousness, according to H2; I’m not exactly sure how that works and I have doubts about whether open competition will do the job. One remarkable thing about consciousness is its coherence and direction, and unregulated competition seems unlikely to produce that, any more than a crowd of people struggling for access to a microphone would produce a fluent monologue. We can imagine that a requirement for coherence is built in, but the mechanism that judges coherence turns out to be rather important and rather difficult to explain.

So does SPC deliver? H3 claims that it gives rise to qualitative experience: the paper splits the issue into two questions: first, why are there all these different experiences, and second, why is there any experience at all? On the first, the answers are fairly good, but not particularly novel or surprising; a diverse range of sensory inputs and patterns of neural firing naturally give rise to a diversity of experience. On the second question, the real Hard Problem, we don’t really get anywhere; it’s suggested that actual experience is an emergent property of the three processes of consciousness. Maybe it is, but that doesn’t really explain it. I can’t seriously criticise Thagard and Stewart because no-one has really done any better with this; but I don’t see that SPC has a particular edge over IIT in this respect either.

Not that their claim to superiority rests on qualia; in fact they bring a range of arguments to suggest that SPC is better at explaining various normal features of consciousness. These vary in strength, in my opinion. First feature up is how consciousness starts and stops. SPC has a good account, but I think IIT could do a reasonable job, too. The second feature is how consciousness shifts, and this seems a far stronger case; pointers naturally lend themselves better to this than the gradual shifts you would at first sight expect from a mass of integrated information. Next we have a claim that SPC is better at explaining the different kinds or grades of consciousness that different organisms presumably have. I suppose the natural assumption, given IIT, would be that you either have enough integration for consciousness or you don’t. Finally, it’s claimed that SPC is the winner when it comes to explaining the curious unity/disunity of consciousness. Clearly SPC has some built-in tools for binding, and the authors suggest that competition provides a natural source of fragmentation. They contrast this with Tononi’s concept of quantity of consciousness, an idea they disparage as meaningless in the face of the mental diversity of the organisms in the world.

As I say, I find some of these points stronger than others, but on the whole I think the broad claim that SPC gives a better picture is well founded. To me it seems the advantages of SPC mainly flow from putting representation and pointers at the centre. The dynamic quality this provides, and the spark of intentionality, make it better equipped to explain mental functions than the more austere apparatus of IIT. To me SPC is like a vehicle that needs overhauling and some additional components (some of those not readily available); it doesn’t run just now but you can sort of see how it would. IIT is more like an elegant sculptural form which doesn’t seem to have a place for the wheels.

Are we being watched? Over at Aeon, George Musser asks whether some AI could quietly become conscious without our realising it. After all, it might not feel the need to stop whatever it was doing and announce itself. If it thought about the matter at all, it might think it was prudent to remain unobserved. It might have another form of consciousness, not readily recognisable to us. For that matter, we might not be readily recognisable to it, so that perhaps it would seem to itself to be alone in a solipsistic universe, with no need to try to communicate with anyone.

There have been various scenarios about this kind of thing in the past which I think we can dismiss without too much hesitation. I don’t think the internet is going to attain self-awareness because however complex it may become, it simply isn’t organised in the right kind of way. I don’t think any conventional digital computer is going to become conscious either, for similar reasons.

I think consciousness is basically an expanded development of the faculty of recognition. Animals have gradually evolved the ability to recognise very complex extended stimuli; in the case of human beings things have taken a massive leap further, so that we can recognise abstractions and generalities. This makes a qualitative change because we are no longer reliant on what is coming in through our senses from the immediate environment; we can think about anything, even imaginary or nonsensical things.

I think this kind of recognition has an open-ended quality which means it can’t be directly written into a functional system; you can’t just code it up or design the mechanism. So no machines have been really good candidates; until recently. These days I think some AI systems are moving into a space where they learn for themselves in a way which may be supported by their form and the algorithms that back them up, but which does have some of the open-ended qualities of real cognition. My perception is that we’re still a long way from any artificial entity growing into consciousness; but it’s no longer a possibility which can be dismissed without consideration; so a good time for George to be asking the question.

How would it happen? I think we have to imagine that a very advanced AI system has been set to deal with a very complex problem. The system begins to evolve approaches which yield results and it turns out that conscious thought – the kind of detachment from immediate inputs I referred to above – is essential. Bit by bit (ha) the system moves towards it.

I would not absolutely rule out something like that; but I think it is extremely unlikely that the researchers would fail to notice what was happening.

First, I doubt whether there can be forms of consciousness which are unrecognisable to us. If I’m right, consciousness is a kind of function which yields purposeful output behaviour, and purposefulness implies intelligibility. We would just be able to see what it was up to. Some qualifications to this conclusion are necessary. We’ve already had chess AIs that play certain end-games in ways that don’t make much sense to human observers, even chess masters, and look like random flailing. We might get some patterns of behaviour like that. But the chess ‘flailing’ leads reliably to mate, which ultimately is surely noticeable. Another point to bear in mind is that our consciousness was shaped by evolution, and by the competition for food, safety, and reproduction. The supposed AI would have evolved into consciousness in response to completely different imperatives, which might well make some qualitative difference. The thoughts of the AI might not look quite like human cognition. Nevertheless I still think the intentionality of the AI’s outputs could not help but be recognisable. In fact the researchers who set the thing up would presumably have the advantage of knowing the goals which had been set.

Second, we are really strongly predisposed to recognising minds. Meaningless whistling of the wind sounds like voices to us; random marks look like faces; anything that is vaguely humanoid in form or moves around like a sentient creature is quickly anthropomorphised by us and awarded an imaginary personality. We are far more likely to attribute personhood to a dumb robot than dumbness to one with true consciousness. So I don’t think it is particularly likely that a conscious entity could evolve without our knowing it and keep a covert, wary eye on us. It’s much more likely to be the other way around: that the new consciousness doesn’t notice us at first.

I still think in practice that that’s a long way off; but perhaps the time to think seriously about robot rights and morality has come.