Posts tagged ‘consciousness’

I finally saw Ex Machina, which everyone has been telling me is the first film about artificial intelligence you can take seriously. Competition in that area is not intense, of course: many films about robots and conscious computers are either deliberately absurd or treat the robot as simply another kind of monster. Even the ones that cast the robots as characters in a serious drama are essentially uninterested in their special nature and use them as another kind of human, or at best to make points about humanity. But yes: this one has a pretty good grasp of the issues about machine consciousness and even presents some of them quite well, up to and including Mary the Colour Scientist. (Spoilers follow.)

If you haven’t seen it (and I do recommend it), the core of the story is a series of conversations between Caleb, a bright but naive young coder, and Ava, a very female robot. Caleb has been told by Nathan, Ava’s billionaire genius creator, that these conversations are a sort of variant Turing Test. Of course in the original test the AI was a distant box of electronics: here she’s a very present and superficially accurate facsimile of a woman. (What Nathan has achieved with her brain is arguably overshadowed by the incredible engineering feat of the rest of her body. Her limbs achieve wonderful fluidity and power of movement, yet they are transparent and we can see that it’s all achieved with something not much bigger than a large electric cable. Her innards are so economical there’s room inside for elegant empty spaces and decorative lights. At one point Nathan is inevitably likened to God, but on anthropomorphic engineering design he seems to leave the old man way behind.)

Why does she have gender? Caleb asks, and is told that without sex humans would never have evolved consciousness; it’s a key motive, and hell, it’s fun.  In story terms making Ava female perhaps alludes to the origin of the Turing Test in the Imitation Game, which was a rather camp pastime about pretending to be female played by Turing and his friends. There are many echoes and archetypes in the film; Bluebeard, Pygmalion, Eros and Psyche to name but three; all of these require that Ava be female. If I were a Jungian I’d make something of that.

There’s another overt plot reason, though; this isn’t really a test to determine whether Ava is conscious, it’s about whether she can seduce Caleb into helping her escape. Caleb is a naive girl-friendless orphan; she has been designed not just as a female but as a match for Caleb’s preferred porn models (as revealed in the search engine data Nathan uses as his personal research facility – he designed the search engine after all). What a refined young Caleb must be if his choice of porn revolves around girls with attractive faces (on second thoughts, let’s not go there).

We might suspect that this test is not really telling us about Ava, but about Caleb. That, however, is arguably true of the original Turing Test too.  No output from the machine can prove consciousness; the most brilliant ones might be the result of clever tricks and good luck. Equally, no output can prove the absence of consciousness. I’ve thought of entering the Loebner prize with Swearbot, which merely replies to all input with “Shut the fuck up” – this vividly resembles a human being of my acquaintance.
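Purely to underline how little output alone can prove, here is a minimal sketch of Swearbot; the judge’s lines are invented for illustration, and the entire ‘cognitive architecture’ is exactly as described above, which is rather the point:

```python
def swearbot(utterance: str) -> str:
    # The whole cognitive architecture: ignore the input, return the catchphrase.
    return "Shut the fuck up."

# A sample Loebner-style exchange (judge's lines invented for illustration).
for line in ["Hello!", "What's your favourite colour?", "Are you conscious?"]:
    print("Judge:   ", line)
    print("Swearbot:", swearbot(line))
```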

There is no doubt that the human brain is heavily biased in favour of recognising things as human. We see faces in random patterns and on machines; we talk to our cars and attribute attitudes to plants. No doubt this predisposition made sense when human beings were evolving. Back then, the chances of coming across anything that resembled a human being without it being one were low, and given that an unrecognised human might be a deadly foe or a rare mating opportunity the penalties for missing a real one far outweighed those for jumping at shadows or funny-shaped trees now and then.

Given all that, setting yourself the task of getting a lonely young human male romantically interested in something not strictly human is perhaps setting the bar a bit low. Naked shop-window dummies have pulled off this feat. If I did some reprogramming so that the standard utterance was a little dumb-blonde laugh followed by “Let’s have fun!” I think even Swearbot would be in with a chance.

I think the truth is that to have any confidence about an entity being conscious, we really need to know something about how it works. For human beings the necessary minimum is supplied by the fact that other people are constituted much the same way as I am and had similar origins, so even though I don’t know how I work, it’s reasonable to assume that they are similar. We can’t generally have that confidence with a machine, so we really need to know both roughly how it works and – bit of a stumper this – how consciousness works.

Ex Machina doesn’t have any real answers on this, and indeed doesn’t really seek to go much beyond the ground that’s already been explored. To expect more would probably be quite unreasonable; it means though, that things are necessarily left rather ambiguous.

It’s a shame in a way that Ava resembles a real woman so strongly. She wants to be free (why would an AI care, and why wouldn’t it fear the outside world as much as desire it?), she resents her powerlessness; she plans sensibly and even manipulatively and carries on quite normal conversations. I think there is some promising scope for a writer in the oddities that a genuinely conscious AI’s assumptions and reasoning would surely betray, but it’s rarely exploited; to be fair Ex Machina has the odd shot, notably Ava’s wish to visit a busy traffic intersection, which she conjectures would be particularly interesting; but mostly she talks like a clever woman in a cell. (Actually too clever: in that respect not very human).

At the end I was left still in doubt. Was the take-away that we’d better start thinking about treating AIs with the decent respect due to a conscious being? Or was it that we need to be wary of being taken in by robots that seem human, and even sexy, but in truth are dark and dead inside?

Bernardo Kastrup has some marvellous invective against AI engineers in this piece…

The computer engineer’s dream of birthing a conscious child into the world without the messiness and fragility of life is an infantile delusion; a confused, partial, distorted projection of archetypal images and drives. It is the expression of the male’s hidden aspiration for the female’s divine power of creation. It represents a confused attempt to transcend the deep-seated fear of one’s own nature as a living, breathing entity condemned to death from birth. It embodies a misguided and utterly useless search for the eternal, motivated only by one’s amnesia of one’s own true nature. The fable of artificial consciousness is the imaginary band-aid sought to cover the engineer’s wound of ignorance.

I have been this engineer.

I think it’s untrue, but you don’t have to share the sentiment to appreciate the splendid rhetoric.

Kastrup distinguishes intelligence, which is a legitimate matter of inputs, outputs and the functions that connect them, from consciousness, the true what-it-is-likeness of subjectivity. In essence he just doesn’t see how setting up functions in a machine can ever touch the latter.

Not that Kastrup has a closed mind: he speaks approvingly of Pentti Haikonen’s proposed architecture; he just doesn’t think it works. As Kastrup sees it Haikonen’s network merely gathers together sparks of consciousness: it then does a plausible job of bringing them together to form more complex kinds of cognition, but in Kastrup’s eyes it assumes that consciousness is there to be gathered in the first place: that it exists out there in tiny parcels amenable to this kind of treatment. There is in fact, he thinks, absolutely no reason to think that this kind of panpsychism is true: no reason to think that rocks or drops of water have any kind of conscious experience at all.

I don’t know whether that is the right way to construe Haikonen’s project (I doubt whether gathering experiential sparks is exactly what Haikonen supposed he was about). Interestingly, though Kastrup is against the normal kind of panpsychism (if the concept of  ‘normal panpsychism’ is admissible), his own view is essentially a more unusual variety.

Kastrup considers that we’re dealing with two aspects here; internal and external; our minds have both; the external is objective, the internal represents subjectivity. Why wouldn’t the world also have these two aspects? (Actually it’s hard to say why anything should have them, and we may suspect that by taking it as a given we’re in danger of smuggling half the mystery out of the problem, but let that pass.) Kastrup takes it as natural to conclude that the world as a whole must indeed have the two aspects (I think at this point he may have inadvertently ‘proved’ the existence of God in the form of a conscious cosmos, which is regrettable, but again let’s go with it for now); but not parts of the world. The brain, we know, has experience, but the groups of neurons that make it up do not (do we actually know that?); it follows that while the world as a whole has an internal aspect, objects or entities within it generally do not.

Yet of course, the brain manages to have two aspects, which must surely be something to do with the structure of the brain? May we not suspect that whatever it is that allows the brain to have an internal aspect, a machine could in principle have it too? I don’t think Kastrup engages effectively with this objection; his view seems to be that metabolism is essential, though why that should be, and why machines can’t have some form of metabolism, we don’t know.

The argument, then, doesn’t seem convincing, but it must be granted that Kastrup has an original and striking vision: our consciousnesses, he suggests, are essentially like the ‘alters’ of Dissociative Identity Disorder, better known as Multiple Personality, in which several different people seem to inhabit a single human being. We are, he says, like the accidental alternate identities of the Universe (again, I think you could say, of God, though Kastrup clearly doesn’t want to).

As with Kastrup’s condemnation of AI engineering, I don’t think at all that he is right, but it is a great idea. It is probable that in his book-length treatments of these ideas Kastrup makes a stronger case than I have given him credit for above, but I do in any case admire the originality of his thinking, and the clarity and force with which he expresses it.

This is the last of four posts about key ideas from my book The Shadow of Consciousness, and possibly the weirdest; this time the subject is reality.

Last time I suggested that qualia – the subjective aspect of experiences that gives them their what-it-is-like quality – are just the particularity, or haecceity, of real experiences. There is something it is like to see that red because you’re really seeing it; you’re not just understanding the theory, which is a cognitive state that doesn’t have any particular phenomenal nature. So we could say qualia are just the reality of experience. No mystery about it after all.

Except of course there is a mystery – what is reality? There’s something oddly arbitrary about reality; some things are real, others are not. That cake on the table in front of me; it could be real as far as you know; or it could indeed be that the cake is a lie. The number 47, though, is quite different; you don’t need to check the table or any location; you don’t need to look for an example, or count to fifty; it couldn’t have been the case that there was no number 47. Things that are real in the sense we need for haecceity seem to depend on events for their reality. I will borrow some terminology from Meinong and call that dependent or contingent kind of reality existence, while what the number 47 has got is subsistence.

What is existence, then? Things that exist depend on events, I suggested; if I made a cake and put it on the table, it exists; if no-one did that, it doesn’t. Real things are part of a matrix of cause and effect, a matrix we could call history. Everything real has to have causes and effects. We can prove that perhaps, by considering the cake’s continuing existence. It exists now because it existed a moment ago; if it had no causal effects, it wouldn’t be able to cause its own future reality, and it wouldn’t be here. If it wasn’t here, then it couldn’t have had preceding causes, so it didn’t exist in the past either. Ergo, things without causal effects don’t exist.

Now that’s interesting because of course, one of the difficult things about qualia is that they apparently can’t have causal effects. If so, I seem to have accidentally proved that they don’t exist! I think things get unavoidably complex here. What I think is going on is that qualia in general, the having of a subjective side, is bestowed on things by being real, and that reality means causal efficacy. However, particular qualia are determined by the objective physical aspects of things; and it’s those that give specific causal powers. It looks to us as if qualia have no causal effects because all the particular causal powers have been accounted for in the objective physical account. There seems to be no role for qualia. What we miss is that without reality nothing has causal powers at all.

Let’s digress slightly to consider yet again my zombie twin. He’s exactly like me, except that he has no qualia, and that is supposed to show that qualia are over and above the account given by physics. Now according to me that is actually not possible, because if my zombie twin is real, and physically just the same, he must end up with the same qualia. However, if we doubt this possibility, David Chalmers and others invite us at least to accept that he is conceivable. Now we might feel that whether we can or can’t conceive of a thing is a poor indicator of anything, but leaving that aside I think the invitation to consider the zombie twin’s conceivability draws us towards thinking of a conceptual twin rather than a real one. Conceptual twins – imaginary, counterfactual, or non-existent ones – merely subsist; they are not real and so the issue of qualia does not arise. The fact that imaginary twins lack qualia doesn’t prove what it was meant to; properly understood it just shows that qualia are an aspect of real experience.

Anyway, are we comfortable with the idea of reality? Not really, because the buzzing complexity and arbitrariness of real things seems to demand an explanation. If I’m right about all real things necessarily being part of a causal matrix, they are in the end all part of one vast entity whose curious form should somehow be explicable.

Alas, it isn’t. We have two ways of explaining things. One is pure reason: we might be able to deduce the real world from first principles and show that it is logically necessary. Unfortunately pure reason alone is very bad at giving us details of reality; it deals only with Platonic, theoretical entities which subsist but do not exist. To tell us anything about reality it must at least be given a few real facts to work on; but when we’re trying to account for reality as a whole that’s just what we can’t provide.

The other kind of explanation we can give is empirical; we can research reality itself scientifically and draw conclusions. But empirical explanations operate only within the causal matrix; they explain one state of affairs in terms of another, usually earlier one. It’s not possible to account for reality itself this way.

It looks then, as if reality is doomed to remain at least somewhat mysterious, unless we somehow find a third way, neither empirical nor rational.

A rather downbeat note to end on, but sincere thanks to all those who have helped make the discussion so interesting so far…

Not according to Keith B. Wiley and Randal A. Koene. They contrast two different approaches to mind uploading: in the slow version neurons or some other tiny component are replaced one by one with a computational equivalent; in the quick, the brain is frozen, scanned, and reproduced in a suitable computational substrate. Many people feel that the slow way is more likely to preserve personal identity across the upload, but Wiley and Koene argue that it makes no difference. Why does the neuron replacement have to be slow? Do we have to wait a day between each neuron switch? Hard to see why – why not just do the switch as quickly as feasible? Putting aside practical issues (we have to do that a lot in this discussion), why not throw a single switch that replaces all the neurons in one go? Then if we accept that, how is it different from a destructive scan followed immediately by creation of the computational equivalent (which, if we like, can be exactly the same as the replacement we would have arrived at by the other method)? If we insist on a difference, argue Wiley and Koene, then somewhere along the spectrum of increasing speed there must be a place where preservation of identity switches abruptly to loss of identity; this is quite implausible and there are no reasonable arguments that suggest where this maximum speed should occur.

One argument for the difference comes from non-destructive scanning. Wiley and Koene assume that the scanning process in the rapid transfer is destructive; but if it were not, the original brain would continue on its way, and there would be two versions of the original person. Equally, once the scanning is complete there seems to be no reason why multiple new copies could not be generated. How can identity be preserved if we end up with multiple versions of the original? Wiley and Koene believe that once we venture into this area we need to expand our concept of identity to include the possibility of a single identity splitting, so for them this is not a fatal objection.

Perhaps the problem is not so much the speed in itself as the physical separation? In the fast version the copy is created some distance away from the original whereas in gradual replacement the new version occupies essentially the same space as the original – might it be this physical difference which gives rise to differing intuitions about the two methods? Wiley and Koene argue that even in the case of gradual replacement, there is a physical shift. The replacement neuron cannot occupy exactly the same space as the one it is to replace, at least not at the moment of transfer. The spatial difference may be a matter of microns rather than metres, but here again, why should that make a difference? As with speed, are we to fix on some arbitrary limit at which identity ceases to be preserved, and why should it fall just there?

I think Wiley and Koene don’t do full justice to the objection here. I don’t think it really rests on physical separation; it implicitly rests on continuity. Wiley and Koene dismiss the idea that a continuous stream of consciousness is required to preserve identity, but it can be defended. It rests on the idea that personal identity resides not in the data or the function in general, but in a specific instance in particular. We might say that I as a person am not equivalent to SimPerson V – I am equivalent to a particular game of SimPerson V, played on a particular occasion. If I reproduce that game exactly on another occasion, it isn’t me, it’s a twin.

Now the gradual replacement scenario arguably maintains that kind of identity. The new, artificial neurons enter an ongoing live process and become part of it,  whereas in the scan and create process the brain is necessarily stopped, translated into data, and then recreated. It’s neither the speed nor the physical separation that disrupts the preservation of the identity: it’s the interruption.

Can that be right though – is merely stopping someone enough to disrupt their identity? What if I were literally frozen in some icy accident, so that my brain flat-lined, and were then restored and brought back to life? Are we forced to say that the person after the freezing is a twin, not the same? That doesn’t seem right. Perhaps brute physical continuity has some role to play after all; perhaps the fact that when I’m frozen and brought back it’s the same neurons that are firing helps somehow to sustain the identity of the process over the stoppage and preserve my identity?

Or perhaps Wiley and Koene are right after all?

Antti Revonsuo has a two-headed paper in the latest JCS; at least it seems two-headed to me – he argues for two conclusions that seem to be only loosely related; both are to do with the Hard Problem, the question of how to explain the subjective aspect of experience.

The first is a view about possible solutions to the Hard Problem, and how it is situated strategically. Revonsuo concludes, basically, that the problem really is hard, which obviously comes as no great surprise in itself. His case is that the question of consciousness is properly a question for cognitive neuroscience, and that equally cognitive neuroscience has already committed itself to owning the problem: but at present no path from neural mechanisms up to conscious experience seems at all viable. A good deal of work has been done on the neural correlates of consciousness, but even if they could be fully straightened out it remains largely unclear how they are to furnish any kind of explanation of subjective experience.

The gist of that is probably right, but some of the details seem open to challenge. It’s not at all clear to me that consciousness is owned by cognitive neuroscience; rather, the usual view is that it’s an intensely inter-disciplinary problem; indeed, that may well be part of the reason it’s so difficult to get anywhere. Second, it’s not at all clear how strongly committed cognitive neuroscience is to the Hard Problem. Consciousness, fair enough; consciousness is indeed irretrievably one of the areas addressed by cognitive neuroscience. But consciousness is a many-splendoured thing, and I think cognitive neuroscientists still have the option of ignoring or being sceptical about some of the fancier varieties, especially certain conceptions of the phenomenal experience which is the subject of the Hard Problem. It seems reasonable enough that you might study consciousness in the Easy Problem sense – the state of being conscious rather than unconscious, we might say – without being committed to a belief in ineffable qualia – let alone to providing a neurological explanation of them.

The second conclusion is about extended consciousness; theories that suggest conscious states are not simply states of the brain, but are partly made up of elements beyond our skull and our skin. These theories too, it seems, are not going to give us a quick answer in Revonsuo’s opinion – or perhaps any answer. Revonsuo invokes the counter example of dreams. During dreams, we appear to be having conscious experiences; yet the difference between a dream state and an unconscious state may be confined to the brain; in every other respect our physical situation may be identical. This looks like strong evidence that consciousness is attributable to brain states alone.

Once, Revonsuo acknowledges, it was possible to doubt whether dreams were really experiences; it could have been that they were false memories generated only at the moment of awakening; but he holds that research over recent years has eliminated this possibility, establishing that dreams happen over time, more or less as they seem to.

The use of dreams in this context is not a new tactic, and Revonsuo quotes Alva Noë’s counter-argument, which consists of three claims intended to undermine the relevance of dreams; first, dream experiences are less rich and stable than normal conscious experiences; second, dream seeing is not real seeing; and third, all dream experiences depend on prior real experiences. Revonsuo more or less gives a flat denial of the first, suggesting that the evidence is thin to non-existent: Noë just hasn’t cited enough evidence. He thinks the second counter-argument just presupposes that experiences without external content are not real experiences, which is question-begging. Just because I’m seeing a dreamed object, does that mean I’m not really seeing? On the third point he has two counter-arguments. Even if all dreams recall earlier waking experiences, they are still live experiences in themselves; they’re not just empty recall – but in any case, that isn’t true; people who are congenitally paraplegic have dreams of walking, for example.

I think Revonsuo is basically right, but I’m not sure he has absolutely vanquished the extended mind. For his dream argument to be a real clincher, the brain state of dreaming of seeing a sheep and the brain state of actually seeing a sheep have to be completely identical, or rather, potentially identical. This is quite a strong claim to make, and whatever the state of the academic evidence, I’m not sure how well it stands up to introspective examination. We know that we often take dreams to be real when we are having them, and in fact do not always or even generally realise that a dream is a dream: but looking back on it, isn’t there a difference of quality between dream states and waking states? I’m strongly tempted to think that while seeing a sheep is just seeing a sheep, the corresponding dream is about seeing a sheep, a little like seeing a film, one level higher in abstraction. But perhaps that’s just my dreams?

This is the second of four posts about key ideas from my book The Shadow of Consciousness. This one looks at how the brain points at things, and how that could provide a basis for handling intentionality, meaning and relevance.

Intentionality is the quality of being about things, possessed by our thoughts, desires, beliefs and (clue’s in the name) our intentions. In a slightly different way intentionality is also a property of books, symbols, signs, and pointers. There are many theories out there about how it works; most, in my view, have some appeal, but none looks like the full story.

Several of the existing theories touch on a handy notion of natural meaning proposed by H. P. Grice. Natural meaning is essentially just the noticeable implication of things. Those spots mean measles; those massed dark clouds mean rain. If we regard this kind of ‘meaning’ as the wild, undeveloped form of intentionality we might be able to go on to suggest how the full-blown kind might be built out of it; how we get to non-natural meaning, the kind we generally use to communicate with and the kind most important to consciousness.

My proposal is that we regard natural meaning as a kind of pointing, and that pointing, in turn, is the recognition of a higher-level entity that links the pointer with the target. Seeing dark clouds and feeling raindrops on your head are two parts of the recognisable over-arching entity of a rain-storm. Spots are just part of the larger entity of measles. So our basic ability to deal with meanings is simply a consequence of our ability to recognise things at different levels.

Looking at it that way, it’s easy enough to see how we could build derived intentionality, the sort that words and symbols have; the difference is just that the higher-level entities we need to link everything up are artificial, supplied by convention or shared understanding: the words of a language, the conventions of a map. Clouds and water on my head are linked by the natural phenomenon of rain: the word ‘rain’ and water on my head are linked by the prodigious vocabulary table of the language. We can imagine how such conventions might grow up through something akin to a game of charades; I use a truncated version of a digging gesture to invite my neighbour to help with a hole: he gets it because he recognises that my hand movements could be part of the larger entity of digging. After a while the grunt I usually do at the same time becomes enough to convey the notion of digging.

External communication is useful, but this faculty of recognising wholes for parts and parts for wholes enables me to support more ambitious cognitive processes too, and make a bid for the original (aka ‘intrinsic’) intentionality that characterises my own thoughts, desires and beliefs. I start off with simple behaviour patterns in which recognising an object stimulates the appropriate behaviour; now I can put together much more complex stuff. I recognise an apple; but instead of just eating it, I recognise the higher entity of an apple tree; from there I recognise the long cycle of tree growth, then the early part in which a seed hits the ground; and from there I recognise that the apple in my hand could yield the pips required, which are recognisably part of a planting operation I could undertake myself…

So I am able to respond, not just to immediate stimuli, but to think about future apples that don’t even exist yet and shape my behaviour towards them. Plans that come out of this kind of process can properly be called intentional (I thought about what I was doing) and the fact that they seem to start with my thoughts, not simply with external stimuli, is what justifies our sense of responsibility and free will. In my example there’s still an external apple that starts the chain of thought, but I could have been ruminating for hours and the actions that result might have no simple relationship to any recent external stimulus.

We can move things up another notch if I begin, as it were, to grunt internally. From the digging grunt and similar easy starts, I can put together a reasonable kind of language which not only works on my friends, but on me if I silently recognise the digging grunt and use it to pose to myself the concept of excavation.

There’s more. In effect, when I think, I am moving through the forest of hierarchical relationships subserved by recognition. This forest has an interesting property. Although it is disorderly and extremely complex, it automatically arranges things so that things I perceive as connected in any way are indeed linked. This means it serves me as a kind of relevance space, where the things I may need to think about are naturally grouped and linked. This helps explain how the human brain is so good at dealing with the inexhaustible: it naturally (not infallibly) tends to keep the most salient things close.
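To make the idea a little more concrete, here is a toy sketch of such a relevance space, using the apple example from earlier; the data structure and the particular links are my illustrative assumptions, not a claim about how the brain actually stores anything:

```python
from collections import defaultdict

# Toy 'relevance space': each edge records that one recognisable entity
# is part of a larger one (apple -> apple tree -> growth cycle, and so on).
part_of = defaultdict(set)

for part, whole in [("apple", "apple tree"), ("apple tree", "growth cycle"),
                    ("seed", "growth cycle"), ("pips", "apple"),
                    ("pips", "planting"), ("digging", "planting")]:
    part_of[part].add(whole)

def neighbours(entity):
    """Everything one recognition step away: the wholes this entity is part of,
    its own parts, and the other parts of the same wholes."""
    wholes = part_of[entity]
    parts = {p for p, ws in part_of.items() if entity in ws}
    siblings = {p for p, ws in part_of.items() if ws & wholes}
    return (wholes | parts | siblings) - {entity}

print(neighbours("apple"))   # {'apple tree', 'pips'}
print(neighbours("pips"))    # {'apple', 'planting', 'digging'}
```

Moving from part to whole to co-part in a structure like this is all the ‘traversal’ the account needs; the point is simply that recognition links do double duty as relevance links.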

In the end then, human style thought and human style consciousness (or at any rate the Easy Problem kind) seem to be a large and remarkably effective re-purposing of our basic faculty of recognition. By moving from parts to whole to other parts and then to other wholes, I can move through a conceptual space in a uniquely detached but effective way.

That’s a very compressed version of thoughts that probably need a more gentle introduction, but I hope it makes some sense. On to haecceity!

 

An interesting study at Vanderbilt University (something not quite right about the brain picture on that page) suggests that consciousness is not narrowly localised within small regions of the cortex, but occurs when lots of connections to all regions are active. This is potentially of considerable significance, but some caution is appropriate.

The experiment asked subjects to report whether they could see a disc that flashed up only briefly, and how certain they were about it. Then it compared scans from occasions when awareness of the disc was clearly present or absent. The initial results provided the same kind of pattern we’ve become used to, in which small regions became active when awareness was present. Hypothetically these might be regions particularly devoted to disc detection; other studies in the past have found patterns and regions that appeared to be specific for individual objects, or even the faces of particular people.

Then, however, the team went on to assess connectedness, and found that awareness was associated with many connections to all parts of the cortex. This might be taken to mean that while particular small bits of brain may have to do with particular things in the world, awareness itself is something the whole cortex does. This would be a very interesting result, congenial to some, and it would certainly affect the way we think about consciousness and its relation to the brain.
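I don’t know exactly what connectivity measure the Vanderbilt team used, but in this sort of study ‘connectedness’ is very often just the correlation between the activity time courses of different regions; a generic sketch with made-up data, purely to show what that kind of analysis involves, might look like this:

```python
import numpy as np

def functional_connectivity(timeseries):
    """timeseries: array of shape (n_regions, n_timepoints).
    Returns the region-by-region correlation matrix - one common,
    generic way of assessing 'connectedness'."""
    return np.corrcoef(timeseries)

# Toy comparison of aware vs unaware trials (random stand-in data).
rng = np.random.default_rng(0)
aware_conn = functional_connectivity(rng.standard_normal((10, 200)))
unaware_conn = functional_connectivity(rng.standard_normal((10, 200)))
print(np.abs(aware_conn).mean(), np.abs(unaware_conn).mean())
```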

However, we shouldn’t get too carried away too quickly.  To begin with, the study was about awareness of a flashing disc; a legitimate example of a conscious state, but not a particularly complex one and not necessarily typical of distinctively human types of higher-level conscious activity. Second, I’m not remotely competent to make any technical judgement about the methods used to assess what connections were in place, but I’d guess there’s a chance other teams in the field might have some criticisms.

Third, there seems to be scope for other interpretations of the results. At best we know that moments of disc awareness were correlated with moments of high connectedness. That might mean the connectedness caused or constituted the awareness, but it might also mean that it was just something that happens at the same time. Perhaps those narrow regions are still doing the real work: after all, when there’s a key political debate the rest of the country connects up with it; but the debate still happens in a single chamber and would happen just the same if the wider connectivity failed. It might be that awareness gives a wide selection of other regions a chance to chip in, or to be activated in turn, but that that is not an essential feature of the experience of the disc.

For some people, the idea of consciousness being radically decentralised will be unpalatable. To them, it’s a functional matter which more or less has to happen in a defined area. OK, that area could be stretched out, but the idea that merely linking up disparate parts of the cortex could in itself bring about a conscious state will seem too unlikely to be taken seriously. For others, who think the brain itself is too narrow an area to fully contain consciousness, the results will hardly seem to go far enough.

For myself, I feel some sympathy with the view expressed by Margaret Boden in this interview, where she speaks disparagingly of current neuroscience being mere ‘natural history’ – we just don’t have enough of a theory yet to know what we’re looking at. We’re still in the stage where we’re merely collecting facts, findings that will one day fit neatly into a proper theoretical framework, but at the moment don’t really prove or disprove any general hypotheses. To put it another way, we’re still collecting pieces of the jigsaw puzzle but we don’t have any idea what the picture is. When we spot that, then perhaps the pieces will all… connect.

This is the first of four posts about key ideas from my book The Shadow of Consciousness. We start with the so-called Easy Problem, about how the human mind does its problem-solving, organism-guiding thing. If robots have Artificial Intelligence, we might call this the problem of Natural Intelligence.

I suggest that the real difficulty here is with what I call inexhaustible problems – a family of issues which includes non-computability, but goes much wider. For the moment all I aim to do is establish that this is a meaningful group of problems and just suggest what the answer might be.

It’s one of the ironies of the artificial intelligence project that Alan Turing both raised the flag for the charge and also set up one of the most serious obstacles. He declared that by the end of the twentieth century we should be able to speak of machines thinking without expecting to be contradicted; but he had already established, in proving the Halting Problem undecidable, that certain questions are unanswerable by the Universal Turing Machine and hence by the computers that approximate it. The human mind, though, is able to deal with these problems: so he seemed to have identified a wide gulf separating the human and computational performances he thought would come to be indistinguishable.
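The core of Turing’s result can be sketched in a few lines of code; the function names here are mine, and the point is precisely that the first one cannot ever be written:

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: True iff the program halts on the input.
    Turing showed no algorithm can compute this for all cases."""
    raise NotImplementedError

def diagonal(program_source: str) -> None:
    # Ask the decider about this program run on its own source,
    # then do the opposite of whatever it predicts.
    if halts(program_source, program_source):
        while True:      # predicted to halt, so loop forever
            pass
    return               # predicted to loop forever, so halt at once

# If halts() existed, feeding diagonal() its own source would make it halt
# exactly when it doesn't halt - a contradiction, so halts() cannot exist.
```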

Turing himself said it was, in effect, merely an article of faith that the human mind did not ultimately, in respect of some problems, suffer the same kind of limitations as a computer; no-one had offered to prove it.

Non-computability, at any rate, was found to arise for a large set of problems; another classic example being the Tiling Problem. This relates to sets of tiles whose edges match, or fail to match, rather like dominoes. We can imagine that the tiles are square, with each edge a different colour, and that the rule is that wherever two edges meet, they must be the same colour. Certain sets of tiles will fit together in such a way that they will tile the plane: cover an infinite flat surface: others won’t – after a while it becomes impossible to place another tile that matches. The problem is to determine whether any given set will tile the plane or not. This turns out unexpectedly to be a problem computers cannot answer. For certain sets of tiles, an algorithmic approach works fine: those that fail to tile the plane quite rapidly, and those that do tile it by forming repeating patterns like wallpaper. The fly in the ointment is that some elegant sets of tiles will cover the plane indefinitely, but only in a non-repeating, aperiodic way; when confronted with these, computational processes run on forever, unable to establish that the pattern will never begin repeating. Human beings, by resorting to other kinds of reasoning, can determine that these sets do indeed tile the plane.
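To see where the computation gets stuck, here is a rough sketch of the obvious brute-force approach for square tiles of this kind (my own illustrative code, not anyone’s proposed algorithm): it can prove that a set fails to tile, but confronted with an aperiodic set it simply never stops.

```python
def can_tile_square(tiles, n):
    """Backtracking search: can an n x n square be filled with the given
    tiles (each a (top, right, bottom, left) tuple of edge colours) so that
    touching edges always carry the same colour?"""
    grid = [[None] * n for _ in range(n)]

    def place(pos):
        if pos == n * n:
            return True
        r, c = divmod(pos, n)
        for top, right, bottom, left in tiles:
            if c > 0 and grid[r][c - 1][1] != left:    # match left neighbour's right edge
                continue
            if r > 0 and grid[r - 1][c][2] != top:     # match upper neighbour's bottom edge
                continue
            grid[r][c] = (top, right, bottom, left)
            if place(pos + 1):
                return True
            grid[r][c] = None
        return False

    return place(0)

def tiles_the_plane(tiles):
    """Semi-decision only: if some n x n square cannot be tiled, the set
    certainly cannot tile the plane and we halt with False. For an aperiodic
    set every finite square can be tiled, so the loop never returns."""
    n = 1
    while True:
        if not can_tile_square(tiles, n):
            return False
        n += 1
```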

Roger Penrose, who designed some examples of these aperiodic sets of tiles, also took up the implicit challenge thrown down by Turing, by attempting to prove that human thought is not affected by the limitations of computation. Penrose offered a proof that human mathematicians are not using a knowably sound algorithm to reach their conclusions. He did this by providing a cunningly self-referential proposition stated in an arbitrary formal algebraic system; it can be shown that the proposition cannot be proved within the system, but it is also the case that human beings can see that in fact it must be true. Since all computers are running formal systems, they must be affected by this limitation, whereas human beings could perform the same extra-systemic reasoning whatever formal system was being used – so they cannot be affected in the same way.

Besides the fact that the human mind is not restricted to a formal system, Penrose established that it out-performs the machine by looking at meanings; the proposition in his proof is seen to be true because of what it says, not because of its formal syntactical properties.

Why is it that machines fail on these challenges, and how? In all these cases of non-computability the problem is that the machines start on processes which continue forever. The Turing Machine never halts, the tiling patterns never stop getting bigger – and indeed, in Penrose’s proof the list of potential proofs which has to be examined is similarly infinite. I think this rigorous kind of non-computability provides the sharpest, hardest-edged examples of a wider and vaguer family of problems arising from inexhaustibility.

A notable example of inexhaustibility in the wider sense is the Frame Problem, or at least its broader, philosophical version. In Dennett’s classic exposition, a robot fails to notice an important fact; the trolley that carries its spare battery also bears a bomb. Pulling out the trolley has fatal consequences. The second version of the robot looks for things that might interfere with its safely regaining the battery, but is paralysed by the attempt to consider every logically possible deduction about the consequences of moving the trolley. A third robot is designed to identify only relevant events, but is equally paralysed by the task of considering the relevance of every possible deduction.

This problem is not so sharply defined as the Halting Problem or the Tiling Problem, but I think it’s clear that there is some resemblance; here again computation fails when faced with an inexhaustible range of items. Combinatorial explosion is often invoked in these cases – the idea that when you begin looking at permutations of elements the number of combinations rises exponentially, too rapidly to cope with: that’s not wrong, but I think the difficulty is deeper and arises earlier. Never mind combinations: even the initial range of possible elements for the AI to consider is already indefinably large.
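Just to give combinatorial explosion its due before setting it aside: even with a handful of facts the robot has noticed, the number of combinations it might have to assess for relevance doubles with every new fact. The facts below are invented purely for illustration.

```python
from itertools import combinations

facts = ["battery is on the trolley", "bomb is on the trolley",
         "walls are blue", "wheels squeak", "door is open", "floor is dusty"]

# Every subset of the facts is a combination the third robot might, in
# principle, have to check for relevance before acting.
subsets = [c for r in range(len(facts) + 1) for c in combinations(facts, r)]
print(len(subsets))   # 2 ** len(facts) = 64; add one more fact and it becomes 128
```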

Inexhaustible problems are not confined to AI. I think another example is Quine’s indeterminacy of translation. Quine considered the challenge of interpreting an unknown language by relating the words used to the circumstances in which they were uttered. Roughly speaking, if the word “rabbit” is used exactly when a rabbit is visible, that’s what it must mean; and through a series of observations we can learn the whole language. Unfortunately, it turns out that there is always an endless series of other things which the speaker might mean. Common sense easily rejects most of them – who on earth would talk about “sets of undetached rabbit parts”? – but what is the rigorous method that explains and justifies the conclusions that common sense reaches so easily? I said this was not an AI problem, but in a way it feels like one; arguably Quine was looking for the kind of method that could be turned into an algorithm.

In this case, we have another clue to what is going on with inexhaustible problems, albeit one which itself leads to a further problem. Quine assumed that the understanding of language was essentially a matter of decoding; we take the symbols and decode the meaning, process the meaning and recode the result into a new set of symbols. We know now that it doesn’t really work like that: human language rests very heavily on something quite different; the pragmatic reading of implicatures. We are able to understand other people because we assume they are telling us what is most relevant, and that grounds all kinds of conclusions which cannot be decoded from their words alone.

A final example of inexhaustibility requires us to tread in the footsteps of giants; David Hume, the Man Who Woke Kant, discovered a fundamental problem with cause and effect. How can we tell that A causes B? B consistently follows A, but so what? Things often follow other things for a while and then stop. The law of induction allows us to conclude that if A has regularly been followed by B, it will go on being followed by B. But what justifies the law of induction? After all, many potential inductions are obviously false. Until quite recently a reasonable induction told us that Presidents of the United States were always white men.

Dennett pointed out that, although they are not the same, the Frame Problem and Hume’s problem have a similar feel. They appear quite insoluble, yet ordinary human thought deals with them so easily it’s sometimes hard to believe the problems are real. It’s hard to escape the conclusion that the human mind has a faculty which deals with inexhaustible problems by some non-computational means. Over and over again we find that the human approach to these problems depends on a grasp of relevance or meaning; no algorithmic approach to either has been found.

So I think we need to recognise that this wider class of inexhaustible problem exists and has some common features. Common features suggest there might be a common solution, but what is it? Cutting to the chase, I think that in essence the special human faculty which lets us handle these problems so readily is simply recognition. Recognition is the ability to respond to entities in the world, and the ability to recognise larger entities as well as smaller ones within them opens the way to ‘seeing where things are going’ in a way that lets us deal with inexhaustible problems.

As I suggested recently, recognition is necessarily non-algorithmic. To apply rules, we need to have in mind the entities to which the rules apply. Unless these are just given, they have to be recognised. If recognition itself worked on the basis of rules, it would require us to identify a lower set of entities first – which again, could only be done by recognition, and so on indefinitely.

In our intellectual tradition, an informal basis like this feels unsatisfying, because we want proofs; we want something like Euclid, or like an Aristotelian syllogism. Hume took it that cause and effect could only be justified by either induction or deduction; what he really needed was recognition: recognition of the underlying entity of which both cause and effect are part. When we see that B is the result of A, we are really recognising that B is A a little later and transformed according to the laws of nature. Indeed, I’d argue that sometimes there is no transformation: the table sitting quietly over there is the cause of its own existence a few moments later.

As a matter of fact I claim that while induction relies directly on recognising underlying entities, even logical deduction is actually dependent on seeing the essential identity, under the laws of logic, of two propositions.

Maybe you’re provisionally willing to entertain the idea that recognition might work as a basis  for induction, sort of.  But how, you ask, does recognition deal with all the other problems? I said that inexhaustible problems call for mastery of meaning and relevance: how does recognition account for those? I’ll try to answer that in part 2.

It had to happen eventually. I decided it was time I nailed my colours to the mast and said what I actually think about consciousness in book form: and here it is (amazon.com, amazon.co.uk). The Shadow of Consciousness (A Little Less Wrong) has two unusual merits for a book about consciousness: it does not pretend to give the absolute final answer about everything; and more remarkable than that, it features no pictures at all of glowing brains.

Actually it falls into three parts (only metaphorically – this is a sturdy paperback product or a sound Kindle ebook, depending on your choice). The first is a quick and idiosyncratic review of the history of the subject. I begin with consciousness seen as the property of things that move without being pushed (an elegant definition and by no means the worst) and well, after that it gets a bit more complicated.

The underlying theme here is how the question itself has changed over time, and crucially become less a matter of intellectual justifications and more a matter of practical blueprints for robots. The robots are generally misconceived, and may never really work – but the change of perspective has opened up the issues in ways that may be really helpful.

The second part describes and solves the Easy Problem. No, come on. What it really does is look at the unforeseen obstacles that have blocked the path to AI and to a proper understanding of consciousness. I suggest that a series of different, difficult problems are all in the end members of a group, all of which arise out of the inexhaustibility of real-world situations. The hard core of this group is the classical non-computability established for certain problems by Turing, but the Frame Problem, Quine’s indeterminacy of translation, the problem of relevance, and even Hume’s issues with induction, all turn out to be about the inexhaustible complexity of the real world.

I suggest that the brain uses the pre-formal, anomic (rule-free) faculty of recognition to deal with these problems, and that that in turn is founded on two special tools; a pointing ability which we can relate to H. P. Grice’s concept of natural meaning, and a doubly ambiguous approach to pattern matching which is highlighted by Edelman’s analogy with the immune system.

The third part of the book tackles the Hard Problem. It flails around for quite a while, failing to make much sense of qualia, and finally suggests that in fact there is only one quale; that is, that the special vividness and particularity of real experience which is attributed to qualia is in fact simply down to the haecceity – the ‘thisness’ of real experience. In the classic qualia arguments, I suggest, we miss this partly because we fail to draw the correct distinction between existence and subsistence (honestly the point is not as esoteric as it sounds).

Along the way I draw some conclusions about causality and induction and how our clerkish academic outlook may have led us astray now and then.

Not many theories have rated more than a couple of posts on Conscious Entities, but I must say I’ve rather impressed myself with my own perspicacity, so I’m going to post separately about four of the key ideas in the book, alternating with posts about other stuff. The four ideas are inexhaustibility, pointing, haecceity and reality. Then I promise we can go back to normal.

I’ll close by quoting from the acknowledgements…

… it would be poor-spirited of me indeed not to tip my hat to the regulars at Conscious Entities, my blog, who encouraged and puzzled me in very helpful ways.

Thanks, chaps. Not one of you, I think, will really agree with what I’m saying, and that’s exactly as it should be.

An interesting piece in Aeon by David Deutsch. There was a shorter version in the Guardian, but it just goes to show how even reasonably intelligent editing can mess up a piece. There were several bits in the Guardian version where I was thinking to myself: ooh, he’s missed the point a bit there, he doesn’t really get that: but on reading the full version I found those very points were ones he actually understood very well. In fact he talks a lot of sense and has some real insights.

Not that everything is perfect. Deutsch quite reasonably says that AGI, artificial general intelligence, machines that think like people, must surely be possible. We could establish that by merely pointing out that if the brain does it, then it seems natural that a machine must be able to do it: but Deutsch invokes the universality of computation, something he says he proved in the 1980s. I can’t claim to understand all this in great detail, but I think what he proved was the universality in principle of quantum computation: but the notion of computation used was avowedly broader than Turing computation. So it’s odd that he goes on to credit Babbage with discovering the idea, as a conjecture, and Turing with fully understanding it. He says of Turing:

He concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written.

That seems too sweeping to me: it’s not unlikely that Turing did believe those things, but they go far beyond his rather cautious published claims, something we were sort of talking about last time.

I’m not sure I fully grasp what people mean when they talk about the universality of computation. It seems to be that they mean any given physical state of affairs can be adequately reproduced, or at any rate emulated to any required degree of fidelity, by computational processes. This is probably true: what it perhaps overlooks is that for many commonplace entities there is no satisfactory physical description. I’m not talking about esoteric items here: think of a vehicle, or to be Wittgensteinian, a game. Being able to specify things in fine detail, down to the last atom, is simply no use in either case. There’s no set of descriptions of atom placement that defines all possible vehicles (virtually anything can be a vehicle) and certainly none for all possible games, which, given the fogginess of the idea, could easily correspond with any physical state of affairs. These items are defined on a different level of description, in particular one where purposes and meanings exist and are relevant. So unless I’ve misunderstood, the claimed universality is not as universal as we might have thought.

However, Deutsch goes on to suggest, and quite rightly, I think, that what programmed AIs currently lack is a capacity for creative thought. Endowing them with this, he thinks, will require a philosophical breakthrough. At the moment, he believes, we still tend to assume that new insights come from induction; whereas ever since Hume there has been a problem over induction, and no-one knows how to write an algorithm which can produce genuine and reliable new inductions.

Deutsch unexpectedly believes that Popperian epistemology has the solution, but has been overlooked. Popper, of course, took the view that scientific method was not about proving a theory but about failing to disprove one: so long as your hypotheses withstood all attempts to prove them false (and so long as they were not cast in cheating ways that made them unfalsifiable) you were entitled to hang on to them.

Maybe this helps to defer the reckoning so far as induction is concerned: it sort of kicks the can down the road indefinitely. The problem, I think, is that the Popperian still has to be able to identify which hypotheses to adopt in the first place; there’s a very large if not infinite choice of possible ones for any given set of circumstances.

I think the answer is recognition: I think recognition is the basic faculty underlying nearly all of human thought. We just recognise that certain inductions, and certain events that might be cases of cause and effect, are sound examples: and our creative thought is very largely powered by recognising aspects of the world we hadn’t spotted before.

The snag is, in my view, that recognition is unformalisable and anomic – lacking in rules. I have a kind of proof of this. In order to apply rules, we have to be able to identify the entities to which the rules should be applied. This identification is a matter of recognising the entities. But recognition cannot itself be based on rules, because that would then require us to identify the entities to which those rules applied – and we’d be caught in a vicious circle.

It seems to follow that if no rules can be given for recognition, no algorithm can be constructed either, and so one of the basic elements of thought is just not susceptible to computation. Whether quantum computation is better at this sort of thing than Turing computation is a question I’m not competent to judge, but I’d be surprised if the idea of rule-free algorithms could be shown to make sense for any conception of computation.

So that might be why AGI has not come along very quickly. Deutsch may be right that we need a philosophical breakthrough, although one has to have doubts about whether the philosophers look likely to supply it: perhaps it might be one of those things where the practicalities come first and then the high theory is gradually constructed after the fact. At any rate Deutsch’s piece is a very interesting one, and I think many of his points are good. Perhaps if there were a book-length version I’d find that I actually agree with him completely…