Robot Memory

A new model for robot memory raises some interesting issues. It’s based on three networks, for identification, localization, and working memory. I have a lot of quibbles, but the overall direction looks promising.

The authors (Christian Balkenius, Trond A. Tjøstheim, Birger Johansson and Peter Gärdenfors) begin by boldly proposing four kinds of content for consciousness: emotions, sensations, perceptions (i.e. interpreted sensations), and imaginations. They think that may be the order in which each kind of content appeared during evolution. Of course this could be challenged in various ways. The borderline between sensations and perceptions is fuzzy (I can imagine some arguing that there are no uninterpreted sensations in consciousness, and the degree of interpretation certainly varies greatly), and imagination here covers every kind of content which is about objects not present to the senses, especially the kind of foresight which enables planning. That’s a lot of things to pack into one category. However, the structure is essentially very reasonable.

Imaginations and perceptions together make up an ‘inner world’. The authors say this is essential for consciousness, though they seem to have also said that pure emotion is an early content of consciousness. They propose two tests often used on infants as indicators of such an inner world: tests of the sense of object permanence and of the ‘A-not-B’ error. Both essentially test whether infants (or other cognitive entities) have an ongoing mental model of things which goes beyond what they can directly see. This requires a kind of memory to keep track of the objects that are no longer directly visible, and of their location. The aim of the article is to propose a system for robots that establishes this kind of memory-based inner world.

Imitating the human approach is an interesting and sensible strategy. One pitfall for those trying to build robot consciousness is the temptation to use the power of computers in non-human ways. We need our robot to do arithmetic: no problem! Computers can already do arithmetic much faster and more accurately than mere humans, so we just slap in a calculator module. But that isn’t at all the way the human brain tackles explicit arithmetic, and by not following the human model you risk big problems later.

Much the same is true of memory. Computers can record data in huge quantities with great accuracy and very rapid recall; they are not prone to confabulation, false memories, or vagueness. Why not take advantage of that? But human memory is much less clear-cut; in fact ‘memory’ may be almost as much of a ‘mongrel’ term as consciousness, covering all sorts of abilities to repeat behaviour or summon different contents. I used to work in an office whose doors required a four-digit code. After a week or so we all tapped out each code without thinking, and if we had to tell someone what the digits were we would be obliged to mime the action in mid-air and work out which numbers on the keypad our fingers would have been hitting. In effect, we were using ‘muscle memory’ to recall a four-digit number.

The authors of the article want to produce the same kind of ‘inner world’ used in human thought to support foresight and novel combinations. (They seem to subscribe to an old theory that says new ideas can only be recombinations of things that got into the mind through the senses. We can imagine a gryphon that combines lion and eagle, but not a new beast whose parts resemble nothing we have ever seen. This is another point I would quibble over, but let it pass.)

In fact, the three networks proposed by the authors correspond plausibly with three brain regions: the ventral, dorsal, and prefrontal areas of the cortex. They go on to sketch how the three networks play their roles, and report tests that show appropriate responses in respect of object permanence and other features of conscious cognition. Interestingly, they suggest that daydreaming arises naturally within their model, falling unavoidably out of the way the architecture works rather than being something selected for by evolution.
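
Just to make the division of labour vivid, here is a minimal, purely illustrative sketch of a three-component architecture of this general shape, run against a toy A-not-B trial. It is emphatically not the authors’ implementation – their model uses trained neural networks – and all the names are my own inventions; plain dictionaries stand in for learned representations.

    class IdentificationNet:
        """Stands in for the ventral 'what' pathway: says what is seen."""
        def recognise(self, stimulus):
            return stimulus.get("kind")        # a real network would classify features

    class LocalizationNet:
        """Stands in for the dorsal 'where' pathway: says where it is."""
        def locate(self, stimulus):
            return stimulus.get("position")

    class WorkingMemory:
        """Stands in for prefrontal working memory: an 'inner world' of
        objects that persists when they are no longer visible."""
        def __init__(self):
            self.inner_world = {}              # kind -> last known position

        def update(self, kind, position):
            if kind is not None:               # nothing visible: keep the model as-is
                self.inner_world[kind] = position

        def recall_position(self, kind):
            return self.inner_world.get(kind)  # survives occlusion

    what, where, memory = IdentificationNet(), LocalizationNet(), WorkingMemory()

    # A toy is shown at A, hidden, visibly moved to B, and hidden again.
    for stimulus in [{"kind": "toy", "position": "A"}, {},
                     {"kind": "toy", "position": "B"}, {}]:
        memory.update(what.recognise(stimulus), where.locate(stimulus))

    # Passing A-not-B means reaching towards B rather than perseverating on A.
    print("reach towards:", memory.recall_position("toy"))   # -> B

None of this does the interesting work, of course; the point is only to show where the ‘inner world’ lives – in a store that persists through occlusion and is updated when the object visibly moves.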

I’m sometimes sceptical about the role of explicit modelling in conscious processes, as I think it is easily overstated. But I’m comfortable with what’s being suggested here. There is more to be said about how processes like these, which in the first instance deal with concrete objects in the environment, can develop to handle more abstract entities; but you can’t deal with everything in detail in a brief article, and I’m happy that there are very believable development paths that lead naturally to high levels of abstraction.

At the end of the day, we have to ask: is this really consciousness? Yes and no, I’m afraid. Early on in the piece we find:

On the first level, consciousness contains sensations. Our subjective world of experiences is full of them: tastes, smells, colors, itches, pains, sensations of cold, sounds, and so on. This is what philosophers of mind call qualia.

Well, maybe not quite. Sensations, as usually understood, are objective parts of the physical world (though they may be ones with a unique subjective aspect), processes or events which are open to third-person investigation. Qualia are not. It is possible to simply identify qualia with sensations, but that is a reductive, sceptical view. Zombie twin, as I understand him, has sensations, but he does not have qualia.

So what we have here is not a discussion of ‘Hard Problem’ consciousness, and it doesn’t help us in that regard. That’s not a problem; if the sceptics are right, there’s no Hard stuff to account for anyway; and even if the qualophiles are right, an account of the objective physical side of cognition is still a major achievement. As we’ve noted before, the ‘Easy Problem’ ain’t easy…

Forgotten Crimes

Should people be punished for crimes they don’t remember committing? Helen Beebee asks this meaty question.

Why shouldn’t they? I take the argument to have two main points. First, because they don’t remember, it can be argued that they are no longer the same person as the one who committed the crime. Second, if you’re not the person who committed the crime, you can’t be responsible and therefore should not be punished. Both of these points can be challenged.

The idea that not remembering makes you a different person takes memory to be the key criterion of personal identity, a view associated with John Locke among others. But memory is in practice a very poor criterion. If I remember later, do I then become responsible for the crime? We remember unconsciously things we cannot recall explicitly; does unconscious memory count, and if so, how would we know? If I remember only unconsciously, is my unconscious self the same while my conscious one is not, so that perhaps I ought to suffer punishment I’m only aware of unconsciously? If I do not remember details, but have that sick sense of certainty that, yes, I did it alright, am I near enough the same person? What if I have a false, confabulated memory of the crime, but one that happens to be veridical, to tell the essential truth, if inaccurately? Am I responsible? If so, and if false memories will therefore do, then ought I to be held responsible even if in fact I did not commit the crime, so long as I ‘remember’ doing it?

Moreover, aren’t the practical consequences unacceptable? If forgetting the crime exculpates me, I can commit a murder and immediately take amnestic drugs that will make me forget it. If that tactic is itself punishable, I can take a few more drugs and forget even coming up with the idea. Surely few people think it really works as selectively as that. In order to be free of blame, you really need to be a different person, and that implies losing much more than a single memory. Perhaps it requires the loss of most memories, or more plausibly a loss of mentally retained things that go a lot wider than mere factual or experiential memory; my habits of thought, or the continuity of my personality. I think it’s possible that Locke would say something like this if he were still around. So perhaps the case ought to be that if you do not remember the crime, and other features of your self have suffered an unusual discontinuity, such that you would no longer commit a similar crime in similar circumstances, then you are off the hook. How we could establish such a thing forensically is quite another matter, of course.

What about the second point, though? Does the fact that I am now a different, and also a better person, one who doesn’t remember the crime, mean I shouldn’t be punished? Not necessarily. Legally, for example, we might look to the doctrines of joint enterprise and strict liability to show that I can sometimes be held responsible in some degree for crimes I did not directly commit, and even ones which I was powerless to prevent, if I am nevertheless associated with the crime in the required ways.

It partly depends on why we think people should be punished at all. Deterrence is a popular justification, but it does not require that I am really responsible. Being punished for a crime may well deter me and others from attempting similar crimes in future, even if I didn’t do it at all, never mind cases where my responsibility is merely attenuated by loss of memory. The prevention of revenge is another justification that doesn’t necessarily require me to have been fully guilty. Or there might be doctrines of simple justice that hold to the idea of crime being followed by punishment, not because of any consequences that might follow, but just as a primary ethical principle. Under such a justification, it may not matter whether I am responsible in any strong sense. Oedipus did not know he had killed his father, and so could not be held responsible for patricide, at least on most modern understandings; but he still put out his own eyes.

Much more could be said about all that, but for me the foregoing arguments are enough to suggest that memory is not really the point, either for responsibility or for personal identity. Beebee presents an argument about Bruce Banner and the Hulk; she feels Banner cannot directly be held responsible for the mayhem caused by the Hulk. Perhaps not, but surely the issue there is control, not memory. It’s irrelevant whether Banner remembers what the Hulk did; all that matters is whether he could have prevented it. Beebee makes the case for a limited version of responsibility which applies if Banner can prevent the metamorphosis into Hulk in the first place, but I think we have already moved beyond memory, so the fact that this special responsibility does not apply in the real-life case she mentions is not decisive.

One point which I think should be added to the account, though it too is not decisive, is that the loss of responsibility may entail loss of personhood in a wider sense. If we hold that you are no longer the person who committed the crime, you are not entitled to their belongings or rights either. You are not married to their spouse, nor the heir to their parents. Moreover, if we think you are liable to turn into someone else again at some stage, and we know that criminals are, as it were, in your repertoire of personalities, we may feel justified in locking you up anyway; not as a punishment, but as prudent prevention. To avoid consequences like these and retain our integrity as agents, we may feel it is worth claiming our responsibility for certain past crimes, even if we no longer recall them.

Not really feeling it

It’s not just that we don’t know how anaesthetics work – we don’t even know for sure that they work. Joshua Rothman’s review of a new book on the subject by Kate Cole-Adams quotes poignant stories of people on the operating table who may have been aware of what was going on. In some cases the chance remarks of medical staff seem to have worked almost like post-hypnotic suggestions: so perhaps all surgeons should loudly say that the patient is going to recover and feel better than ever, with new energy and confidence.

How is it that after all this time, we don’t know how anaesthetics work? As the piece aptly remarks, it’s about losing consciousness, and since we don’t know clearly what that is or how we come to have it, it’s no surprise that its suspension is also hard to understand. To add to the confusion, it seems that common anaesthetics paralyse plants, too. Surely it’s our nervous system anaesthetics mainly affect – but plants don’t even have a nervous system!

But come on, don’t we at least know that it really does work? Most of us have been through it, after all, and few have weird experiences; we just don’t feel the pain – or anything. The problem, as we’ve discussed before, is telling whether we don’t feel the pain, or whether we feel it but don’t remember it. This is an example of a philosophical problem that is far from being a purely academic matter.

It seems anaesthetics really do (at least) three different things: they paralyse the patient, making it easier to cut into them without adverse reactions; they remove or modulate conscious awareness (it seems some drugs don’t stop you being aware of the pain, they just stop you caring about it somehow); and they stop the recording of memories, so you don’t recall the pain afterwards. Anaesthetists have a range of drugs to produce each of these effects. In many cases there is little doubt about their effectiveness. If a drug leaves you awake but feeling no pain, or if it simply leaves you with no memory, there’s not that much scope for argument. The problem arises when it comes to anaesthetics that are supposed to ‘knock you out’. The received wisdom is that they just blank out your awareness for a period, but as the review points out, there are some indications that instead they merely paralyse you and wipe your memory. The medical profession doesn’t have a good record of taking these issues very seriously; I’ve read that for years children were operated on after being given drugs that were known to do little more than paralyse them (hey, kids don’t feel pain, not really; next thing you’ll be telling me plants do…).

Actually, views about this are split; a considerable proportion of people take the view that if their memory is wiped, they don’t really care about having been in pain. It’s not a view I share (I’m an unashamed coward when it comes to pain), but it has some interesting implications. If we can make a painful operation OK by giving amnestics to remove all recollection, perhaps we should routinely do the same for victims of accidents. Or do doctors sometimes do that already…?

Replicant identity

The new Blade Runner film has generated fresh interest in the original film; over on IAI Helen Beebee considers how it nicely illustrates the concept of ‘q-memories’.

This relates to the long-established philosophical issue of personal identity; what makes me me, and what makes me the same person as the one who posted last week, or the same person as that child in Bedford years ago? One answer which has been a leading contender at least since Locke is memory; my memories together constitute my identity.

Memories are certainly used as a practical way of establishing identity, whether it be in probing the claims of a supposed long-lost relative or just testing your recall of the hundreds of passwords modern life requires. It is sort of plausible that if all your memories were erased you would become a new person with a fresh start; there have been cases of people who lost decades of memory and underwent personality change, identifying with their own children more readily than their now wrinkly-seeming spouses.

There are various problems with memory as a criterion of identity, though. One is the point that it seems to be circular. We can’t use your memories to validate your identity because in accepting them as your memories we are already implicitly taking you to be the earlier person they come from. If they didn’t come from that person they aren’t validly memories. To get round this objection Shoemaker and Parfit adopted the concept of quasi- or q-memories. Q-memories are like memories but need not relate to any experience you ever had. That, of course, is too loose, allowing delusions to be used as criteria of identity, so it is further specified that q-memories must relate to an experience someone had, and must have been acquired by you in an appropriate way. The appropriate ways are ones that causally relate to the original experience in a suitable fashion, so that it’s no good having q-memories that just happen to match some of King Charles’s. You don’t have to be King Charles, but the q-memories must somehow have got out of his head and into yours through a proper causal sequence.

This is where Blade Runner comes in, because the replicant Rachael appears to be a pretty pure case of q-memory identity. All of her memories, except the most recent ones, are someone else’s; and we presume they were duly copied and implanted in a way that provides the sort of causal connection we need.

This opens up a lot of questions, some of which are flagged up by Beebee. But what about q-memories? Do they work? We might suspect that the part about an appropriate causal connection is a weak spot. What’s appropriate? Don’t Shoemaker and Parfit have to steer a tricky course here between the Scylla of weird results if their rules are too loose, and the Charybdis of bringing back the circularity if they are too tight? Perhaps, but I think we have to remember that they don’t really want to do anything very radical with q-memories; really you could argue it’s no more than a terminological specification, giving them licence to talk of memories without some of the normal implications.

In a different way the case of Rachael actually exposes a weak part of many arguments about memory and identity; the easy assumption that memories are distinct items that can be copied from one mind to another. Philosophers, used to being able to specify whatever mad conditions they want for their thought-experiments, have been helping themselves to this assumption for a long time, and the advent of the computational metaphor for the mind has done nothing to discourage them. It is, however, almost certainly a false assumption.

At the back of our minds when we think like this is a model of memory as a list of well-formed propositions in some regular encoding. In fact, though, much of what we remember is implicit; you recall that zebras don’t wear waistcoats though it’s completely implausible that that fact was recorded anywhere in your brain explicitly. There need be nothing magic about this. Suppose we remember a picture; how many facts does the picture contain? We can instantly come up with an endless list of facts about the relations of items in the picture, but none were encoded as propositions. Does the Mona Lisa have her right hand over her left, or vice versa? You may never have thought about it, but you can easily recall which way it is. In a computer the picture might be encoded as a bitmap; in our brain we don’t really know, but plausibly it might be encoded as a capacity to replay certain neural firing sequences, namely those that were caused by the original experience. If we replay the experience neurally, we can sort of have the experience again and draw new facts from it, the way we could from summoning up a picture; indeed that might be exactly what we are doing.
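
To see the point in miniature, here is a toy sketch in which a ‘remembered picture’ is stored as a single layout – just items with coordinates, standing in for a bitmap or a replayable firing sequence – and facts are read off on demand. The scene is invented; what matters is that none of the answers below is stored anywhere as a proposition.

    # Hypothetical remembered scene: item -> (x, y), smaller y = nearer the top.
    scene = {
        "left_hand":  (40, 60),
        "right_hand": (45, 58),
        "smile":      (42, 70),
    }

    def is_left_of(a, b):
        return scene[a][0] < scene[b][0]

    def is_above(a, b):
        return scene[a][1] < scene[b][1]

    # An open-ended supply of facts from one compact representation:
    print(is_left_of("left_hand", "right_hand"))  # True: derived, never stored
    print(is_above("smile", "left_hand"))         # False: likewise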

But my neurons are not wired up like yours, and it is vanishingly unlikely that we could identify direct equivalents of specific neurons between brains, let alone whole firing sequences. My memories are recorded in a way that is specific to my brain, and they cannot be read directly across into yours.

Of course, replicants may be quite different. It’s likely enough that their brains, however they work, are standardised and perhaps use a regular encoding which engineers can easily read off. But if they work differently from human brains, then it seems to follow that they can’t have the same memories; to have the same memories they would have to be an unbelievably perfect copy of the ‘donor’ brain.

That actually means that memories are in a way a brilliant criterion of personal identity, but only in a fairly useless sense.

However, let me briefly put a completely different argument in a radically different direction. We cannot upload memories, but we know that we can generate false ones by talking to subjects or presenting fake evidence. What does that tell us about memories? I submit it suggests that memories are in essence beliefs: beliefs about what happened in the past. Now we might object that there is typically some accompanying phenomenology. We don’t just remember that we went to the mall; we remember a bit of what it looked like, and other experiential details. But I claim that our minds readily furnish that accompanying phenomenology through confabulation, given the belief, and in fact that a great deal of the phenomenological dressing of all memories, even true ones, is actually confected.

But I would further argue that the malleability of beliefs means that they are completely unsuitable as criteria of identity; it follows that memories are similarly unsuitable, so we have been on the wrong track throughout. (Regular readers may know that in fact I subscribe to a view regarded by most as intolerably crude; that human beings are physical objects like any other and have essentially the same criteria of identity.)

Self denial

Consciousness, as we’ve noted before, is a most interdisciplinary topic, and besides the neurologists, the philosophers, the AI people, the psychologists and so on, the novelists have also, in their rigourless way, delved deep into the matter. Ever since the James boys (William and Henry) started their twin-track investigation there has been an intermittent interchange between the arts and the sciences. Academics like Dan Lloyd have written novels; novelists like our friend Scott Bakker have turned their hand to serious theory.

Recently we seem to have had a new genre of invented brain science. We could include Ian McEwan’s fake paper on de Clérambault’s syndrome, appended to Enduring Love; recently Sebastian Faulks gave us Glockner’s Isthmus; now, in his new novel A Box of Birds, Charles Fernyhough gives us the Lorenzo Circuit.

The Lorenzo Circuit is a supposed structure which pulls together items from various parts of the brain and uses them to constitute memories. It’s sort of assumed that the same function thereby provides consciousness and the sense of self. Since it seems unlikely that a distinct brain structure could have escaped notice this long, we must take it that the Lorenzo is a relatively subtle feature of the connectome, only identifiable through advanced scanning techniques. The Lycée, which despite its name seems to be an English university, has succeeded in mapping the circuit in detail, while Sansom, one of those large malevolent corporate entities that crop up in thrillers, has developed new electrode technology which allows safe and detailed long-term interference with neurons. It’s obvious to everyone that if brought together these two discoveries would provide a potent new technology; a cure for Alzheimer’s is what seems to be at the forefront of everyone’s minds, though I would have thought there were far wilder and more exciting possibilities. The story revolves around the narrator, Dr Yvonne Churcher, an academic at the Lycée, and two of her undergraduate students, Gareth and James.

Unfortunately I didn’t rate the book all that highly as a novel. The plot is put together out of slightly corny thrillerish elements and seems a bit loosely managed. I didn’t like the characters much either. Yvonne seems to be putty in the hands of her students, letting Gareth steal the Lycée’s crucial research without seeming to hold the betrayal of her trust against him at all, and being readily seduced by the negligent James, a nonsense-talking cult member who calls her ‘babe’ (ack!). I’ve seen Gareth described as a “brilliant” character in reviews elsewhere, but sadly not much brilliance seems to be on offer. In fact to be brutal he seemed to me quite a convincing depiction of the kind of student who sits at the back of lectures chuckling to himself for no obvious reason and ultimately requires pastoral intervention. Apart from nicking other people’s theories and data, his ideas seem to consist of a metaphor from Plato, which he interprets with dismal literalism.

This metaphor is the birds thing that provides the title and, up to a point, the theme of the book. In the Theaetetus, Plato makes a point about how we can possess knowledge without having it actually in our consciousness by comparing it to owning an aviary of birds without having them actually in your hand. In Plato’s version there’s no doubt that there’s a man in the aviary who chooses the birds to catch; here I think the idea is more that the flocking and movement of the birds itself produces higher-level organisation analogous to conscious memory.

Yvonne is a pretty resolute sceptic about her own selfhood; she can’t see that she is anything beyond the chance neurochemical events which sweep through her brain. This might indeed explain her apparent passivity and the way she seems to drift through even the most alarming and hare-brained adventures, though if so it’s a salutary warning about the damaging potential of overdosing on materialism. Overall the book alludes to more issues than it really discusses, and gives us little side treats like a person whose existence turns out to be no more than a kind of narrative convention; perhaps it’s best approached as a potential thought provoker rather than the adumbration of a single settled theory; not necessarily a bad thing for a book to be.

Yvonne’s scepticism did cause me to realise that I was actually rather hazy on the subject; what is it that people who deny the self are actually denying, and are they all denying the same thing? There are actually quite a few options.

  • I think all self-sceptics want to deny the existence of the traditional immaterial soul, and for some that may really be about all. (To digress a bit, there are actually caverns below us at this point which have not been explored for thousands of years, if ever: if we were ancient Egyptians, with their complex ontology of multiple souls, we should have a large range of sceptical permutations available; denying the ba while affirming the khaibit, say. Our simpler culture, perhaps mercifully, does not offer us such a range of refinedly esoteric entities in which to disbelieve, but those of a philosophical temperament may be inclined to cast a regretful glance towards those profoundly obscure imaginary galleries.)
  • Some may want to deny any sense, or feeling, of self; like Hume they see only a bundle of sensations when they look inside themselves. I think there is arguably a quale of the self; but these people would not accept it.
  • Others, by contrast, would affirm that the sense of self is vivid, just not veridical. We think there’s a self, but there’s nothing actually there. There’s scope for an interesting discussion about what would have to be there in order to prove them wrong – or whether having the sense of self itself constitutes the self.
  • Some would say that there is indeed ‘something’ there; it just isn’t what we think it is. For example, there might indeed be a centre of experience, but an epiphenomenal one; a self who has no influence on events but is in reality just along for the ride.
  • Logically I suppose we could invert that to have a self that really did make the decisions, but was deluded about having any experiences. I don’t think that would be a popular option, though.
  • Some would make the self a purely social construct, a matter of legal and moral rights and privileges, a conception simply grafted on to an animal which in itself, or by itself, would lack it.
  • Some would deny only that the self provides a break in the natural chain of cause and effect. We are not really the origin of anything, they would say, and our impression of being a freely willing being is mistaken.
  • Some radical sceptics would deny that even the body has any particular selfhood; over time every part of it changes and to assert that I am the same self as the person of twenty years ago makes no sense.

As someone who, on the whole, prefers to look for a tenable account of the reality of the self, the richness of the sceptical repertoire makes me feel rather unimaginative.

Knowing

Carl Zimmer described some interesting research in a recent blog entry. It seems that people who are unable to recall any of the events of their past lives are still able to identify which of a list of words best describes them as people: although their explicit knowledge of their own autobiographies has disappeared, they still have self-knowledge in a different form. It is suggested that two different brain systems are involved. This research might possibly shed a chink of light on the debate about whether, and in what sense, we actually have selves, but it also raises the thorny question of different ways of knowing things. We often talk about knowing things as though knowledge was a straightforward phenomenon, but it actually covers a range of different abilities – look at the following examples.

  1. I know what the capital of Ecuador is.
  2. I know where the keys are.
  3. I know how to sign my name.
  4. I know that zebras don’t wear waistcoats.

The first example is the case in which an explicit fact has been memorised – possibly even a fixed formula (“The capital of Ecuador is Quito.”). This is perhaps the easiest form of knowledge to deal with in a computational way – we just have to ensure that the relevant string of characters (“Quito”) or the appropriate digits are stored in a suitable location. There’s relatively little mystery about how you can articulate this kind of knowledge, since it has probably been saved in an articulated form already: it was words on the way in, so it’s no surprise that we can provide words when it’s on the way out.
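
In sketch form – with the obvious caveat that a lookup table is a cartoon of whatever the brain actually does – the first case is just storage and retrieval of an articulated fact:

    capitals = {"Ecuador": "Quito"}   # the articulated fact, stored much as it arrived

    def recall_capital(country):
        return capitals.get(country)  # recall is simple lookup

    print(f"The capital of Ecuador is {recall_capital('Ecuador')}.")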

In the second case, things are slightly less clear. It’s unlikely, unless you have a really bad key-losing problem, that you have memorised an explicit description of the place where they are: however, if you need to produce such a description, you would normally have no particular difficulty in doing so. A reasonable assumption here might be that the relevant data on the position of the keys are still stored somewhere explicitly (on some sort of map, as co-ordinates, or perhaps more likely, as a set of instructions like those on pirates’ treasure maps, telling you how to get to the treasure/keys). The question of how you are able to translate this inner data into a verbal description when you need one is less easily answered, but then, the process of coming up with a description of anything is not exactly well understood, either.

The third case is a bit different. The importance of ‘knowing how’ as a form of knowledge was emphasised by Gilbert Ryle as part of his efforts to debunk the ‘Ghost in the Machine’. It could legitimately be argued that the difference between ‘knowing how’ and ‘knowing that’ is so great that it makes no sense to consider them together – but both do allow some stored knowledge to influence current behaviour, so there is at least that broad similarity. You sign your name without hesitation, but describing how to do it (unless you happen to be called ‘O’) is challenging. To describe the required series of upstrokes and downstrokes would require careful thought – you might even have to watch yourself signing and take notes. The relevant data must be in your brain or your hand somewhere, but you are hardly any better off when it comes to putting them into words than anyone else who happens to be watching. Perhaps they are stored in a different part of the brain, or are otherwise less accessible: but it seems likely that at least some of them are held in a form which just doesn’t translate into explicit terms. There may be a sequence of impulses recorded somewhere which, when sent down the nerves in your arm, results in a signature: but there need be no standard pattern in the sequence which symbolises ‘downstroke’ or anything else.
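
Here is a toy version of that contrast, with an invented ‘signature’ stored as raw displacement vectors: replaying the stored form is trivial, but nothing in it symbolises ‘downstroke’, so a verbal description has to be worked out after the fact – much as we might watch ourselves signing and take notes.

    signature = [(1, -3), (1, 3), (1, -2), (1, 2)]  # hypothetical (dx, dy) pen steps

    def sign(start=(0, 0)):
        """Replaying the stored sequence: execution without description."""
        x, y = start
        path = [(x, y)]
        for dx, dy in signature:
            x, y = x + dx, y + dy
            path.append((x, y))
        return path

    def describe():
        """Describing takes separate analysis: inspect the replay and take notes."""
        return ["downstroke" if dy < 0 else "upstroke" for _, dy in signature]

    print(sign())      # doing it is easy
    print(describe())  # saying how it is done means examining what was done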

The fourth case is the most difficult of all. Most of the things we know, we never think about or use. We never asked ourselves whether zebras wore waistcoats until Dennett proposed the example, but as soon as we heard the question, we knew the answer. This vast stock of common-sense knowledge (Searle refers to it, or something very like it, as ‘the Background’) is crucial to the way we deal with real life and work out what people are talking about: it’s the reason human beings don’t generally get floored by the ‘frame problem’ – unanticipated implications of every action – the way robots do. It surely cannot be that all this kind of knowledge is saved explicitly somewhere in the brain, however vast its storage capacity. In fact, there are good arguments to suggest that the amount of information involved is strictly infinite: we know zebras are normally less than twenty feet tall, normally less than twenty-one feet tall, and so on.

That last argument suggests a better explanation – perhaps key pieces of information are stored in a central encyclopaedia, and the more recondite conclusions worked out as necessary. After all, if we know that zebras are less than twenty feet tall, simple arithmetic will tell us that they are less than fifty, without the need to store that conclusion separately. The trouble then is that there simply is no general method of working out relevant conclusions from other facts: formal logic certainly isn’t up to the job. I’ve discussed this further elsewhere, but it seems likely to me that part of the problem is that, as with the third case, the information is probably not recorded in the brain in any explicit form. To look for an old-fashioned algorithm is probably, therefore, to set off up a blind alley.
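
In sketch form, with an assumed figure for the stored core fact, one stored proposition plus on-demand arithmetic covers the endless family of conclusions without storing any of them. (The snag, as just noted, is that outside easy arithmetical cases like this there is no general method for working out which conclusions are relevant.)

    MAX_ZEBRA_HEIGHT_FT = 6  # one stored core fact (a rough, assumed figure)

    def zebra_is_less_than(feet):
        return feet > MAX_ZEBRA_HEIGHT_FT  # the conclusion is computed, not stored

    for limit in (20, 21, 50):
        print(f"zebras are less than {limit} feet tall: {zebra_is_less_than(limit)}")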

What about the two forms of self-knowledge we started with? It looks as if we are dealing with a loss of the kind of knowledge covered by example 1 above, while type 2 is retained. If so, the loss of memory might be less disabling than it seems. The patients in question would not be able to tell you their address, but perhaps they might still be able to walk to the correct house “without thinking about it”? They might not be able to tell you their own name, even – but perhaps they could sign it and then read it.