Microsoft recently announced the first public beta preview of Skype Translate, a service which will provide immediate translation during voice calls. For the time being only Spanish/English is working, but we’re told that English/German and other languages are on the way. The approach used is complex. Deep Neural Networks apparently play a key role in the speech recognition. The actual translation ultimately relies on recognising bits of text which resemble those it already knows, the same basic principle applied in existing text translators such as Google Translate; but the system is also capable of recognising and removing ‘disfluencies’ – ums and ers, rephrasings, and so on – and apparently makes some use of syntactical models, so there is some highly sophisticated processing going on. It seems to do a reasonable job, though as always with this kind of thing a degree of scepticism is appropriate.
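The architecture described amounts to a pipeline: speech recognition, then disfluency removal, then text translation. Here is a minimal sketch of that shape – the function names, the tiny disfluency list and everything else are my own illustrative stand-ins, not Microsoft’s actual components or API.

```python
# Hypothetical sketch of a speech-to-speech translation pipeline of the kind
# described above: recognise speech, strip disfluencies, translate the text.
# All components are stand-ins, not the real Skype Translate internals.

import re

DISFLUENCIES = {"um", "uh", "er", "erm"}  # assumed examples only


def recognise_speech(audio: bytes) -> str:
    """Stand-in for a DNN-based speech recogniser returning a raw transcript."""
    raise NotImplementedError("plug in a real ASR system here")


def remove_disfluencies(transcript: str) -> str:
    """Drop filler words; a real system also handles rephrasings and restarts."""
    words = re.findall(r"[\w']+", transcript.lower())
    return " ".join(w for w in words if w not in DISFLUENCIES)


def translate_text(text: str, source: str = "es", target: str = "en") -> str:
    """Stand-in for a statistical/neural text translation model."""
    raise NotImplementedError("plug in a real MT system here")


def speech_to_speech(audio: bytes) -> str:
    return translate_text(remove_disfluencies(recognise_speech(audio)))
```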

Translating actual speech, with all its messy variability, is of course an amazing achievement, much more difficult than dealing with text (which itself is no walk in the park); and it’s remarkable indeed that it can be done so well without the machine making any serious attempt to deal with the meaning of the words it translates. Perhaps that’s a bit too bald: the software does take account of context and as I said it removes some meaningless bits, so arguably it is not ignoring meaning totally. But full-blown intentionality is completely absent.

This fits into a recent pattern in which barriers to AI are falling to approaches which skirt or avoid consciousness as we normally understand it, and all the intractable problems that go with it. It’s not exactly the triumph of brute force, but it does owe more to processing power and less to ingenuity than we might have expected. At some point, if this continues, we’re going to have to take seriously the possibility of our having, in the not-all-that-remote future, a machine which mimics human behaviour brilliantly without our ever having solved any of the philosophical problems. Such a robot might run on something like a revival of the frames or scripts of Marvin Minsky or Roger Schank, only this time with a depth and power behind it that would make the early attempts look like working with an abacus. The AI would, at its crudest, simply be recognising situations and looking up a good response, but it would have such a gigantic library of situations and it would be so subtle at customising the details that its behaviour would be indistinguishable from that of ordinary humans for all practical purposes. What would we say about such a robot (let’s call her Sophia – why not, since anthropomorphism seems inevitable)? I can see several options.
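For what it’s worth, the look-up-and-customise idea at its very crudest can be sketched in a few lines – a toy illustration only, with an invented three-entry ‘library’ standing in for the gigantic one imagined above.

```python
# Toy sketch of 'recognise the situation, look up a canned response, customise it'.
# Everything here is invented for illustration; the imagined robot would need a
# vast situation library and far subtler matching and customisation.

from difflib import get_close_matches

SITUATION_LIBRARY = {
    "someone greets you": "Hello {name}, nice to see you.",
    "someone asks the time": "It's about {time}, I think.",
    "someone looks upset": "Are you all right, {name}?",
}


def respond(situation: str, **details) -> str:
    """Find the closest known situation and fill in the contextual details."""
    match = get_close_matches(situation, list(SITUATION_LIBRARY), n=1)
    if not match:
        return "I'm not sure what to say."
    return SITUATION_LIBRARY[match[0]].format(**details)


print(respond("someone greets you", name="Alice"))  # Hello Alice, nice to see you.
```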

Option one. Sophia really is conscious, just like us. OK, we don’t really understand how we pulled it off, but it’s futile to argue about it when her performance provides everything we could possibly demand of consciousness and passes every test anyone can devise. We don’t argue that photographs are not depictions because they’re not executed in oil paint, so why would we argue that a consciousness created by other means is not the real thing? She achieved consciousness by a different route, and her brain doesn’t work like ours – but her mind does. In fact, it turns out we probably work more like her than we thought: all this talk of real intrinsic intentionality and magic meaningfulness turns out to be a systematic delusion; we’re really just running scripts ourselves!

Option two. Sophia is conscious, but not in the way we are. OK, the results are indistinguishable, but we just know that the methods are different, and so the process is not the same. Birds and bats both fly, but they don’t do it the same way. Sophia probably deserves the same moral rights and duties as us, though we need to be careful about that; but she could very well be a philosophical zombie who has no subjective experience. On the other hand, her mental life might have subjective qualities of its own, very different to ours but incommunicable.

Option three. She’s not conscious; we just know she isn’t, because we know how she works and we know that all her responses and behaviour come from simply picking canned sequences out of the cupboard. We’re deluding ourselves if we think otherwise. But she is the vivid image of a human being and an incredibly subtle and complex entity: she may not be that different from animals whose behaviour is largely instinctive. We cannot therefore simply treat her as a machine: she probably ought to have some kinds of rights: perhaps special robot rights. Since we can’t be absolutely certain that she does not experience real pain and other feelings in some form, and since she resembles us so much, it’s right to avoid cruelty both on the grounds of the precautionary principle and so as not to risk debasing our own moral instincts; if we got used to doling out bad treatment to robots who cried out with human voices, we might get used to doing it to flesh and blood people too.

Option four.  Sophia’s just an entertaining machine, not conscious at all; but that moral stuff is rubbish. It’s perfectly OK to treat her like a slave, to turn her off when we want, or put her through terrible ‘ordeals’ if it helps or amuses us. We know that inside her head the lights are off, no-one home: we might as well worry about dolls. You talk about debasing our moral instincts, but I don’t think treating puppets like people is a great way to go, morally. You surely wouldn’t switch trolleys to save even ten Sophias if it killed one human being: follow that out to its logical conclusion.

Option five. Sophia is a ghastly parody of human life and should be destroyed immediately. I’m not saying she’s actuated by demonic possession (although Satan is pretty resourceful), but she tempts us into diabolical errors about the unique nature of the human spirit.

No doubt there are other options; for me, at any rate, being obliged to choose one is a nightmare scenario. Merry Christmas!

Ted Honderich’s latest work Actual Consciousness is a massive volume. He has always been partial to advancing his argument through a comprehensive review (and rejection) of every other opinion on the subject in question. Here, that approach produces a hefty book which in practice is about the whole field of philosophy of consciousness. There is a useful video here at IAI of Ted grumpily failing to summarise the whole theory in the allotted time, and confessing, with the same alarming frankness that characterised his autobiography, to wanting to be as famous as Bach or Chomsky, and not thinking he was going to be. If you want to see the whole thing you’ll have to sign up (free); but they do have a number of good discussions of consciousness.

The theory Honderich is advancing is a further version of the externalism which we discussed a while ago; that for you to be conscious is in some sense for something to exist (or to be real, hence the ‘actual’ in Actual Consciousness). At first sight this thesis has always seemed opaque to the point of wilful obscurity, and the simplest readings seem to make it either vacuous (for you to be conscious is for a state of consciousness to exist) or just evidently wrong (for you to be conscious is for the object of your awareness to exist). He means – he must mean – something subtler than that, and a few more clues can only be welcome.

First though, we survey the alternatives. Honderich suggests (and few would disagree) that the study of consciousness has been vastly complicated by differing or inadequate definitions. This has led philosophers to talk past each other or work themselves into confusions. Above all, Honderich thinks virtually everyone has at some point fallen into circularity, smuggling into their definitions terms that already include consciousness in one form or another.

He sets out five leading ideas: these are not actually the five parts into which he would carve consciousness himself (he would analyse it into three: perceptual, cognitive and affective consciousness) but these are the ideas he feels we need to address. They are: qualia, ‘something it is like for a thing to be that thing’, subjectivity, intentionality, and phenomenality. More normally these days we divide the field in two initially, and at first glance four of these five look like the same thing from different angles. When there is ‘something it is like’, that’s the phenomenal aspect of experience as had by a subject and characterised by the presence of qualia. But let’s not be hasty.

Having reviewed briefly what various people have said about qualia, Honderich notes that one thing seems clear; that it is always conceived of as distinct from, and hence accompanied by, another form of consciousness. Some people certainly assert that qualia are the real essence of consciousness or at any rate of the interesting part of it; but it does seem to be true that no-one proposes conscious states that include qualia and nothing else. That in itself doesn’t amount to circularity, though.

The next leading idea is ‘something it is like’ to see red, or whatever. Nagel’s phrase is unhelpful but somehow powerfully persuasive. We all sort of know what it is getting at. Honderich notes that Nagel himself offered an improved version that leaves out the suggestion that a comparison is going on; to be conscious, in this version, is for there to be something that is how it is for you (to see red or whatever). What does this all really mean? Honderich suspects that it comes down to there being something it is like, or something that is how it is, for you to be conscious (of something red, e.g.), once again a case of circularity. I don’t really see it; it seems to me that Nagel offers an equation: consciousness is there being something it is like. Honderich pounces: Aha! But there being something it is like is being conscious! That just seems to be travelling back from the second term of the equation to the first, not showing that the second term requires or contains the first. I’m simplifying rather a lot, so perhaps I’ve missed something. But so far as I can see, while Honderich justly complains that the formula is uninformative, the only circularity is one he inserted himself.

Subjectivity for Honderich means the existence of a subject. The word, as he acknowledges, can often be used as more or less a synonym for one of the two senses already discussed: in fact I should say that that is the standard meaning. But it’s true that consciousness is tied up with the notion of an experiencing self or subject (and those who deny the existence of one are often sceptical about the other). Honderich suggests that it is implicit in the idea of a subject that the subject is conscious, and though we can raise quibbles over sleeping or anaesthetised subjects, he is surely on firmer ground in seeing circularity here. To define consciousness in terms of a subject is circular because to be a subject you have to be conscious. But nobody does that, or at least, no-one I can think of. It’s sort of accepted that you need to have your consciousness sorted out before you can have your conscious agent.

With intentionality we come on to something distinctly different; this is the quality of aboutness or directedness singled out by Brentano. Honderich bases his comments on Brentano’s account, which he quotes at full length. It’s only fair to note in passing that Brentano was not talking about consciousness; rather he asserted that intentionality was the mark of the mental; but there is obviously a connection and we might well try to argue that intentionality was constitutive of consciousness.

Honderich notes an intractable problem over the objects of intentionality; they don’t have to exist. We can think about imaginary or even absurd things just as easily as about real ones. But if we are not thinking about a real slipper when we think of Cinderella’s glass one, then surely we’re not really thinking about the actual one the dog is chewing in the corner, either; perhaps the real objects are just our ideas or images of the slipper, or whatever. If we don’t take that path, suggests Honderich, then this intentionality business is no great help; if we do suppose that thinking about things is thinking about a mental image, then we’re back with circularity because it would have to be a conscious image, wouldn’t it?

Would it? I’m not totally sure it would; wouldn’t the theory be that it becomes a conscious image when it’s an object of conscious thought, but not otherwise or in itself? Otherwise we seem to have a weird doubling up going on. But anyway, it’s too clear, in my opinion, that thinking about a thing is thinking about that thing, not thinking about an idea of it; we have to find some other way round the problem of non-existent objects of thought. So we’re left with the complaint that intentionality does not explain consciousness – and that’s true enough; it’s at least as much a part of the problem.

With phenomenality we’re back with a word that could be taken as meaning much the same as subjectivity, and referring to the same stuff as qualia or something it is like. Honderich draws on Ned Block’s much-cited distinction between access or a-consciousness and phenomenal or p-consciousness, and attacks David Chalmers for having said that qualia, subjectivity, phenomenality and so on are essentially different ways of talking about the same thing. I’m with Chalmers; yes, the different terms embody different ways of approaching the problem, but there’s little doubt that if we could crack one, translating the solution into terms of the other approaches would be relatively trivial. Oddly, instead of recapitulating the claimed important distinctions, which would seem the natural thing to do at this point, Honderich seems to argue that if Chalmers thinks all these things are the same thing, they must in fact all be examples of something more fundamental, in which case why doesn’t Chalmers talk about the fundamental thing?

If there are the grammatical or subtle differences between the terms for the phenomena, the things do make up ‘approximately the same class of phenomena’. What class is that? To speak differently, what are these things examples of ? In fact didn’t we have to have some idea of that in order to bring the examples together in the first place? What brings the different things together? What is this general fact of consciousness? It has to exist, doesn’t it? Chalmers, I’d say, has credit for bringing the things together, but he might have asked about the general fact, mightn’t he?

This is strange; to assert that a number of terms refer to the same thing is not necessarily to assert that they are all in fact yet another thing. My best guess is that Honderich wants to manoeuvre Chalmers into seeming circularity again, but if so I don’t think it comes off.

Honderich goes on to an extended review of the territory and what others have said, but I propose to cut to the chase. Cutting to the chase, by the way, is something Honderich himself is very bad at, or rather pathologically averse to. He has a style all his own, characterised by a constant backing away from saying anything directly; he prefers to mention a few things one might say in relation to this matter, some indeed that others have at times suggested he himself might not always have been committed to denial of, that is to say these considerations might be ones – not by any means to characterise exhaustively, but nevertheless to bring forward as previously hinted – perhaps to be felt to be most significantly indicated or at any rate we might choose, not yet to think, but to entertain the possibility, of considering as such. Sometimes you really want to kick him.

Anyway, to get to the point: Honderich is an externalist; he thinks your perception of x is something that happens out there where x is real and physical, not in your head. There is an extension to this to take care of those cases where we think about things that are imaginary or otherwise non-physical; in such cases the same thing is going on, it’s just that the objects perceived are representations in our mind. In a sense this is externalism simply redirected to objects that happen to be internal. Of course, how anything comes to be a mental representation is itself a non-trivial issue.

Honderich says that for you to be conscious of something is for something to be real, to exist. This puzzling or vacuous-seeming formula is underpinned by the eyebrow-raising idea of subjective physicality. This is like a kind of Philosopher’s Stone; it means that what we perceive can be both actual and physical in a perfectly normal way, yet subjective in the way required by consciousness. How can we possibly eat our cake this way and yet still have it? It’s kind of axiomatic that the actual qualities of physical things don’t depend on the observer (yes, I know in modern physics that’s a can of worms, but not one we need to open here), while subjective qualities absolutely do; my subjective impressions may be quite different to yours.

How is this trick to be pulled off?

The general answer to the question of what is actual with your perceptual consciousness, putting aside that in which it may issue immediately, is a part, piece or stage of a subjective physical world of several dependencies, out there in space, and nothing else whatever. Your being conscious now is exactly and nothing more than this severally-dependent fact external to you of a room’s existing…

It looks at first sight as if this talk of worlds may be the answer. The room exists subjectively for you and also physically, but in a world of its own, or of your own, which would explain how it can be different from the subjective experience of others; they have different worlds. This would be alarming, however, because it suggests a proliferation of worlds whose relationships would be problematic and whose ontology profligate. We’re talking some serious and bizarre metaphysics, of which there is no sign elsewhere.

I don’t think that is at all what Honderich has in mind. Instead I think we need to remember that he doesn’t mean by subjectivity what everyone else means. He just means there is a subject. So his description of consciousness comes down to this; that there is a real object of consciousness, whether in the world or in the brain, which is unproblematically physical; and there is a subject, also physical, who is conscious of the thing.

Is that it? It seems sort of underwhelming. But I fear that is the essence of it.  Helpfully, Honderich provides a table of his proposed structure:

[Honderich’s table summarising his proposed structure]

Yes, that seems to me to confirm the suggested interpretation.

So it kind of looks as if Honderich has used a confusingly non-standard definition and ended up with a theoretical position which honestly sheds little light on the real issue; yet these were the very problems he criticised in earlier approaches. I can’t deny that I have greatly simplified here and it might be that I missed the key somewhere in one of those many chapters – but frankly I’m not going back to look again.

The problem of qualia is in itself a very old one, but it is expressed in new terms. My impression is that the actual word ‘qualia’ only began to be widely used (as a hot new concept) in the 1970s. The question of whether the colours you experience in your mind are the same as the ones I experience in mine, on the other hand, goes back a long way. I’m not aware of any ancient discussions, though I should not be at all surprised to hear that there is one in, say, Sextus Empiricus (if you know one please mention it): I think the first serious philosophical exposition of the issue is Locke’s in the Essay Concerning Human Understanding:

“Neither would it carry any imputation of falsehood to our simple ideas, if by the different structure of our organs, it were so ordered, that the same object should produce in several men’s minds different ideas at the same time; e.g. If the idea, that a violet produced in one man’s mind by his eyes, were the same that a marigold produces in another man’s, and vice versa. For since this could never be known: because one man’s mind could not pass into another man’s body, to perceive, what appearances were produced by those organs; neither the ideas hereby, nor the names, would be at all confounded, or any falsehood be in either. For all things, that had the texture of a violet, producing constantly the idea, which he called blue, and those that had the texture of a marigold, producing constantly the idea, which he as constantly called yellow, whatever those appearances were in his mind; he would be able as regularly to distinguish things for his use by those appearances, and understand, and signify those distinctions, marked by the names blue and yellow, as if the appearances, or ideas in his mind, received from those two flowers, were exactly the same, with the ideas in other men’s minds.”

Interestingly, Locke chose colours which are (near enough) opposites on the spectrum; this inverted spectrum form of the case has been highly popular in recent decades.  It’s remarkable that Locke put the problem in this sophisticated form; he managed to leap to a twentieth-century outlook from a standing start, in a way. It’s also surprising that he got in so early: he was, after all, writing less than twenty years after the idea of the spectrum was first put forward by Isaac Newton. It’s not surprising that Locke should know about the spectrum; he was an enthusiastic supporter of Newton’s ideas, and somewhat distressed by his own inability to follow them in the original. Newton, no courter of popularity, deliberately expressed his theories in terms that were hard for the layman, and scientifically speaking, that’s what Locke was. Alas, it seems the gap between science and philosophy was already apparent even before science had properly achieved a separate existence: Newton would still have called himself a natural philosopher, I think, not a scientist.

It’s hard to be completely sure that Locke did deliberately pick colours that were opposite on the spectrum – he doesn’t say so, or call attention to their opposition (there might even be some room for debate about whether ‘blue’ and ‘yellow’ are really opposite) – but it does seem at least that he felt that strongly contrasting colours provided a good example, and in that respect at least he anticipated many future discussions. The reason so many modern theorists like the idea is that they believe a switch of non-opposite colour qualia would be detectable, because the spectrum would no longer be coherent, while inverting the whole thing preserves all the relationships intact and so leaves the change undetectable. Myself, I think this argument is a mistake, inadvertently transferring to qualia the spectral structure which actually belongs to the objective counterparts of colour qualia. The qualia themselves have to be completely indistinguishable, so it doesn’t matter whether we replace yellow qualia with violet or orange ones, or for that matter, with the quale of the smell of violets.
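For what it’s worth, the structural intuition the modern theorists rely on (the one I’m disputing) can be made concrete with a toy model: treat hues as points on a colour circle and compare a wholesale inversion with a swap of just two hues. The numbers are arbitrary and nothing here is meant as a claim about real qualia.

```python
# Toy illustration of the structural argument: inverting the whole colour circle
# preserves every distance between hues, whereas swapping only two hues does not.
# Hues are arbitrary degrees on a 0-359 circle; no claim about real qualia.

def circular_distance(a: int, b: int) -> int:
    d = abs(a - b) % 360
    return min(d, 360 - d)

hues = {"red": 0, "yellow": 60, "green": 120, "blue": 240}

invert = lambda h: (h + 180) % 360             # invert the whole circle
swap = lambda h: {60: 240, 240: 60}.get(h, h)  # swap yellow and blue only

for a, b in [("red", "yellow"), ("yellow", "blue"), ("red", "green")]:
    original = circular_distance(hues[a], hues[b])
    inverted = circular_distance(invert(hues[a]), invert(hues[b]))
    swapped = circular_distance(swap(hues[a]), swap(hues[b]))
    print(f"{a}-{b}: original {original}, inverted {inverted}, swapped {swapped}")
# Inversion leaves every distance unchanged; the swap distorts the red-yellow one.
```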

Strangely enough, though, Locke was not really interested in the problem; on the contrary, he set it out only because he was seeking to dismiss it as an irrelevance. His aim, in context, was to argue that simple perceptions cannot be wrong, and the possibility of inconsistent colour judgements – one person seeing blue where another saw yellow – seemed to provide a potential counter-argument which he needed to eliminate. If one person sees red where another sees green, surely at least one of them must be wrong? Locke’s strategy was to admit that different people might have different ideas for the same percept (nowadays we would probably refer to these subjective ideas of percepts as qualia), but to argue that it doesn’t matter because they will always agree about which colour is, in fact, yellow, so it can’t properly be said that their ideas are wrong. Locke, we can say, was implicitly arguing that qualia are not worth worrying about, even for philosophical purposes.

This ‘so what?’ line of thought is still perfectly tenable. We could argue that two people looking at the same rose will not only agree that it is red, but also concur that they are both experiencing red qualia; so the fact that inwardly their experiences might differ is literally of no significance – obviously of no practical significance, but arguably also metaphysically nugatory. I don’t know of anyone who espouses this disengaged kind of scepticism, though; more normally people who think qualia don’t matter go on to argue that they don’t exist, either. Perhaps the importance we attach to the issue is a sign of how our attitudes to consciousness have changed: it was itself a matter of no great importance or interest to Locke. I believe consciousness acquired new importance with the advent of serious computers, when it became necessary to find some quality with which we could differentiate ourselves from machines. Subjective experience fitted the bill nicely.

 

Personhood Week, at National Geographic, is a nice set of short pieces briefly touring the crucial but controversial issue of what constitutes a person.

You won’t be too surprised to hear that in my view personhood is really all about consciousness. The core concept for me is that a person is a source of intentions – intentions in the ordinary everyday sense rather than in the fancy philosophical sense of intentionality (though that too).  A person is an actual or potential agent, an entity that seeks to bring about deliberate outcomes. There seems to be a bit of a spectrum here; at the lower level it looks as if some animals have thoughtful and intentional behaviour of the kind that would qualify them for a kind of entry-level personhood. At its most explicit, personhood implies the ability to articulate complicated contracts and undertake sophisticated responsibilities: this is near enough the legal conception. The law, of course, extends the idea of a person beyond mere human beings, allowing a form of personhood to corporate entities, which are able to make binding agreements, own property, and even suffer criminal liability. Legal persons of this kind are obviously not ‘real’ ones in some sense, and I think the distinction corresponds with the philosophical distinction between original (or intrinsic, if we’re bold) and derived intentionality. The latter distinction comes into play mainly when dealing with meaning. Books and pictures are about things, they have meanings and therefore intentionality, but their meaningfulness is derived: it comes only from the intentions of the people who interpret them, whether their creators or their ‘audience’.  My thoughts, by contrast, really just mean things, all on their own and however anyone interprets them: their intentionality is original or intrinsic.

So, at least, most people would say (though others would energetically contest that description). In a similar way my personhood is real or intrinsic: I just am a person; whereas the First Central Bank of Ruritania has legal personhood only because we have all agreed to treat it that way. Nevertheless, the personhood of the Ruritanian Bank is real (hypothetically, anyway; I know Ruritania does not exist – work with me on this), unlike that of, say, the car Basil Fawlty thrashed with a stick, which is merely imaginary and not legally enforceable.

Some, I said, would contest that picture: they might argue that ‘a source of intentions’ makes no sense because ‘people’ are not really sources of anything; that we are all part of the universal causal matrix and nothing comes of nothing. Really, they would say, our own intentions are just the same as those of Banca Prima Centrale Ruritaniae; it’s just that ours are more complex and reflexive – but the fact that we’re deeming ourselves to be people doesn’t make it any the less a matter of deeming. I don’t think that’s quite right – just because intentions don’t feature in physics doesn’t mean they aren’t rational and definable entities – but in any case it surely isn’t a hit against my definition of personhood; it just means there aren’t really any people.

Wait a minute, though. Suppose Mr X suffers a terrible brain injury which leaves him incapable of forming any intentions (whether this is actually possible is an interesting question: there are some examples of people with problems that seem like this; but let’s just help ourselves to the hypothesis for the time being). He is otherwise fine: he does what he’s told and if supervised can lead a relatively normal-seeming life. He retains all his memories, he can feel normal sensations, he can report what he’s experienced, he just never plans or wants anything. Would such a man no longer be a person?

I think we are reluctant to say so because we feel that, contrary to what I suggested above, agency isn’t really necessary, only conscious experience. We might have to say that Mr X loses his legal personhood in some senses; we might no longer hold him responsible or accept his signature as binding, rather in the way that we would do for a young child: but he would surely retain the right to be treated decently, and to kill or injure him would be the same crime as if committed against anyone else. Are we tempted to say that there are really two grades of personhood that happen to coincide in human beings: a kind of ‘Easy Problem’ agent personhood on the one hand and a ‘Hard Problem’ patient personhood on the other? I’m tempted, but the consequences look severely unattractive. Two different criteria for personhood would imply that I’m a person in two different ways simultaneously, but if personhood is anything, it ought to be single, shouldn’t it? Intuitively and introspectively it seems that way. I’d feel a lot happier if I could convince myself that the two criteria cannot be separated, that Mr X is not really possible.

What about Robot X? Robot X has no intentions of his own and he also has no feelings. He can take in data, but his sensory system is pretty simple and we can be pretty sure that we haven’t accidentally created a qualia-experiencing machine. He has no desires of his own, not even a wish to serve, or avoid harming human beings, or anything like that. Left to himself he remains stationary indefinitely, but given instructions he does what he’s told: and if spoken to, he passes the Turing Test with flying colours. In fact, if we ask him to sit down and talk to us, he is more than capable of debating his own personhood, showing intelligence, insight, and understanding at approximately human levels. Is he a person? Would we hesitate over switching him off or sending him to the junk yard?

Perhaps I’m cheating. Robot X can talk to us intelligently, which implies that he can deal with meanings. If he can deal with meanings, he must have intentionality, and if he has that perhaps he must, contrary to what I said, be able to form intentions after all – so perhaps the conditions I stipulated aren’t possible after all? And then, how does he generate intentions, as a matter of fact? I don’t know, but on one theory intentionality is rooted in desires or biological drives. The experience of hunger is just primally about food, and from that kind of primitive aboutness all the fancier kinds are built up. Notice that it’s the experience of hunger, so arguably if you had no feelings you couldn’t get started on intentionality either! If all that is right, neither Robot X nor Mr X is really as feasible as they might seem: but it still seems a bit worrying to me.

An article in the Chronicle of Higher Education (via the always-excellent Mind Hacks) argues cogently that as a new torrent of data about the brain looms, we need to ensure that it is balanced by a corresponding development in theory. That must surely be right: but I wonder whether the torrent of new information is going to bring about another change in paradigm, as the advent of computers in the twentieth century surely did?

We have mentioned before the two giant projects which aim to map and even simulate the neural structure of the brain, one in America, one in Europe. Other projects elsewhere and steady advances in technology seem to indicate that the progress of empirical neuroscience, already impressive, is likely to accelerate massively in coming years.

The paper points out that at present, in spite of enormous advances, we still know relatively little about the varied types of neurons and what they do; and much of what we think we do know is vague, tentative, and possibly misleading. Soon, however, ‘there will be exabytes (billions of gigabytes) of data, detailing what vast numbers of neurons do, in real time’.

The authors rightly suggest that data alone is no good without theoretical insights: they fear that at present there may be structural issues which lead to pure experimental work being funded while theory, in spite of being cheaper, is neglected or has to tag along as best it can. The study of the mind is an exceptionally interdisciplinary business, and they justifiably say research needs to welcome ‘mathematicians, engineers, computer scientists, cognitive psychologists, and anthropologists into the fold’. No philosophers in the list, I notice, although the authors quote Ned Block approvingly. (Certainly no novelists, although if we’re studying consciousness the greatest corpus of exploratory material is arguably in literature rather than science. Perhaps that’s asking a bit too much at this stage: grants are not going to be given to allow neurologists to read Henry as well as William James, amusing though that might be.)

I wonder if we’re about to see a big sea change; a Third Wave? There’s no doubt in my mind that the arrival of practical computers in the twentieth century had a vast intellectual impact. Until then philosophy of mind had not paid all that much attention to consciousness. Free Will, of course, had been debated for centuries, and personal identity was also a regular topic; but consciousness per se and qualia in particular did not seem to be that important until – I think – the seventies or eighties when a wide range of people began to have actual experience of computers. Locke was perhaps the first person to set out a version of the inverted spectrum argument, in which the blue in your mind is the same as the yellow in mine, and vice versa; but far from its being a key issue he mentions it only to dismiss it: we all call the same real world colours by the same names, so it’s a matter of no importance. Qualia? Of no philosophical interest.

I think the thing is that until computers actually appeared it was easy to assume, like Leibniz, that they could only be like mills: turning wheels, moving parts, nothing there that resembles a mind. When people could actually see a computer producing its results, they realised that there was actually the same kind of incomprehensible spookiness about it as there was in the case of human thought; maybe not exactly the same mystery, but a pseudo-magic quality far above the readily-comprehensible functioning of a mill. As a result, human thought no longer looked so unique and we needed something to stand in as the criterion which separated machines from people. Our concept of consciousness got reshaped and promoted to play that role, and a Second Wave of thought about the mind rolled in, making qualia and anything else that seemed uniquely human of special concern.

That wave included another change, though, more subtle but very important. In the past, the answer to questions about the mind had clearly been a matter of philosophy, or psychology; at any rate an academic issue. We were looking for a heavy tome containing a theory. Once computers came along, it turned out that we might be looking for a robot instead. The issues became a matter of technology, not pure theory. The unexpected result was that new issues revealed themselves and came to the fore. The descriptive theories of the past were all very well, but now we realised that if we wanted to make a conscious machine, they didn’t offer much help. A good example appears in Dan Dennett’s paper on cognitive wheels, which sets out a version of the Frame Problem. Dennett describes the problem, and then points out that although it is a problem for robots, it’s just as mysterious for human cognition; actually a deep problem about the human mind which had never been discussed; it’s just that until we tried to build robots we never noticed it. Most philosophical theories still have this quality, I’m afraid, even Dennett’s: OK, so I’m here with my soldering iron or my keyboard: how do I make a machine that adopts the intentional stance? No clue.

For the last sixty years or so I should say that the project of artificial intelligence has set the agenda and provided new illumination in this kind of way. Now it may be that neurology is at last about to inherit the throne.  If so, what new transformations can we expect? First I would think that the old-fashioned computational robots are likely to fall back further and that simulations, probably using neural network approaches, are likely to come to the fore. Grand Union theories, which provide coherent accounts from genetics through neurology to behaviour, are going to become more common, and build a bridgehead for evolutionary theories to make more of an impact on ideas about consciousness.  However, a lot of things we thought we knew about neurons are going to turn out to be wrong, and there will be new things we never spotted that will change the way we think about the brain. I would place a small bet that the idea of the connectome will look dusty and irrelevant within a few years, and that it will turn out that neurons don’t work quite the way we thought.

Above all though, the tide will surely turn for consciousness. Since about 1950 the game has been about showing what, if anything, was different about human beings; why they were not just machines (or why they were), and what was unique about human consciousness. In the coming decade I think it will all be about how consciousness is really the same as many other mental processes. Consciousness may begin to seem less important, or at any rate it may increasingly be seen as on a continuum with the brain activity of other animals; really just a special case of the perfectly normal faculty of… Well, I don’t actually know what, but I look forward to finding out.

Scott has a nice discussion of our post-intentional future (or really our non-intentional present, if you like) here on Scientia Salon. He quotes Fodor saying that the loss of ‘common-sense intentional psychology’ would be the greatest intellectual catastrophe ever: hard to disagree, yet that seems to be just what faces us if we fully embrace materialism about the brain and its consequences. Scott, of course, has been exploring this territory for some time, both with his Blind Brain Theory and his unforgettable novel Neuropath; a tough read, not because the writing is bad but because it’s all too vividly good.

Why do we suppose that human beings uniquely stand outside the basic account of physics, with real agency, free will, intentions and all the rest of it? Surely we just know that we do have intentions? We can be wrong about what’s in the world; that table may be an illusion; but our intentions are directly present to our minds in a way that means we can’t be wrong about them – aren’t they?

That kind of privileged access is what Scott questions. Cast your mind back, he says, to the days before philosophy of mind clouded your horizons, when we all lived the unexamined life. Back to Square One, as it were: did your ignorance of your own mental processes trouble you then? No: there was no obvious gaping hole in our mental lives;  we’re not bothered by things we’re not aware of. Alas,  we may think we’ve got a more sophisticated grasp of our cognitive life these days, but in fact the same problem remains. There’s still no good reason to think we enjoy an epistemic privilege in respect of our own version of our minds.

Of course, our understanding of intentions works in practice. All that really gets us, though, is that it seems to be a viable heuristic. We don’t actually have the underlying causal account we need to justify it; all we do is apply our intentional cognition to intentional cognition…

it can never tell us what cognition is simply because solving that problem requires the very information intentional cognition has evolved to do without.

Maybe then, we should turn aside from philosophy and hope that cognitive science will restore to us what physical science seems to take away? Alas, it turns out that according to cognitive science our idea of ourselves is badly out of kilter, the product of a mixed-up bunch of confabulation, misremembering, and chronically limited awareness. We don’t make decisions, we observe them, our reasons are not the ones we recognise, and our awareness of our own mental processes is narrow and error-filled.

That last part about the testimony of science is hard to disagree with; my experience has been that the more one reads about recent research the worse our self-knowledge seems to get.

If it’s really that bad, what would a post-intentional world look like? Well, probably like nothing really, because without our intentional thought we’d presumably have an outlook like that of dogs, and dogs don’t have any view of the mind. Thinking like dogs, of course, has a long and respectable philosophical pedigree going back to the original Cynics, whose name implies a dog-level outlook. Diogenes himself did his best to lead a doggish, pre-intentional life, living rough, splendidly telling Alexander the Great to fuck off and, less splendidly, masturbating in public (‘Hey, I wish I could cure hunger too just by rubbing my stomach’). Let’s hope that’s not where we’re heading.

However, that does sort of indicate the first point we might offer. Even Diogenes couldn’t really live like a dog: he couldn’t resist the chance to make Plato look a fool, or hold back when a good zinger came to mind. We don’t really cling to our intentional thoughts because we believe ourselves to have privileged access (though we may well believe that); we cling to them because believing we own those thoughts in some sense is just the precondition of addressing the issue at all, or perhaps even of having any articulate thoughts about anything. How could we stop? Some kind of spontaneous self-induced dissociative syndrome? Intensive meditation? There isn’t really any option but to go on thinking of our selves and our thoughts in more or less the way we do, even if we conclude that we have no real warrant for doing so.

Secondly, we might suggest that although our thoughts about our own cognition are not veridical, that doesn’t mean our thoughts or our cognition don’t exist. What they say about the contents of our mind is wrong perhaps, but what they imply about there being contents (inscrutable as they may be) can still be right. We don’t have to be able to think correctly about what we’re thinking in order to think. False ideas about our thoughts are still embodied in thoughts of some kind.

Is ‘Keep Calm and Carry On’ the best we can do?

 

 

Petros Gelepithis has A Novel View of Consciousness in the International Journal of Machine Consciousness (alas, I can’t find a freely accessible version). Computers, as such, can’t be conscious, he thinks, but robots can; however, proper robot consciousness will necessarily be very unlike human consciousness in a way that implies some barriers to understanding.

Gelepithis draws on the theory of mind he developed in earlier papers, his theory of noèmona species. (I believe he uses the word noèmona mainly to avoid the varied and potentially confusing implications that attach to mind-related vocabulary in English.) It’s not really possible to do justice to the theory here, but it is briefly described in the following set of definitions, an edited version of the ones Gelepithis gives in the paper.

Definition 1. For a human H, a neural formation N is a structure of interacting sub-cellular components (synapses, glial structures, etc) across nerve cells able to influence the survival or reproduction of H.

Definition 2. For a human, H, a neural formation is meaningful (symbol Nm), if and only if it is an N that influences the attention of that H.

Definition 3. The meaning of a novel stimulus in context (Sc), for the human H at time t, is whatever Nm is created by the interaction of Sc and H.

Definition 4. The meaning of a previously encountered Sc, for H is the prevailed Np of Np

Definition 5. H is conscious of an external Sc if and only if, there are Nm structures that correspond to Sc and these structures are activated by H’s attention at that time.

Definition 6. H is conscious of an internal Sc if and only if the Nm structures identified with the internal Sc are activated by H’s attention at that time.

Definition 7. H is reflectively conscious of an internal Sc if and only if the Nm structures identified with the internal Sc are activated by H’s attention and they have already been modified by H’s thinking processes activated by primary consciousness at least once.
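Read crudely and computationally – which, to be fair, is not how Gelepithis intends them – Definitions 2 and 5 might be transcribed along the following lines. This is my own toy paraphrase, invented purely to make the structure of the definitions explicit, not Gelepithis’ formalism.

```python
# Toy paraphrase of Definitions 2 and 5 above: a neural formation is 'meaningful'
# (an Nm) if it influences H's attention, and H is conscious of a stimulus-in-
# context Sc when Nm structures corresponding to Sc are currently activated by
# H's attention. Purely illustrative; not Gelepithis' own formalism.

from dataclasses import dataclass, field
from typing import List


@dataclass
class NeuralFormation:
    corresponds_to: str           # the stimulus-in-context (Sc) it represents
    influences_attention: bool    # Definition 2: what makes it an Nm
    activated_by_attention: bool = False


@dataclass
class Human:
    formations: List[NeuralFormation] = field(default_factory=list)

    def conscious_of(self, sc: str) -> bool:
        """Definition 5: conscious of Sc iff some Nm for Sc is attention-activated."""
        return any(
            f.corresponds_to == sc
            and f.influences_attention
            and f.activated_by_attention
            for f in self.formations
        )
```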

For Gelepithis consciousness is not an abstraction, of the kind that can be handled satisfactorily by formal and computational systems. Instead it is rooted in biology in a way that very broadly recalls Ruth Millikan’s views. It’s about attention and how it is directed, but meaning comes out of the experience and recollection of events related to evolutionary survival.

For him this implies a strong distinction between four different kinds of consciousness; animal consciousness, human consciousness, machine consciousness and robot consciousness. For machines, running a formal system, the primitives and the meanings are simply inserted by the human designer; with robots it may be different. Through, as I take it, living a simple robot life they may, if suitably endowed, gradually develop their own primitives and meanings and so attain their own form of consciousness. But there’s a snag…

Robots may be able to develop their own robot primitives and subsequently develop robot understanding. But no robot can ever understand human meanings; they can only interact successfully with humans on the basis of processing whatever human-based primitives and other notions were given…

Different robot experience gives rise to a different form of consciousness. They may also develop free will. Human beings act freely when their Acquired Belief and Knowledge (ABK) over-rides environmental and inherited influences in determining their behaviour; robots can do the same if they acquire an Own Robot Cognitive Architecture, the relevant counterpart. However, again…

A future possible conscious robotic species will not be able to communicate, except on exclusively formal bases, with the then Homo species.

‘then Homo’ because Gelepithis thinks it’s possible that human predecessors to Homo Sapiens would also have had distinct forms of consciousness (and presumably would have suffered similar communication issues).

Now we all have slightly different experiences and heritage, so Gelepithis’ views might imply that each of our consciousnesses is different. I suppose he believes that intra-species commonality is sufficient to make those differences relatively unimportant, but there should still be some small variation, which is an intriguing thought.

As an empirical matter, we actually manage to communicate rather well with some other species. Dogs don’t have our special language abilities and they don’t share our lineage or experiences to any great degree; yet very good practical understandings are often in place. Perhaps it would be worse with robots, who would not be products of evolution, would not eat or reproduce, and so on. Yet it seems strange to think that as a result their actual consciousness would be radically different?

Gelepithis’ system is based on attention, and robots would surely have a version of that; robot bodies would no doubt be very different from human ones, but surely the basics of proprioception, locomotion, manipulation and motivation would have to have some commonality?

I’m inclined to think we need to draw a further distinction here between the form and content of consciousness. It’s likely that robot consciousness would function differently from ours in certain ways: it might run faster, it might have access to superior memory, it might, who knows, be multi-threaded. Those would all be significant differences which might well impede communication. The robot’s basic drives might be very different from ours: uninterested in food, sex, and possibly even in survival, it might speak lyrically of the joys of electricity which must remain ever hidden from human beings. However, the basic contents of its mind would surely be of the same kind as the contents of our consciousness (hallo, yes, no, gimme, come here, go away) and expressible in the same languages?

Earlier this year Tononi’s Integrated Information Theory (IIT) gained a prestigious supporter in Max Tegmark, professor of Physics at MIT. The boost for the theory came not just from Tegmark’s prestige, however; there was also a suggestion that the IIT dovetailed neatly with some deep problems of physics, providing a possible solution and the kind of bridge between neuroscience, physics and consciousness that we could hardly have dared to hope for.

Tegmark’s paper presents the idea rather strangely, suggesting that consciousness might be another state of matter like the states of being a gas, a liquid, or a solid. That surely can’t be true in any simple literal sense because those particular states are normally considered to be mutually exclusive: becoming a gas means ceasing to be a liquid. If consciousness were another member of that exclusive set it would mean that becoming conscious involved ceasing to be solid (or liquid, or gas), which is strange indeed. Moreover Tegmark goes on to name the new state ‘perceptronium’ as if it were a new element. He clearly means something slightly different, although the misleading claim perhaps garners him sensational headlines which wouldn’t be available if he were merely saying that consciousness arose from certain kinds of subtle informational organisation, which is closer to what he really means.

A better analogy might be the many different forms carbon can take according to the arrangement of its atoms: graphite, diamond, charcoal, graphene, and so on; it can have quite different physical properties without ceasing to be carbon. Tegmark is drawing on the idea of computronium proposed by Toffoli and Margolus. Computronium is a hypothetical substance whose atoms are arranged in such a way that it consists of many tiny modules capable of performing computations.  There is, I think, a bit of a hierarchy going on here: we start by thinking about the ability of substances to contain information; the ability of a particular atomic matrix to encode binary information is a relatively rigorous and unproblematic idea in information theory. Computronium is a big step up from that: we’re no longer thinking about a material’s ability to encode binary digits, but the far more complex functional property of adequately instantiating a universal Turing machine. There are an awful lot of problems packed into that ‘adequately’.

The leap from information to computation is as nothing, however, compared to the leap apparently required to go from computronium to perceptronium. Perceptronium embodies the property of consciousness, which may not be computational at all and of which there is no agreed definition. To say that raises a few problems is rather an understatement.

Aha! But this is surely where the IIT comes in. If Tononi is right, then there is in fact a hard-edged definition of consciousness available: it’s simply integrated information, and we can even say that the quantity required is Phi. We can detect it and measure it and if we do, perceptronium becomes mathematically tractable and clearly defined. I suppose if we were curmudgeons we might say that this is actually a hit against the IIT: if it makes something as absurd as perceptronium a possibility, there must be something pretty wrong with it. We’re surely not that curmudgeonly, but there is something oddly non-dynamic here. We think of consciousness, surely, as a process, a  function: but it seems we might integrate quite a lot of information and simply have it sit there as perceptronium in crystalline stillness; the theory says it would be conscious, but it wouldn’t do anything.  We could get round that by embracing the possibility of static conscious states, like one frame out of the movie film of experience; but Tegmark, if I understand him right, adds another requirement for consciousness: autonomy, which requires both dynamics and independence; so there has to be active information processing, and it has to be isolated from outside influence, much the way we typically think of computation.

The really exciting part, however,  is the potential linkage with deep cosmological problems – in particular the quantum factorisation problem. This is way beyond my understanding, and the pages of equations Tegmark offers are no help, but the gist appears to be that  quantum mechanics offers us a range of possible universes.  If we want to get ‘physics from scratch’, all we have to work with is, in Tegmark’s words,

two Hermitian matrices, the density matrix ρ encoding the state of our world and the Hamiltonian H determining its time-evolution…

Please don’t ask me to explain; the point is that these things don’t pin down a single universe; there are an infinite number of acceptable solutions to the equations. If we want to know why we’ve got the universe we have – and in particular why we’ve got classical physics, more or less, and a world with an object hierarchy – we need something more. Very briefly, I take Tegmark’s suggestion to be that consciousness, with its property of autonomy, tends naturally to pick out versions of the universe in which there are similarly integrated and independent entities – in other words the kind of object-hierarchical world we do in fact see around us. To put it another way and rather baldly, the universe looks like this because it’s the only kind of universe which is compatible with the existence of conscious entities capable of perceiving it.

That’s some pretty neat footwork, although frankly I have to let Tegmark take the steering wheel through the physics and in at least one place I felt a little nervous about his driving. It’s not a key point, but consider this passage:

Indeed, Penrose and others have speculated that gravity is crucial for a proper understanding of quantum mechanics even on small scales relevant to brains and laboratory experiments, and that it causes non-unitary wavefunction collapse. Yet the Occam’s razor approach is clearly the commonly held view that neither relativistic, gravitational nor non-unitary effects are central to understanding consciousness or how conscious observers perceive their immediate surroundings: astronauts appear to still perceive themselves in a semi-classical 3D space even when they are effectively in a zero-gravity environment, seemingly independently of relativistic effects, Planck-scale spacetime fluctuations, black hole evaporation, cosmic expansion of astronomically distant regions, etc

Yeah… no. It’s not really possible that a professor of physics at MIT thinks that astronauts float around their capsules because the force of gravity is literally absent, is it? That kind of  ‘zero g’ is just an effect of being in orbit. Penrose definitely wasn’t talking about the gravitational effects of the Earth, by the way; he explicitly suggests an imaginary location at the centre of the Earth so that they can be ruled out. But I must surely be misunderstanding.

So far as consciousness is concerned, the appeal of Tegmark’s views will naturally be tied to whether one finds the IIT attractive, though they surely add a bit of weight to that idea. So far as quantum factorisation is concerned, I think he could have his result without the IIT if he wanted: although the IIT makes it particularly neat, it’s more the concept of autonomy he relies on, and that would very likely still be available even if our view of consciousness were ultimately somewhat different. The linkage with cosmological metaphysics is certainly appealing, essentially a sensible version of the Anthropic Principle which Stephen Hawking for one has been prepared to invoke in a much less attractive form.

Yes: I feel pretty sure that anyone reading this is indeed conscious. However, the NYT recently ran a short piece from Michael S. A. Graziano which apparently questioned it. A fuller account of his thinking is in this paper from 2011; the same ideas were developed at greater length in his book Consciousness and the Social Brain.

I think the startling headline on the NYT piece misrepresents Graziano somewhat. The core of his theory is that awareness is in some sense a delusion, the underlying reality being simply attention. We have ways of recognising the attention of other organisms, and what it is fixed on (the practical value of that skill in environments where human beings may be either hunters or hunted is obvious): awareness is just our garbled version of attention. He offers the analogy of colour. The reality out there is different wavelengths of light; colour, our version of that, is a slightly messed-up, neatened, largely artificial construct which is nevertheless very vivid to us.

I don’t think Graziano is even denying that awareness exists in some sense; as a phenomenon of some kind it surely does. What he means is rather that it isn’t veridical: what it tells us about itself, and what it tells us about attention, isn’t really true. As he acknowledges in the paper, there are labelling issues here, and I believe it would be possible to agree with the substance of what he says while recasting it in terms that look superficially much more conventional.

Another labelling issue may lurk around the concept of attention. On some accounts, it actually presupposes consciousness: to direct one’s attention towards something is precisely to bring it to the centre of one’s consciousness. That clearly isn’t what Graziano means: he has in mind something much more basic. Attention, for him, is something simple like having your sensory organs locked on to a particular target. This needs to be clear and unambiguous, because otherwise we immediately face problems over having to concede that cameras or other simple machines are capable of attention; but I’m happy to concede that we could probably put together some kind of criterion, perhaps neurological, that would fit the bill well enough and give Graziano the unproblematic materialist conception of attention he needs.

All that looks reasonably OK as applied to other people, but Graziano wants the same system to supply our own mistaken impression of awareness. Just as we track the attention of others with the false surrogate of awareness, we pick up our own attentive states and make the same kind of mistake. This seems odd: when I sense my own awareness of something, it doesn’t feel like a deduction I’ve made from objective evidence about my own behaviour; I just sense it. I think Graziano actually wants it to be like that for other people too. He isn’t talking about rational, Sherlock Holmes-style reasoning about the awareness of other people; he has in mind something deep, old and lizard-brained, like the sense of somebody there that makes the hairs rise on the back of your neck and your eyes saccade quickly towards the presumed person.

That is quite a useful insight, because what Graziano is concerned to deny is the reality of subjective experience, of qualia, in a word. To do so he needs to be able to explain why awareness seems so special when the reality is nothing more than information processing. I think this remains a weak spot in the theory, but the idea that it comes from a very basic system whose whole function is to generate a feeling of ‘something there’ helps quite a bit, and is at least moderately compatible with my own intuitions and introspections. What Graziano really relies on is the suggestion that awareness is a second-order matter: it’s a cognitive state about other cognitive states, something we attribute to ourselves rather than, as it seems to be, something directly about the real world. It just happens to be a somewhat mistaken cognitive state.
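
For what it’s worth, here is a deliberately crude toy of my own (not Graziano’s model; the names and fields are invented purely for illustration) showing the shape of that second-order claim: the system has a rich first-order attentional state, but all it can report about itself is a simplified, slightly misleading summary.

```python
# A toy illustration (mine, not Graziano's): a rich first-order attentional
# state, and a lossy second-order 'schema' the system builds about it.

from dataclasses import dataclass

@dataclass
class AttentionState:
    target: str          # what the senses are locked on to
    gain: float          # how strongly processing is biased towards it
    mechanism: str       # e.g. competition among neural signals

def awareness_schema(state: AttentionState) -> dict:
    """Second-order description the system gives of its own attention.
    It keeps the useful bit (what is attended) and replaces the mechanics
    with a vague, non-mechanical gloss -- the 'mistaken' part."""
    return {
        "I am aware of": state.target,
        "what it is": "a private, inner experience",  # not what is really going on
    }

if __name__ == "__main__":
    s = AttentionState(target="the coffee cup", gain=0.9,
                       mechanism="competition among neural signals")
    print(awareness_schema(s))
```

The ‘mistake’, on this picture, lives in the gap between what the schema says about attention and what the mechanism actually is.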

That still leaves us in some difficulty over the difference between me and other people. If my sense of my own awareness is generated in exactly the same way as my sense of the awareness of others, it ought to seem equally distant; but it doesn’t: it seems markedly more present and far less deniable.

More fundamentally, I still don’t really see why my attention should be misperceived. In the case of colours, the misrepresentation of reality comes from two sources, I think. One is the inadequacy of our eyes; our brain has to make do with very limited data on colour (and on distance and other factors) and so has to do things like hypothesising yellow light where it should be recognising both red and green, for example. Second, the brain wants to make it simple for us and so tries desperately to ensure that the same objects always look the same colour, although the wavelengths being received actually vary according to conditions. I find it hard to see what comparable difficulties affect our perception of attention. Why doesn’t it just seem like attention? Graziano’s view of it as a second-order matter explains how it can be wrong about attention, but not really why.

So I think the theory is less radical than it seems, and doesn’t quite nail the matter on some important points: but it does make certain kinds of sense and at the very least helps keep us roused from our dogmatic slumbers. Here’s a wild thought inspired (but certainly not endorsed) by Graziano. Suppose our sense of qualia really does come from a kind of primitive attention-detecting module. It detects our own attention and supplies that qualic feel, but since it also (in fact primarily) detects other people’s attention, should it not also provide a bit of a qualic feel for other people too? Normally when we think of our beliefs about other people, we remain in the explicit, higher realms of cognition: but what if we stay at a sort of visceral level, what if we stick with that hair-on-the-back-of-the-neck sensation? Could it be that now and then we get a whiff of other people’s qualia? Surely too heterodox an idea to contemplate…

We’re often told that when facing philosophical problems, we should try to ‘carve them at the joints’. The biggest joint on offer in the case of consciousness has seemed to be the ‘explanatory gap’ between the physical activity of neurons and the subjective experience of consciousness. Now, in the latest JCS, Reggia, Monner, and Sylvester suggest that there is another gap, and one where our attention should rightly be focussed.

They suggest that while the simulation of certain cognitive processes has proceeded quite well, the project of actually instantiating consciousness computationally has essentially got nowhere.  That project, as they say, is affected by a variety of problems about defining and recognising success. But the real problem lies in an unrecognised issue: the computational explanatory gap. Whereas the explanatory gap we’re used to is between mind and brain, the computational gap is between high-level computational algorithms and low-level neuronal activity. At the high level, working top-down, we’ve done relatively well in elucidating how certain kinds of problem-solving, goal-directed kinds of computation work, and been able to simulate them relatively effectively.  At the neuronal, bottom-up level we’ve been able to explain certain kinds of pattern-recognition and routine learning systems. The two different kinds of processing have complementary strengths and weaknesses, but at the moment we have no clear idea of how one is built out of the other. This is the computational explanatory gap.
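
To make the contrast vivid, here is a small sketch of my own (nothing to do with the paper’s examples, and no claim that either snippet models the brain): the first function is the explicit, goal-directed, top-down kind of computation they have in mind at the high level; the second is the weight-tweaking pattern learner characteristic of the bottom-up level. The gap, on their account, is that we have no clear story about how processing of the first kind is actually built out of processing of the second.

```python
# Two styles of computation, side by side (illustrative toy only).

from collections import deque

# High-level, goal-directed computation: explicit states, an explicit goal,
# systematic search (shortest path through a tiny state graph).
def plan(graph, start, goal):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Low-level, pattern-recognition style computation: no explicit goals or rules,
# just weights nudged until the right outputs tend to come out (a perceptron
# learning the AND function).
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

if __name__ == "__main__":
    print(plan({"A": ["B", "C"], "B": ["D"], "C": ["D"]}, "A", "D"))
    print(train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]))
```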

In philosophy, the authors plausibly claim, this important gap has been overlooked because in philosophical terms these are all ‘easy problem’ matters, and so tend to be dismissed as essentially similar matters of detail. In psychology, by contrast, the gap is salient but not clearly recognised as such: the lower-level processes correspond well to those identified as sub-conscious, while the higher-level ones match up with the reportable processes generally identified as conscious.

If Reggia, Monner and Sylvester are right, the well-established quest for the neural correlates of consciousness has been all very well, but what we really need is to bridge the gap by finding the computational correlates of consciousness. As a project, bridging this gap looks relatively promising, because it is all computational. We do not need to address any spooky phenomenology, we do not need to wonder how to express ‘what it is like’, or deal with anything ineffable; we just need to find the read-across between neural networking and the high-level algorithms which we can sort of see in operation. That may not be easy, but compared to the Hard Problem it sounds quite tractable. If solved, it will deliver a coherent account right from neural activity through to high-level decision making.

Of course, that leaves us with the Hard Problem unsolved, but the authors are optimistic that success might still banish the problem. They draw an analogy with artificial life: once it seemed obvious that there was a mysterious quality of being alive, and it was unclear how simple chemistry and physics could ever account for it. That problem of life has never been solved in those terms, but as our understanding of the immensely subtle chemistry of living things has improved, it has gradually come to seem less and less obvious that it is really a problem. In a similar way the authors hope that if the computational explanatory gap can be bridged, so that we gradually develop a full account of cognitive processes from the ground-level firing of neurons up to high-level conscious problem-solving, the Hard Problem will gradually cease to seem like something we need to worry about.

That is optimistic, but not unreasonably so, and I think the new perspective offered is a very interesting and plausible one. I’m convinced that the gap exists and that it needs to be bridged; but I’m less sure that it can easily be done. It might be that Reggia, Monner, and Sylvester are affected in a different way by the same kind of outlook they criticise in philosophers: these are all computational problems, so they’re all tractable. I’m not sure how we can best address the gap, and I suspect it’s there not just because people have failed to recognise it, but because it is also genuinely difficult to deal with.

For one thing, the authors tend to assume the problem is computational, and it’s not clear that computation is of the essence here. The low-level processes at a neuronal level don’t look to be based on running any algorithm – that’s part of the nature of the gap. High-level processes may be capable of being simulated algorithmically, but that doesn’t mean that’s the way the brain actually does them. Take the example of catching a ball – how do we get to the right place to intercept a ball flying through the air? One way to do it would be some complex calculations about perspective and vectors: the brain could abstract the data, do the sums, and send back the instructions that result. We could simulate that process in a computer quite well. But we know – I think – that that isn’t the way it’s actually done: the brain uses a simpler and quicker process which never involves abstract calculation, but is based on straight matching of two inputs; a process which incidentally corresponds to a sub-optimal algorithm, but one that is good enough in practice. We just run forward if the elevation of the ball is dropping and back if it’s rising. Fielders cannot say in advance where the ball is going to land, yet by following that rule they end up in the right place when it arrives. It might be that all the ‘higher-level’ processes are like this, and that an attempt to match them up with ideally-modelled algorithms is therefore categorically off-track.
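
Just to make that heuristic concrete, here is a toy sketch of my own (not anything from the paper, and certainly not a model of what the brain does; the trajectory, speeds and distances are made-up illustrative numbers). The fielder never computes a landing point: it simply moves in when the ball’s elevation angle is falling and back when it is rising.

```python
import math

# Toy version of the 'run in if the ball's elevation angle is falling,
# back if it's rising' rule described above.  Illustrative numbers only.

def simulate(ball_speed=30.0, launch_deg=45.0, fielder_x=70.0,
             fielder_speed=7.0, dt=0.01, g=9.81):
    """Ball is hit from x = 0 along the x-axis; the fielder starts at
    fielder_x and reacts only to changes in the ball's apparent elevation."""
    angle = math.radians(launch_deg)
    vx, vy = ball_speed * math.cos(angle), ball_speed * math.sin(angle)
    bx, by = 0.0, 0.0
    prev_elev = None
    while by >= 0.0:
        bx += vx * dt
        by += vy * dt
        vy -= g * dt
        # Elevation angle of the ball as seen from the fielder's position.
        elev = math.atan2(by, fielder_x - bx)
        if prev_elev is not None:
            if elev < prev_elev:                    # angle falling: ball will drop short
                fielder_x -= fielder_speed * dt     # run in, towards the batter
            elif elev > prev_elev:                  # angle rising: ball will carry further
                fielder_x += fielder_speed * dt     # run back
        prev_elev = elev
    return bx, fielder_x                            # landing point vs. fielder's final spot

if __name__ == "__main__":
    landing, fielder = simulate()
    print(f"ball lands at {landing:.1f} m, fielder finishes at {fielder:.1f} m")
```

With a crude all-or-nothing rule like this the fielder ends up close to, though not exactly on, the landing spot; fuller accounts of the heuristic have the fielder modulating speed so the angle changes smoothly. The point here is only that nothing resembling a trajectory calculation is involved.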

Even if those doubts are right, however, it doesn’t mean that the proposed re-framing of the investigation is wrong or unhelpful, and in fact I’m inclined to think it’s a very useful new perspective.