GOFAI’s not dead

I was brought up short by this bold assertion recently:

 … since last two decades, research towards making machines conscious has gained great momentum

Surely not? My perception of the state of play is quite different, at any rate; it seems to me that the optimism and energy which were evident in the quest for machine consciousness twenty or perhaps thirty years ago have gradually ebbed. Not that anyone has really changed their mind on the ultimate feasibility of a conscious artefact, but expectations about when it might be achieved have generally been pushed further back into the future. Effort has been redirected towards related but more tractable problems, ones that typically move us on in useful ways but do not attempt to crack the big challenge in a single bite. You could say that the grand project is in a state of decline, though I think it would be kinder, and perhaps more accurate, to say it has moved into a period of greater maturity.

That optimistic aside was from Starzyk and Prasad, at the start of their paper A Computational Model of Machine Consciousness in the International Journal of Machine Consciousness. It’s not really fair to describe what they’re offering as GOFAI (Good Old-Fashioned Artificial Intelligence) in the original derogatory sense of the term, but I thought their paper had a definite whiff of the heady optimism of those earlier days.

Notwithstanding that optimism, their view is that there are still some bits of foundational work to be done…

there is a lack of a physical description of consciousness that could become a foundation for such models.

…and of course they propose to fill the gap.

They start off with a tip of the hat to five axioms proposed by Aleksander and Dunmall back in 2003 which suggest essential functional properties of a conscious system: embodiment and situatedness, episodic memory, attention, goals and motivation, and emotions. These are said to be necessary but not sufficient conditions for consciousness. If we were in an argumentative mood, I think we could put together a case that some of these are not necessarily essential, but we can take it as a useful list for all practical purposes, which is very much where Starzyk and Prasad are coming from.

They give a quick review of some scientific theories (oddly, John Searle gets a mention here) and then some philosophical ones. This is avowedly not a comprehensive survey, nor a very lengthy one, and the selection is idiosyncratic. Prominent mentions go to Rosenthal’s HOT (Higher-Order Thought theory – roughly the view that a thought is conscious if there is another thought about it) and Baars’ Global Workspace (which must be the most-cited theoretical model by some distance).

In an unexpected and refreshing leap away from the Anglo-Saxon tradition they also cite Nisargadatta, a leading exponent of the Advaita tradition, on the distinction between awareness and consciousness. The two words have been recruited to deal with a lot of different distinctions over the years: generally I think ‘awareness’ tends to get used for mere sensory operation or something of that kind, while ‘consciousness’ is reserved for something more ruminative. Nisargadatta seems to be on a different tack:

When he uses the term “consciousness”, he seems to equate that term with the knowledge of “I Am”. On the other hand, when he talks about “awareness”, he points to the absolute, something altogether beyond consciousness, which exists non-dualistically irrespective of the presence or absence of consciousness. Thus, according to him, awareness comes first and it exists always. Consciousness can appear and disappear, but awareness always remains.

But then we’re also told:

 A plant may be aware of a damage done to it [!], but it cannot be conscious about it, since it is not intelligent. In a similar way a worm cut in half is aware and may even feel the pain but it is not conscious about its own body being destroyed.

To allow awareness to a plant seems like the last extreme of generosity in this area – right at the final point before we tip over into outright panpsychism – but in general these remarks make it sound more as if we’re dealing with something like the kind of distinction we’re most used to.

Then we have a section on evolution and the development of consciousness, including a table. While I understand that Starzyk and Prasad are keen to situate their account securely in an evolutionary context, it does seem premature to me to discuss the evolution of consciousness before you’ve defined it. In the case of giraffes’ necks, say, there may not be too much of a difficulty, but consciousness is a slippery concept and there is always a danger that you end up discussing something different from, and simpler than, what you first had in mind.

So on to the definition.

…our aim was not to remove ambiguity from philosophers’ discussion about intelligence or various types of intelligence, but to describe mechanisms and the minimum requirements for the machine to be considered intelligent. In a similar effort, we will try to define machine consciousness in functional terms, such that once a machine satisfies this definition, it is conscious, disregarding the level or form of consciousness it may possess.

 I felt a slight sense of foreboding at this, but here at last is the definition itself.

A machine is conscious if besides the required mechanisms for perception, action, learning and associative memory, it has a central executive that controls all the processes (conscious or subconscious) of the machine…

…[the] central executive, by relating cognitive experience to internal motivations and plans, creates self-awareness and conscious states of mind.

 I’m afraid this too reminds me of the old days when we were routinely presented with accounts that described a possible architecture without ever giving us any explanation of how this particular arrangement of functions gave rise to the magic inner experience of consciousness itself. We ask ourselves: could we put together a machine with a central executive of this kind that wasn’t conscious at all? It’s hard to see why not.

I suppose the central executive idea is why Starzyk and Prasad feel keen on the Global Workspace, though I think it would be wrong to take that as genuinely similar; rather than a central control module, it’s a space where miscellaneous functions can co-operate in an anarchic but productive manner.

Starzyk and Prasad go on to flesh out the model, which consists of three main functional blocks – Sensory-motor, Episodic Memory and Learning, and Central Executive; but none of this helped me with the basic question. There’s also an interesting suggestion that attention switching is analogous to the saccades performed by our eyes. These are rapid ballistic movements of the eyes towards things of potential interest (like a flash of light in a dark place); I’m not precisely sure how the analogy works but it seems thought-provoking at least.
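For what it’s worth, here is a toy rendering of that sort of arrangement, in Python, with names I have invented myself rather than taken from the paper. It has a sensory block, an episodic memory, and a central executive that attends to the most salient channel, relates it to its motivations and picks a plan – everything the definition asks for, in a trivial way – and it is transparently not conscious, which is rather my point.

```python
# A toy sketch of a three-block architecture with a 'central executive'.
# The class names and behaviour are my own invention, not Starzyk and Prasad's code.

from dataclasses import dataclass, field

@dataclass
class SensoryMotor:
    """Holds the latest readings from a set of named sensory channels."""
    channels: dict = field(default_factory=dict)

    def sense(self, readings):
        self.channels.update(readings)

@dataclass
class EpisodicMemory:
    """Stores (percept, action) episodes and recalls the most recent match."""
    episodes: list = field(default_factory=list)

    def store(self, percept, action):
        self.episodes.append((percept, action))

    def recall(self, percept):
        for past_percept, past_action in reversed(self.episodes):
            if past_percept == percept:
                return past_action
        return None

class CentralExecutive:
    """Relates the attended percept to internal motivations and picks a plan."""
    def __init__(self, sensors, memory, motivations):
        self.sensors = sensors
        self.memory = memory
        self.motivations = motivations  # e.g. {"seek_light": 1.0}

    def attend(self):
        # 'Saccade': attention jumps wholesale to the strongest current signal.
        return max(self.sensors.channels, key=self.sensors.channels.get)

    def step(self):
        focus = self.attend()
        goal = max(self.motivations, key=self.motivations.get)
        action = self.memory.recall(focus) or f"{goal}_via_{focus}"
        self.memory.store(focus, action)
        return action

sensors = SensoryMotor()
executive = CentralExecutive(sensors, EpisodicMemory(), {"seek_light": 1.0})
sensors.sense({"light": 0.9, "sound": 0.2})
print(executive.step())  # -> "seek_light_via_light"
```

Nothing in that loop of attending, recalling and planning obviously generates any inner experience; that is exactly the gap the architecture leaves unexplained.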

They also provide a very handy comparison with a number of other models – a useful section, but it sort of undermines the earlier assertion that there is a gap in this area waiting for attention.

Overall, it turns out that the sense of older times we got at the beginning of the paper is sort of borne out by the conclusion. Many technically inclined people have always been impatient with the philosophical whiffling and keen to get on with building the machine instead, confident that the theoretical insights would bloom from a soil fertilised with practical achievements. But by now we can surely say we’ve tried that, and it didn’t work. Researchers forging ahead without a clear philosophy ran into the frame problem, or simple under-performance, or the inability of their machines to deal with intentionality and meaning. They either abandoned the attempt, adopted more modest goals, or got sidetracked into vast enabling projects such as the compilation of a total encyclopedia to embody background knowledge or the neuronal modelling of an entire brain. In a few cases brute force approaches actually ended up yielding some practical results, but never touched the core issue of consciousness.

I don’t quite know whether it’s sort of depressing that effort is still being devoted to essentially doomed avenues, or sort of heartening that optimism never quite dies.

So I hear

We have become accustomed over the years to exaggerated claims about brain research. I think my favourite will always be the unexpected claim from British Telecom in 1996 that they were to develop a chip, by 2025, which would fit behind the human eye. I suspect the original idea was to record all retinal activity and hence have, in a sense, a record of the person’s whole visual experience – an extremely ambitious but not inherently impossible goal; but somewhere along the way they convinced themselves they would be able to record, not just optical input, but people’s thoughts, too. Realising there was no point in under-selling this amazing future achievement, they announced they were going to call it the ‘Soul Catcher’. Dr Chris Winter was prepared to go still further and went on record as saying that this was actually “the end of death”. I wonder how the project is getting along now?

Not many researchers can match the scope of Dr Winter’s imagination or the sheer insolence of his chutzpah, but we have often seen claims to have decoded the mind which are essentially based on identifying simple correspondences between scan results and an item of mental activity. The subject is shown a picture of, say, John Malkovich and scan results are obtained: then, from the scan results, the researchers succeed in telling, with statistically significant rates of success, the occasions when the subject is looking at the same picture of John Malkovich and not one of John Cusack. Voilà! The secret language of the brain is cracked! It is not shown that similar scan patterns can be obtained from other subjects looking at the picture of John Malkovich, or from the same subject looking at other pictures of John Malkovich, or thinking about John Malkovich, or even from the same subject looking at the same picture the next day. No general encoding of mental activity is revealed, no general encoding of visual activity; in fact we don’t even get for sure a general encoding of that particular picture of John Malkovich in that particular subject on that particular day. The only truth securely revealed is that if you have an experience and then soon afterwards another one just like it, you probably use quite a few of the same neurons in responding to it.
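To make the worry concrete, here is a small illustrative sketch of the kind of thing these studies do, with entirely made-up numbers and no connection to any real dataset: train a nearest-centroid classifier on simulated ‘scan’ vectors from one subject in one session, and it will happily tell Malkovich from Cusack within that session, while telling us nothing at all about a different subject or a different day.

```python
# Illustrative only: simulated 'scan' vectors and a nearest-centroid classifier.
# The numbers and setup are invented; no real data or study is being modelled.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# One subject, one session: an arbitrary characteristic pattern per picture,
# plus fresh noise on each presentation.
pattern = {"malkovich": rng.normal(size=n_voxels),
           "cusack": rng.normal(size=n_voxels)}

def present(picture, noise=1.0):
    return pattern[picture] + noise * rng.normal(size=n_voxels)

# 'Train': average a few noisy presentations to get a centroid per picture.
centroids = {p: np.mean([present(p) for _ in range(10)], axis=0) for p in pattern}

def classify(scan):
    return min(centroids, key=lambda p: np.linalg.norm(scan - centroids[p]))

# Within the same subject and session this works almost every time...
hits = sum(classify(present("malkovich")) == "malkovich" for _ in range(100))
print(f"same-session accuracy: {hits}%")

# ...but another subject (or another day) has a different arbitrary pattern,
# about which the stored centroids know nothing.
other = {p: rng.normal(size=n_voxels) for p in pattern}
hits = sum(classify(other["malkovich"] + rng.normal(size=n_voxels)) == "malkovich"
           for _ in range(100))
print(f"other-subject accuracy: {hits}%  (chance would be about 50)")
```

The within-session success is perfectly real; it just rests on an arbitrary pattern, not on any general code.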

So it was with a certain sinking feeling that I heard the BBC announce that researchers had decoded the language of the brain. The radio report was quite definite about it; they could now reconstruct with their receiving equipment words that subjects were merely thinking: they played back the sound of a suitably robotic-sounding word apparently picked up from someone’s inner thoughts. They suggested this could be brought into use in identifying and communicating with ‘locked-in’ patients, those who though immobilised remain mentally alert. Similar reports appear elsewhere in the press today.

The paper behind all this is here: as often happens, the paper is far more circumspect than the publicity. It makes generally modest and well-supported claims; only in one place does it venture a little speculation, and even then it doesn’t pretend that it’s anything else.

What actually happened is that the experimenters took advantage of an unusual therapeutic situation which allowed them to record directly from electrodes on the brain – a technique which yields far better resolution than any form of scanning. They read their subjects a list of words and noted patterns of activity; they were then able to produce a program which automatically reconstructed the characteristics of the sound being heard, sufficiently well for the right word from the list to be identified with a high level of success.
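As far as I can make out, the reconstruction step in such a pipeline has roughly the following shape – this is my own sketch, not the authors’ code, and it assumes a simple linear mapping from electrode activity to a spectrogram-like representation of the sound, which may well be cruder than what was actually done: fit a regression from neural activity to sound features, then identify whichever word on the list has the nearest template.

```python
# A sketch of one plausible shape for the decoding pipeline; not the authors' code.
# Assumes a simple linear mapping from electrode activity to a spectrogram-like
# representation of the heard sound; the real method may be quite different.
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_freq_bins, n_trials = 64, 32, 200

# Invented training data: neural activity X recorded while sounds with spectra S are heard.
true_map = rng.normal(size=(n_electrodes, n_freq_bins))
X = rng.normal(size=(n_trials, n_electrodes))
S = X @ true_map + 0.5 * rng.normal(size=(n_trials, n_freq_bins))

# Fit the linear reconstruction model by least squares.
W, *_ = np.linalg.lstsq(X, S, rcond=None)

# Templates: one target spectrum per word on the (invented) list.
word_templates = {w: rng.normal(size=n_freq_bins) for w in ["yes", "no", "water", "help"]}

def identify(neural_activity):
    """Reconstruct a spectrum from neural activity, then pick the closest word template."""
    reconstructed = neural_activity @ W
    return min(word_templates, key=lambda w: np.linalg.norm(reconstructed - word_templates[w]))

# A new trial: activity whose mapped spectrum matches the "water" template.
trial = np.linalg.pinv(true_map.T) @ word_templates["water"]
print(identify(trial))  # -> "water"
```

The point of the sketch is that what gets recovered is a representation of the sound, matched against a closed list of candidates – not anything resembling a readout of thoughts.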

This is not without interest – it sheds some light on the brain’s processing of heard information. It shows that quite a lot of information about the actual sound survives at least some distance into the processing – a result we can perhaps compare with visual processing. The kind of worries I alluded to above about generalisability are not absent here, but we do seem, if I’ve understood correctly, to have got results that should be transferable and reproducible between subjects.

But reading thoughts? Let’s not worry about that for a moment and ask ourselves whether a much improved version of this technology could tell us what someone was saying as well as what someone was hearing. It seems to me that that would be a whole new game. When the brain interprets sounds as words it necessarily concerns itself with the properties of sound, because that’s what it has to deal with; when it delivers an utterance it has no direct interest in the sound itself, only in tongue, palate, breathing and so on (that may be over-simplifying a touch, admittedly – it would be surprising, for example, if feedback didn’t play a significant role). It’s not likely that the neural patterns for recognising a spoken word are the same as those for speaking it, any more than the neural activity required for reading a word is generally the same as that for writing it, except inasmuch as both probably involve thinking about the word. For once Heraclitus is wrong: the path up is not the same as the path down.

So what about thinking? Is it like hearing, or like speaking? Well, I doubt very much whether thinking of John Malkovich, or even thinking of the words ‘John Malkovich’ necessarily resembles decoding a sound or preparing to manipulate the lips – unless we are deliberately going through the act of mentally entertaining the idea of hearing or speaking. In the latter case it might plausibly be the case that there are at least some mirror neurons involved in both activities which would bridge the gap between thought and act sufficiently to produce some recognisable activity.

So, if these results can be generalised to a system capable of recognising words in general, and if it’s one that demonstrably works for different subjects, and if a way can be found of running it without taking the top of the skull off, and if it turns out that thinking about the sound of a word is sufficiently connected to actually hearing it to stimulate neural patterns which are sufficiently similar that the system can still pick them up in an identifiable form, and if we’re talking about someone who is deliberately thinking about the sound of a word, then yes, there is some hope here that in that sense we might be able in practice to identify the word being thought.

I suppose that must be worth half a cheer.