So I hear

We have become accustomed over the years to exaggerated claims about brain research. I think my favourite will always be the unexpected claim from British Telecom in 1996 that they were to develop a chip, by 2025, which would fit behind the human eye. I suspect the original idea was to record all retinal activity and hence have, in a sense, a record of the person’s whole visual experience – an extremely ambitious but not inherently impossible goal; but somewhere along the way they convinced themselves they would be able to record, not just optical input, but people’s thoughts, too. Realising there was no point in under-selling this amazing future achievement, they announced they were going to call it the ‘Soul Catcher’. Dr Chris Winter was prepared to go still further and went on record as saying that this was actually “the end of death”. I wonder how the project is getting along now?

Not many researchers can match the scope of Dr Winter’s imagination or the sheer insolence of his chutzpah, but we have often seen claims to have decoded the mind which are essentially based on identifying simple correspondences between scan results and an item of mental activity. The subject is shown a picture of, say, John Malkovich and scan results are obtained; then, from the scan results, the researchers succeed in telling, with statistically significant rates of success, the occasions when the subject is looking at the same picture of John Malkovich and not one of John Cusack. Voilà! The secret language of the brain is cracked! It is not shown that similar scan patterns can be obtained from other subjects looking at the picture of John Malkovich, or from the same subject looking at other pictures of John Malkovich, or thinking about John Malkovich, or even from the same subject looking at the same picture the next day. No general encoding of mental activity is revealed, and no general encoding of visual activity; in fact we don’t even get for sure a general encoding of that particular picture of John Malkovich in that particular subject on that particular day. The only truth securely revealed is that if you have an experience and then soon afterwards another one just like it, you probably use quite a few of the same neurons in responding to it.
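To make the worry concrete, here is a toy sketch in Python of the sort of two-picture decoding I have in mind (entirely synthetic ‘voxel’ data and a simple nearest-centroid classifier, not the method of any particular study); above-chance accuracy on held-out trials is all such a result demonstrates.

```python
# A toy sketch of the two-picture decoding paradigm described above (synthetic
# "voxel" data, nearest-centroid classification -- not the method of any
# particular study). Above-chance accuracy on held-out trials is all it shows.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 200, 40

# Hypothetical response templates for the two pictures, plus per-trial noise.
template_a = rng.normal(size=n_voxels)
template_b = rng.normal(size=n_voxels)
trials_a = template_a + rng.normal(scale=2.0, size=(n_trials, n_voxels))
trials_b = template_b + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Train on half the trials (average them into centroids), test on the rest.
train_a, test_a = trials_a[:20], trials_a[20:]
train_b, test_b = trials_b[:20], trials_b[20:]
centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)

def classify(pattern):
    # Assign the pattern to whichever centroid it correlates with more strongly.
    corr_a = np.corrcoef(pattern, centroid_a)[0, 1]
    corr_b = np.corrcoef(pattern, centroid_b)[0, 1]
    return "A" if corr_a > corr_b else "B"

hits = sum(classify(p) == "A" for p in test_a) + sum(classify(p) == "B" for p in test_b)
print(f"Held-out accuracy: {hits / (len(test_a) + len(test_b)):.0%}")
# Good accuracy here says only that this subject's two response patterns are
# statistically separable in this session -- nothing about a general code.
```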

So it was with a certain sinking feeling that I heard the BBC announce that researchers had decoded the language of the brain. The radio report was quite definite about it; they could now reconstruct with their receiving equipment words that subjects were merely thinking: they played back the sound of a suitably robotic-sounding word apparently picked up from someone’s inner thoughts. They suggested this could be brought into use in identifying and communicating with ‘locked-in’ patients, those who, though immobilised, remain mentally alert. Similar reports appear elsewhere in the press today.

The paper behind all this is here: as often happens, the paper is far more circumspect than the publicity. It makes generally modest and well-supported claims; only in one place does it venture a little speculation, and even then it doesn’t pretend to be anything else.

What actually happened is that the experimenters took advantage of an unusual therapeutic situation which allowed them to record directly from electrodes on the brain – a technique which yields far better resolution than any form of scanning. They read their subjects a list of words and noted the patterns of activity; they were then able to produce a program which automatically reconstructed the characteristics of the sound being heard, well enough for the right word from the list to be identified with a high rate of success.
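As far as I can tell, the identification step works roughly along the lines sketched below (a toy illustration with made-up numbers, not the authors’ actual code): a linear model is fitted from electrode activity to the sound spectrogram, and the reconstruction is then matched against each candidate word’s real spectrogram, the best match naming the word.

```python
# A toy illustration (made-up numbers, not the authors' procedure) of
# identifying a heard word from a known list: fit a linear map from electrode
# activity back to the sound spectrogram, then pick the candidate word whose
# real spectrogram best matches the reconstruction.
import numpy as np

rng = np.random.default_rng(1)
n_words, n_freq_bins, n_electrodes = 40, 32, 16

# Hypothetical spectrogram "fingerprints" for the words, and a fake linear
# relationship between spectrograms and electrode responses (plus noise).
word_spectrograms = rng.normal(size=(n_words, n_freq_bins))
mixing = rng.normal(size=(n_freq_bins, n_electrodes))
responses = word_spectrograms @ mixing + rng.normal(scale=0.5, size=(n_words, n_electrodes))

# Fit the reconstruction weights on all words but one, by least squares.
held_out = 0
train = [i for i in range(n_words) if i != held_out]
weights, *_ = np.linalg.lstsq(responses[train], word_spectrograms[train], rcond=None)

# Reconstruct the held-out word's spectrogram and match it against the list.
reconstruction = responses[held_out] @ weights
similarity = [np.corrcoef(reconstruction, s)[0, 1] for s in word_spectrograms]
print("Best-matching word index:", int(np.argmax(similarity)), "| true index:", held_out)
# Even when this picks the right word, it is choosing from a short, known list
# of heard sounds -- a long way from reading arbitrary inner speech.
```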

This is not without interest – it sheds some light on the brain’s processing of heard information. It shows that quite a lot of information about the actual sound survives at least some distance into the processing – a result we can perhaps compare with visual processing. The worries about generalisability I alluded to above are not absent here, but we do seem, if I’ve understood correctly, to have got results that should be transferable and reproducible between subjects.

But reading thoughts? Let’s not worry about that for a moment and ask ourselves whether a much improved version of this technology could tell us what someone was saying as well as what someone was hearing. It seems to me that that would be a whole new game. When the brain interprets sounds as words it necessarily concerns itself with the properties of sound, because that’s what it has to deal with; when it delivers an utterance it has no direct interest in the sound as such, only in tongue, palate, breathing and so on (that may be over-simplifying a touch, admittedly – it would be surprising, for example, if feedback didn’t play a significant role). It’s not likely that the neural patterns for recognising a spoken word are the same as those for speaking it, any more than the neural activity required for reading a word is generally the same as that required for writing it, except inasmuch as both probably involve thinking about the word. For once Heraclitus is wrong: the path up is not the same as the path down.

So what about thinking? Is it like hearing, or like speaking? Well, I doubt very much whether thinking of John Malkovich, or even thinking of the words ‘John Malkovich’, necessarily resembles decoding a sound or preparing to manipulate the lips – unless we are deliberately going through the act of mentally entertaining the idea of hearing or speaking. In that latter case it is plausible that there are at least some mirror neurons involved in both activities, which would bridge the gap between thought and act sufficiently to produce some recognisable activity.

So, if these results can be generalised to a system capable of recognising words in general, and if it’s one that demonstrably works for different subjects, and if a way can be found of running it without taking the top of the skull off, and if it turns out that thinking about the sound of a word is connected closely enough to actually hearing it that the system can still pick up the resulting neural patterns in an identifiable form, and if we’re talking about someone who is deliberately thinking about the sound of a word, then yes, there is some hope that in that sense we might in practice be able to identify the word being thought.

I suppose that must be worth half a cheer.
