Rosehip Neurons of Consciousness

A new type of neuron is a remarkable discovery; finding one in the human cortex makes it particularly interesting, and the further fact that it cannot be found in mouse brains and might well turn out to be uniquely human – that is legitimately amazing. A paper (preprint here) in Nature Neuroscience announces the discovery of ‘rosehip neurons’, named for their large, “rosehip”-like axonal boutons.

There has already been some speculation that rosehip neurons might have a special role in distinctive aspects of human cognition, especially human consciousness, but at this stage no-one really has much idea. Rosehip neurons are inhibitory, but inhibiting other neurons is often a key function which could easily play a big role in consciousness. Most of the traffic between the two hemispheres of the human brain is inhibitory, for example, possibly a matter of the right hemisphere, with its broader view, regularly ‘waking up’ the left out of excessively focused activity.

We probably shouldn’t, at any rate, expect an immediate explanatory breakthrough. One comparison which may help to set the context is the case of spindle neurons. First identified in 1929, these are a notable feature of the human cortex and at first appeared to occur only in the great apes – they, or closely analogous neurons, have since been spotted in a few other animals with large brains, such as elephants and dolphins. I believe we still really don’t know why they’re there or what their exact role is, though a good guess seems to be that it might be something to do with making larger brains work efficiently.

Another warning against over-optimism might come from remembering the immense excitement about mirror neurons some years ago. Their response to a given activity, both when performed by the subject and when observed being performed by others, seemed to some to hold out a possible key to empathy, theory of mind, and even more. Alas, to date that hope hasn’t come to anything much, and in retrospect it looks as if rather too much significance was read into the discovery.

The distinctive presence of rosehip neurons is definitely a blow to the usefulness of rodents as experimental animals for the exploration of the human brain, and it’s another item to add to the list of things that brain simulators probably ought to be taking into account, if only we could work out how. That touches on what might be the most basic explanatory difficulty here, namely that you cannot work out the significance of a new component in a machine whose workings you don’t really understand to begin with.

There might indeed be a deeper suspicion that a new kind of neuron is simply the wrong kind of thing to explain consciousness. We’ve learnt in recent years that the complexity of a single neuron is very much not to be under-rated; they are certainly more than the simple switching devices they have at times been portrayed as, and they may carry out quite complex processing. But even so, there is surely a limit to how much clarification of phenomenology we can expect a single cell to yield, in the absence of the kind of wider functional theory we still don’t really have.

Yet what better pointer to such a wider functional theory could we have than an item unique to humans with a role which we can hope to clarify through empirical investigation? Reverse engineering is a tricky skill, but if we can ask ourselves the right questions maybe that longed-for ‘Aha!’ moment is coming closer after all?

 

Meh-bots

Do robots care? Aeon has an edited version of the inaugural Margaret Boden Lecture, delivered by Boden herself. You can see the full lecture above. Among other things, she tells us that the robots are not going to take over because they don’t care. No computer has actual motives, the way human beings do, and they are indifferent to what happens (if we can even speak of indifference in a case where no desire or aversion is possible).

No doubt Boden is right; it’s surely true at least that no current computer has anything that’s really the same as human motivation. For me, though, she doesn’t provide a convincing account of why human motives are special, and why computers can’t have them, and perhaps doesn’t sufficiently engage with the possibility that robots might take over the world (or at least, do various bad out-of-control things) without having human motives, or caring what happens in the fullest sense. We know already that learning systems given goals by humans are prone to finding cheats or expedients never envisaged by the people who set up the task; while it seems a bit of a stretch to suppose that a supercomputer might enslave all humanity in pursuit of its goal of filling the world with paperclips (about which, however, it doesn’t really care), it seems quite possible that real systems might do some dangerous things. Might a self-driving car (have things gone a bit quiet on that front, by the way?) decide that its built-in goal of not colliding with other vehicles can be pursued effectively by forcing everyone else off the road?

What is the ultimate source of human motivation? There are two plausible candidates that Boden doesn’t mention. One is qualia; I think John Searle might say, for example, that it’s things like the quale of hunger, how hungriness really feels, that are the roots of human desire. That nicely explains why computers can’t have them, but for me the old dilemma looms. If qualia are part of the causal account, then they must be naturalisable and in principle available to machines. If they aren’t part of the causal story, how do they influence human behaviour?

Less philosophically, many people would trace human motives to the evolutionary imperatives of survival and reproduction. There must be some truth in that, but isn’t there also something special about human motivation, something detached from the struggle to live?

Boden seems to rest largely on social factors, which computers, as non-social beings, cannot share in. No doubt social factors are highly important in shaping and transmitting motivation, but what about Baby Crusoe, who somehow grew up with no social contact? His mental state may be odd, but would we say he has no more motives than a computer? Then again, why can’t computers be social, either by interacting with each other, or by joining in human society? It seems they might talk to human beings, and if we disallow that as not really social, we are in clear danger of begging the question.

For me the special, detached quality of human motivation arises from our capacity to imagine and foresee. We can randomly or speculatively envisage future states, decide we like or detest them, and plot a course accordingly, coming up with motives that don’t grow out of current circumstances. That capacity depends on the intentionality or aboutness of consciousness, which computers entirely lack – at least for now.

But that isn’t quite what Boden is talking about, I think; she means something in our emotional nature. That – human emotions – is a deep and difficult matter on which much might be said; but at the moment I can’t really be bothered…

 

Kolmogorov meaning

Is the meaning of life non-computable? Noson S. Yanofsky has a nice rumination about Kolmogorov complexity and meaning in Nautilus. Kolmogorov complexity, he tells us, is a measure based on the length of the shortest computer program that can generate a given string of digits. The shorter the program, the lower the complexity. This way of looking at it means that structure, which informally we might see as a sign of complexity, actually tends to reduce it, the most complex strings being the completely random ones.
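Kolmogorov complexity itself can’t be computed exactly, but the flavour of the idea can be shown with a toy sketch (mine, not Yanofsky’s): an ordinary compressor gives a crude, computable upper bound on ‘shortest description’, and a structured string duly comes out far simpler than a random one.

```python
# Toy illustration (my own, not from the article): using compressed length as a
# rough, computable stand-in for Kolmogorov complexity. True Kolmogorov
# complexity is uncomputable; this only shows the qualitative point that
# structure shortens descriptions while randomness resists compression.
import os
import zlib

def approx_complexity(s: bytes) -> int:
    """Length of the zlib-compressed string: a crude proxy for 'shortest program'."""
    return len(zlib.compress(s, 9))

structured = b"01" * 500        # 1000 bytes of a trivially repeating pattern
random_str = os.urandom(1000)   # 1000 bytes with no exploitable structure

print(approx_complexity(structured))   # small: the pattern has a short description
print(approx_complexity(random_str))   # close to 1000: randomness barely compresses
```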

The Kolmogorov complexity of a string is thus given by the length of the shortest program that can generate it. Interestingly, though, determining that shortest program is a non-computable problem, something that can apparently be proved by reductio, though Yanofsky does not trouble his readers with the actual proof.
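For what it’s worth, the usual reductio can be sketched informally (this is the standard textbook argument, not Yanofsky’s wording): if the complexity function were computable, we could write a short program that finds a string too complex for any short program to produce – a contradiction.

```python
# Informal sketch of the standard reductio (my paraphrase). Suppose, for
# contradiction, that Kolmogorov complexity were computable by some function K.
from itertools import count, product

def K(s: str) -> int:
    # Hypothetical: no such computable function can actually exist.
    raise NotImplementedError("Kolmogorov complexity is not computable")

def first_complex_string(n: int) -> str:
    """Return the first binary string whose (assumed computable) complexity exceeds n."""
    for length in count(0):
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if K(s) > n:          # a counting argument guarantees such a string exists
                return s

# For a large enough constant N, this whole program plus the call
# first_complex_string(N) is far shorter than N characters, yet it would output
# a string whose shortest description supposedly exceeds N. Contradiction, so K
# cannot be computable after all.
```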

Yanofsky notes that our efforts to give meaning to events generally take the form of looking for patterns, something you could see as loosely analogous to identifying the short programs whose lengths give the Kolmogorov values. Perhaps this, too, is an intractable problem, he suggests, so that we can never be sure there isn’t a deeper meaning we have missed; our search will never be over.

This is a matter of general philosophical pondering rather than a tight logical argument, but I found it a congenial point of view that slots easily in alongside the family of non-computable issues to do with the frame problem, radical translation, and so on which lie around the fringes of consciousness and suggest that it too involves dealing with problems that are computationally intractable.

In fact, while Yanofsky speaks of meaning in a very general sense, I think we might apply a similar analogy to meaning in the sense of intentionality or ‘aboutness’. Let’s suppose we are dealing with strings of characters and the task is to identify what those characters mean (let’s keep it simple and suppose ‘what it means’ is just a matter of identifying which physical object is referred to by the string of characters). Now any string of characters has an indefinitely long list of possible interpretations. We could read them in English, in French, or in any arbitrary encoding we care to devise, so that, in short, we can make them be about anything. But Grice helpfully tells us that we can assume in interpreting human utterances that only the minimum of necessary and relevant information has been provided. So in a way rather similar to the search for a Kolmogorov value, we need to look for the simplest meaning that would have required this string of characters to convey it.
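To make that loose analogy a little more concrete, here is an entirely invented toy comparison in the spirit of minimum description length: score each candidate reading of a string by the cost of stating the decoding rule plus whatever the rule leaves unexplained, and prefer the cheapest. The candidate rules and their costs below are made up purely to show the shape of the idea, not to model real interpretation.

```python
# A toy, entirely hypothetical illustration of the 'simplest meaning' idea:
# each candidate interpretation is charged for stating its decoding rule plus
# the compressed size of whatever it fails to explain; the cheapest overall
# description wins, as with the shortest program.
import zlib

message = b"the cat sat on the mat"

def total_cost(rule_cost: int, unexplained: bytes) -> int:
    """Description length = cost of the rule + size of the residue it can't account for."""
    return rule_cost + len(zlib.compress(unexplained))

candidates = {
    # rule name: (invented cost of stating the rule, residue the rule leaves unexplained)
    "plain English":        (10, b""),       # a short rule that accounts for the whole string
    "cipher with long key": (500, b""),      # accounts for everything, but the key is expensive
    "meaningless noise":    (30, message),   # cheap rule, but explains nothing at all
}

scores = {name: total_cost(cost, residue) for name, (cost, residue) in candidates.items()}
print(min(scores, key=scores.get))   # 'plain English': the cheapest total description
```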

Is that, too, a non-computable problem? Intuitively, I feel it probably is, though I’m not quite sure how we could shape a proof – perhaps something about examining a tricksy self-referential string like ‘the meaning of this string is not determinable’, but I leave that as an exercise for the reader.

Curiously, this reasoning seems to imply that all strings of characters that have a meaning at all have one correct meaning. We just can’t be sure (computationally at least) what it is.

Mary and the Secret Stones

Adam Pautz has a new argument to show that consciousness is irreducible (that is, it can’t be analysed down into other terms like physics or functions). It’s a relatively technical paper – a book length treatment is forthcoming, it seems – but at its core is a novel variant on the good old story of Mary the Colour Scientist. Pautz provides several examples in support of his thesis, and I won’t address them all, but a look at this new Mary seems interesting.

Pautz begins by setting out a generalised version of how plausible reductive accounts must go. His route goes over some worrying territory – he is quite Russellian, and he seems to take for granted the old and questionable distinction between primary and secondary qualities. However, if the journey goes through some uncomfortable places, the destination seems to be a reasonable place to be. This is a moderately externalist kind of reduction which takes consciousness of things to involve a tracking relation to qualities of real things out there. We need not worry about what kinds of qualities they are for current purposes, and primary and secondary qualities must be treated in a similar way. Pautz thinks that if he can show that reductions like this are problematic, that amounts to a pretty good case for irreducibility.

So in Pautz’s version, Mary lives on a planet where the outer surfaces of everything are black, grey, or white. However, on the inside they are brilliantly coloured, with red, reddish orange, and green respectively. All the things that are black outside are red inside, and so on, and this is guaranteed by a miracle ‘chemical’ process such that changes to the exterior colour are instantly reflected in appropriate changes inside. Mary only sees the outsides of things, so she has never seen any colours but black, white and grey.

Now Mary’s experience of black is a tracking relation to black reflectances, but in this world it also tracks red interiors. So does she experience both colours? If not, then which? A sensible reductionist will surely say that she only experiences the external colour, and they will probably be inclined to refine their definitions a little so that the required tracking relation requires an immediate causal connection, not one mediated through the oddly fixed connection of interior and exterior colours. But that by no means solves the problem, according to Pautz. Mary’s relation to red is only very slightly different to her relation to black. Similar relations ought to show some similarity, but in this case Mary’s relation to black is a colour experience, whereas her relation to red, intrinsically similar, is not a colour experience – or an experience of any kind! If we imagine Martha in another world experiencing a stone with a red exterior, then Martha’s relation to red and Mary’s are virtually identical, but have no similarity whatever. Suppose you had a headache this morning, Pautz suggests; could you then say that you were in a nearly identical state this afternoon, but that it was not the state of experiencing a headache – in fact, that it was no experience at all (not even, presumably, the experience of not having a headache)?

Pautz thinks that examples of this kind show that reductive accounts of consciousness cannot really work, and we must therefore settle for non-reductive ones. But he is left offering no real explanation of the relation of being conscious of something; we really have to take that as primitive, something just given as fundamental. Here I can’t help but sympathise with the reductionists; at least they’re trying! Yes, no doubt there are places where explanation has to stop, but here?

What about Mary? The thing that troubles me most is that remarkable chemical connection that guarantees the internal redness of things that are externally black. Now if this were a fundamental law of nature, or even some logical principle, I think we might be willing to say that Mary does experience red – she just doesn’t know yet (perhaps can never know?) that that’s what black looks like on the inside. If the connection is a matter of chance, or even guaranteed by this strange local chemistry, I’m not sure the similarity of the tracking relations is as great as Pautz wants it to be. What if someone holds up for me a series of cards with English words on one side? On the other, they invariably write the Spanish equivalent. My tracking relation to the two words is very similar, isn’t it, in much the same way as above? So is it plausible to say I know what the English word is, but that my relation to the Spanish word is not that of knowing it – that in fact that relation involves no knowledge of any kind? I have to say I think that is perfectly plausible.

I can’t claim these partial objections refute all of Pautz’s examples, but I’m keeping the possibility of reductive explanations open for now.