Retinoid Consciousness

It’s not often these days, it seems to me, that a ‘theatre of consciousness’ is proposed or defended: if the idea is mentioned, it’s more commonly to dismiss it or to distance a theory from it. All the more credit, then, to Arnold Trehub, whose Retinoid theory is just that, and a bold and sweeping example too.

A retinoid system, at its simplest, is an array of neurons which captures and retains a pattern of activity in the retina, the sensitive layer at the back of the eye which translates light into neuronal activity. The retinoid array retains the topology of the original pattern, and because the neurons are autaptic, i.e. they re-stimulate themselves, that pattern can be retained for a while. Naturally we need more than one such array to do anything much, and in fact the proposition is that a series of them act together to form a model of three-dimensional space. That makes it sound a simple matter, but of course it isn’t. Trehub proposes a series of mechanisms by which raw activity in the retina can be translated into a stable, three-dimensional model, making allowance, for example, for the way our foveas (the small central parts of the retinas that do most of the important work – our vision is only sharp in a tiny central area of the visual field) sweep restlessly but selectively across the scene, and so on.
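To make the autaptic retention idea concrete, here is a minimal toy sketch in Python. It is my own illustrative rendering, not Trehub’s actual neural model: the class name, parameters and update rule are all assumptions, chosen simply to show how self-re-stimulating cells can hold a retinotopic pattern after the stimulus is withdrawn.

```python
import numpy as np

class RetinoidArray:
    """Toy autaptic array: cells driven above threshold keep re-exciting
    themselves, so a retinotopic pattern persists after the input stops.
    (Illustrative only; parameters and rule are not Trehub's.)"""

    def __init__(self, shape=(32, 32), threshold=0.5, hold=0.99, leak=0.8):
        self.activity = np.zeros(shape)
        self.threshold = threshold
        self.hold = hold    # near-total self-sustainment for autaptic cells
        self.leak = leak    # faster fading for sub-threshold cells

    def step(self, retinal_input=None):
        if retinal_input is not None:
            # input maps point-for-point onto the array, preserving topology
            self.activity = np.maximum(self.activity, retinal_input)
        fired = self.activity > self.threshold
        # autaptic cells nearly sustain themselves; the rest decay quickly
        self.activity *= np.where(fired, self.hold, self.leak)
        return self.activity

# A spot of 'retinal' activity presented once is still held fifty steps later:
arr = RetinoidArray()
stimulus = np.zeros((32, 32))
stimulus[10:14, 10:14] = 1.0
arr.step(stimulus)
for _ in range(50):
    arr.step()                                # no further input
print(arr.activity[12, 12] > arr.threshold)   # True: the pattern is retained
```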

Those proposals look ingenious and plausible. I do wonder what the firing of a neuron in the retinoid model of space actually represents, in the end. It isn’t simply patterns of retinal activity any more, because the original patterns have been amended and extended – so you might argue that the system is not as retinoid as all that.  It’s tempting to think that each point of activity is a bit like a voxel, a three-dimensional pixel or a single tiny brick in a model world, but I’m not sure that’s right either. It certainly goes beyond that, because Trehub wants to bring in all the sensory modalities. How a spatial model works in this respect is not fully clear to me – I’m not sure how we deal with the location of smells, for example, something that seems inherently vague in reality. Perhaps smells are always registered in the retinoid as situated within the nose, although that doesn’t seem to capture the experience altogether; and what about hearing? It’s a common enough experience to hear a sound without having any clear idea of its point of origin, but it won’t quite do to say that sounds are located in the head either. I don’t know how “it sounds sort of like it’s coming from over there” would be represented in a spatial retinoid model. If we’re going on to deal with the whole world of mental phenomena, we’re clearly going to run into more problems over location – where are my meditations, my emotions, my intimations of immortality, my poignant apostrophisation of the snows of yesteryear?

Another key element in the system is a self-locus retinoid whose job is essentially to record the presumed location of the self in experienced space (and also, if I’ve understood correctly, imagined space, because we can use other retinoid structures to model non-existent spaces, or represent to ourselves our moving through locations we don’t actually occupy). This provides the basis for our sense of self, but importantly there’s much more to it than that.
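Continuing the toy sketch above – again my own illustration, with hypothetical names, rather than anything from Trehub – the self-locus can be pictured as a privileged coordinate within the modelled space, one that can be relocated through purely imagined positions without the body moving at all:

```python
class SelfLocus:
    """A privileged coordinate in the modelled space: the presumed
    location of the self (illustrative only, not Trehub's mechanism)."""

    def __init__(self, position=(0.0, 0.0, 0.0)):
        self.position = position

    def imagine_at(self, position):
        # Place the self-locus in a modelled, possibly fictional, space:
        # we can represent moving through locations we don't actually occupy.
        return SelfLocus(position)

here = SelfLocus()
there = here.imagine_at((3.0, 0.0, 4.0))  # 'walking' through an imagined room
```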

Trehub suggests that the brain also provides a semantic token system, so that when we see a dog, somewhere in our brain a set of neurons that make up the ‘dog’ token start firing. We also have a token of the self-locus (roughly speaking this amounts to a token of ourselves or something like the idea of ‘me’), which Trehub designates I!  This gives us our sense of identity and pulls together all our tokens and perceptions. It also has an interesting role in the management of our beliefs. In Trehub’s view our beliefs are encoded by tokens in a semantic network as explicit propositions: whether we consider these propositions true or false is determined by a linkage with a special token which indicates the truth value, while a similar linkage to I! determines whether these beliefs belong to us. I presume a linkage to the ‘Fred’ token makes them, in our view, Fred’s beliefs, providing the basis of a ‘theory of mind’.
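The linkage idea can be pictured as a schematic data structure – though this is only my sketch of the logical relationships, with invented names and types, not Trehub’s neural machinery:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    """A semantic token: a (notional) cell assembly standing for a concept."""
    name: str

I_TOKEN = Token("I!")    # the self-token
FRED = Token("Fred")

@dataclass
class Belief:
    """An explicit proposition linked to a truth value and an owner token."""
    proposition: str
    truth: bool      # stands in for the linkage to a truth-value token
    owner: Token     # linkage to I! makes it mine; linkage to Fred, his

beliefs = [
    Belief("dogs have four legs", truth=True, owner=I_TOKEN),
    Belief("the moon is made of cheese", truth=False, owner=I_TOKEN),
    Belief("the moon is made of cheese", truth=True, owner=FRED),  # theory of mind
]

my_beliefs = [b for b in beliefs if b.owner == I_TOKEN]
```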

We now have in place the essentials for the theatre of consciousness: a retinoid-supported model of egocentric space (Trehub calls it egocentric, though it seems to me the self is always just to the side of it looking in), and a prime actor endowed with beliefs and perceptions. Trehub denies that we need an homuncular audience,  a ‘little man in our head’.  He casts us as actors on our own stage; if there is an audience it’s the unconscious parts of the mind. He accepts that the theatre is in a sense illusory – the representations are not the reality they stand in for – but he wishes to keep the baby even at the price of retaining that much bathwater.

So what can we say about all this? First, as a theory of consciousness it’s dominated to an unusual degree by vision and space. This fits Trehub’s view of consciousness: he regards the most basic form as a simple sense of presence in a space, with more complex levels involving an awareness of self and then of everything else. I’m not sure a spatially-based conception fits my idea of consciousness so well. When I examine my own mind I find many of the most salient thoughts and feelings have no location; it’s not that their location is vague or unknown, space just doesn’t come into it. Trehub suggests that when we think of a dog, we’re liable to have an image of a dog in our mind: while we can have one, it doesn’t seem an essential part of the process to me. If I’m thinking of an arbitrary dog, I don’t call up an image, let alone one situated in an imaginary space: if you ask me what colour or breed my arbitrary dog is, I have to make something up in order to answer. (In fact the position is complicated, because reflection suggests there are actually at least half a dozen different ways of thinking about an arbitrary dog: some feature images, some don’t.)

Second, the theory centres on a model of space, but I’m not sure how much having a model does for us. Putting it radically: if you can work out what you’re going to do with your model, why not do that to reality instead and throw the model away? Now of course, models are useful for working on counterfactuals: predicting outcomes, testing contingencies and plans, and so on; but here we’re mainly talking about perception. If we stick to pure perception, doesn’t it seem that the model merely introduces another stage where errors might creep in, a stage which apparently insulates us from reality to the point that our perceptions are actually in some sense illusions? Trehub might respond that the model has evolved for its value in dealing with predictions and planning, but also provides current perception; maybe, but there’s another problem there. The idea that my current perception of the world and my plans about next week are essentially running on the same system is not intuitively appealing – they feel very different.

The inevitable fear with a model-based system is that the real work is being deferred to an implied homunculus; that the model is, in effect, there for a ‘man in our head’ to look at. Trehub of course denies this, but the suspicion is reinforced in this case by the way the model preserves the topology of the retinal image: isn’t it a little odd that, as it were, the process of perceiving a shape should itself have the same shape?

Trehub has a robust response available, however: the evidence shows clearly that the brain does in fact produce models of perceived objects, filling in the missing bits, resolving ambiguities and making inferences. Many optical illusions arise from the fact that the pre-conscious parts of our visual system don’t tell us which bits of the world they’ve hypothesised, but present the whole thing as truly and unambiguously out there. Perception is not, after all, just a simple input procedure but a complex combination of top-down and bottom-up processes in which models can have an essential role. And while I don’t think the retinoid structure specifically has been confirmed by research, so far as my limited grasp of the neurology goes it appears fully consistent with the way things seem to work. Although it may still seem surprising, it’s undeniably the case, for example, that the visual pathways of the brain preserve retinal topology to a remarkable degree. So as a theory of perception at least, the retinoid system is not easily dismissed. As a model of general consciousness, I’m not so sure. Containing a model of x is not, after all, the same thing as being aware of x. We need more.

What about the self, then? It’s natural, given Trehub’s spatial perspective, that he should focus on defining the location of the self, but that only seems to be a small, almost incidental part of our sense of self. Personally, I’m inclined to put the thoughts first, and then identify myself as their origin; I identify myself not by location but by a kind of backward extrapolation to the abstract-seeming origin of my mental activity. This has nothing to do with physical space. Of course Trehub’s system has more to it than mere location, in the special tokens used to signify belonging to me and truth. But this part of the theory seems especially problematic. Why should simply flagging a spatial position and some propositions as mine endow a set of neurons with a sense of selfhood, any more than flagging them as Fred’s? I can easily imagine that location and the same set of propositions being someone else’s, or no-one’s. I think Trehub means that linking up the tokens in this way causes me to view that location as mine and those propositions as my beliefs, but notice that in saying that, I’m smuggling in a self who has views about things and a capacity for ownership; I’ve inadvertently and unconsciously brought in that wretched homunculus after all. For that matter, why would flagging a proposition as a belief turn it into one? I can flag up propositions in various ways on a piece of paper without making them come to intentional life. To believe something you have to mean it, and unfortunately no-one really knows what ‘meaning it’ means – that’s one of the things to be explained by a full-blown theory of consciousness.

Moreover, the system of tokens and beliefs encoded in explicit propositions seems fatally vulnerable to the wider form of the frame problem. We actually have an infinite number of background beliefs (Julius Caesar never wore a top hat) which we’ve never stated explicitly but which we draw on readily, instantly, without having to do any thinking, when they become relevant (this play is supposed to be in authentic costume!): but even if we had a finite set of propositions to deal with, the task of updating them and drawing inferences from them rapidly becomes impossible through a kind of combinatorial explosion. (If this is unfamiliar stuff, I recommend Dennett’s seminal ‘Cognitive Wheels’ paper.) It just doesn’t seem likely nowadays that logical processing of explicit propositions is really what underlies mental activity.
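A back-of-envelope calculation shows how quickly this gets out of hand: if relevance or consistency has to be assessed over subsets of n explicitly stored propositions, there are 2^n subsets to consider, and even pairwise interactions grow quadratically. A few illustrative lines of Python:

```python
# Combinatorial explosion over explicit propositions: for n stored beliefs
# there are 2**n subsets to consider, and n*(n-1)/2 pairwise interactions.
for n in (10, 50, 1000):
    print(f"{n:>5} beliefs: {2**n:.3e} subsets, {n * (n - 1) // 2:,} pairwise checks")
```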

Some important reservations, then; but it’s important not to criticise Trehub’s approach for failing to be a panacea or to provide all the answers on consciousness – that’s not really what we’re being offered. If we take consciousness to mean awareness, the retinoid system offers some elegant and plausible mechanisms. It might yet be that the theatre deserves another visit.

Output consciousness

The analogy with a digital computer has energised and strongly influenced our thinking about the human mind for at least sixty years, beginning with Turing’s seminal paper of 1950, ‘Computing Machinery and Intelligence’, and gaining in influence as computers became first real, and then ubiquitous. Whether or not you like the analogy, I think you’d have to concede that it has often set the terms of the discussion over recent decades. Yet we’ve never got it quite clear, and in some respects we’ve almost always got it wrong.

In particular, I’d like to suggest: consciousness is an output, not processing.

At first sight it might seem that consciousness can’t be an output, on the simple grounds that it isn’t, well, put out. Our consciousness is internal, it goes on in our heads – how could that be an output? I don’t, of course, mean it’s an output in that literal sense of being physically emitted: rather, I mean it’s the final product of a process, in this case a mental process. It may often be retained in our heads, but in some sense it’s the end of the line, the result.

It may be worth noting in passing that consciousness is pretty strongly linked with outputs in the simpler sense, though: so much so that the Turing test is based entirely on the ability of the testee to output strings of characters which gain the approval of the judges. Quality of output is taken to be the best possible sign of the presence of consciousness.

Wait a minute, you may say, consciousness isn’t a final output, it’s surely part of the process: what goes on in our conscious mind feeds back into our further thoughts and our behaviour. That’s the whole point of it, surely; to allow more complex and detached forms of processing to take place so that our true outputs in behaviour will eventually be better planned and targeted?

It’s true that the contents of consciousness may feed back into our mental processes, and that must be at least partly why it exists (its role in forming genuine verbal outputs is probably significant too) – I’m not suggesting consciousness is a mere epiphenomenon, like, as they say, the whistle on a train. Items from consciousness may be inputs as well as outputs. To take an unarguable example, I’ve never managed to remember how many days there are in each month: but I have managed to remember that little rhyme which contains the information. So if I need to know how many days there are in August, I recall the rhyme and repeat it to myself: in this case the contents of my consciousness are helpfully fed back into my mind. Apart from clunky manoeuvres of this kind, though, I think careful introspection suggests consciousness does not feed directly back into the underlying mental processes all that often. If we want to make a decision we may hold the alternatives in mind and present them to ourselves in sequence, but what we’re waiting for is a feeling or a salient piece of reasoning to pop into our minds from some lower, essentially inscrutable process: we’re not normally putting our own thoughts on the subject together by hand. I think Fodor once said he had no conscious access to the mental processes which produced his views on any philosophical issue: if he inspected the contents of his mind while cogitating about a particular problem, all he came up with were sub-articulate thoughts approximately like “Come on, Jerry!” I feel much the same.

With apologies if I’m repeating things I’ve said before, I think it may help if I mention some of the confusions that I think arise from not recognising the output nature of consciousness. A striking example is Dennett’s odd view that consciousness might involve a serial computer simulated on a parallel machine. We know, of course, that when people speak of the brain being ‘massively parallel’ they usually mean that many different functional areas are promiscuously interconnected, something radically different from massively parallel computing in the original sense of a carefully managed set of isolated processes; but Dennett seems to be motivated by an additional misunderstanding in which it is assumed that only a serial process can give rise to a coherent serial consciousness. Not at all: the outputs from parallel and serial processing are identical (they’d better be): it’s just that the parallel approach sometimes gets there quicker.
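The point that serial and parallel routes must agree on the result is easy to demonstrate – the function and data below are of course just my own stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(x):
    """Stand-in for any self-contained unit of processing."""
    return x * x

data = list(range(8))

# One worker, one item at a time...
serial = [evaluate(x) for x in data]

# ...or many workers at once: the output is the same, only the route differs.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(evaluate, data))

assert serial == parallel  # identical outputs; parallel just sometimes gets there quicker
```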

It’s a little unfair to single out Dennett: the same assumption that properties of the underlying process must also be properties of the output consciousness can be discerned elsewhere: it’s just that Dennett is clearer than most. Another striking example might be Libet’s notorious finding that consciousness of a decision arrives some time after the decision itself – but of course it does! The decision is an event in processes of which consciousness is the output.

It’s hard to see consciousness as an output, partly because it can also be an input, but also because we identify ourselves with our thoughts. We want to believe that we ourselves enjoy agency, that we have causal effects, and so we’re inclined to believe that our thoughts are what does the trick – although we know quite well that when we move our arm it’s not thinking about it that makes it happen. This supposed identity of thoughts and self (after all, it’s because I think, that I am, isn’t it?) is so strong that some, failing to find in their thoughts anything but fleeting bundles of momentary impressions, have concluded there is no self after all. I think that level of scepticism is unwarranted: it’s just that our selves remain inscrutably shadowed to direct conscious observation. “Know thyself”, said the inscription on the temple of the Delphic oracle – alas, ultimately we can’t.