
Regular readers will be sad to hear that Arnold Trehub, a fairly regular contributor to discussion on Conscious Entities, died in April.

It’s not often these days, it seems to me, that a ‘theatre of consciousness’ is proposed or defended: if the idea is mentioned, it’s more commonly to dismiss it or to distance a theory from it. All the more credit, then, to Arnold Trehub, whose Retinoid theory is just that, and a bold and sweeping example, too.

A retinoid system, at its simplest, is an array of neurons which captures and retains a pattern of activity in the retina, the sensitive layer at the back of the eye which translates light into neuronal activity. The retinoid array retains the topology of the original pattern, and because the neurons are autaptic, i.e. they re-stimulate themselves, that pattern can be retained for a while. Naturally we need more than one such array to do anything much, and in fact the proposition is that a series of them act together to form a model of three-dimensional space. That makes it sound a simple matter, but of course it isn’t. Trehub proposes a series of mechanisms by which raw activity in the retina can be translated into a stable, three-dimensional model, making allowance, for example, for the way our foveas (the small central parts of the retinas that do most of the important work – our vision is only sharp in a tiny central area of the visual field) sweep restlessly but selectively across the scene.
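The self-sustaining array can be sketched in a few lines of code. This is purely my own toy illustration of the autaptic idea – the class name, decay constant and threshold are invented for the sketch, not taken from Trehub’s model:

```python
# Toy sketch of a single autaptic retinoid array: each cell, once
# stimulated, re-excites itself so the pattern persists after the
# input is removed. All parameters here are illustrative.

class RetinoidArray:
    def __init__(self, rows, cols, decay=0.9, threshold=0.1):
        self.decay = decay          # self-excitation fades gradually
        self.threshold = threshold  # below this, a cell goes quiet
        self.activity = [[0.0] * cols for _ in range(rows)]

    def stimulate(self, pattern):
        """Capture a retinal pattern (same topology as the input)."""
        for r, row in enumerate(pattern):
            for c, value in enumerate(row):
                self.activity[r][c] = max(self.activity[r][c], value)

    def step(self):
        """One time step: cells re-stimulate themselves, with decay."""
        for row in self.activity:
            for c, value in enumerate(row):
                value *= self.decay
                row[c] = value if value >= self.threshold else 0.0

    def snapshot(self):
        """Binary view of which cells are still active."""
        return [[1 if v > 0 else 0 for v in row] for row in self.activity]

array = RetinoidArray(3, 3)
array.stimulate([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]])
for _ in range(5):
    array.step()
# Five steps after the input vanished, snapshot() still shows the
# cross: activity has decayed to 0.9**5 ≈ 0.59, above the threshold.
```

The point of the sketch is only that the topology of the input survives in the array after the stimulus is gone; Trehub’s proposals for combining many such arrays into a stable 3D model are, of course, far richer than this.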

Those proposals look ingenious and plausible. I wonder a bit about what the firing of a neuron in the retinoid model of space actually represents, in the end. It isn’t simply patterns of retinal activity any more, because the original patterns have been amended and extended – so you might argue that the system is not as retinoid as all that. It’s tempting to think that each point of activity is a bit like a voxel, a three-dimensional pixel or a single tiny brick in a model world, but I’m not sure that’s right either. It certainly goes beyond that, because Trehub wants to bring in all the sensory modalities. How a spatial model works in this respect is not fully clear to me – I’m not sure how we deal with the location of smells, for example, something that seems inherently vague in reality. Perhaps smells are always registered in the retinoid as situated within the nose, although that doesn’t seem to capture the experience altogether; and what about hearing? It’s a common enough experience to hear a sound without having any clear idea of its point of origin, but it won’t quite do to say that sounds are located in the head either. I don’t know how “it sounds as if it’s coming from over there” is represented in a spatial retinoid model. If we’re going on to deal with the whole world of mental phenomena, we’re clearly going to run into more problems over location – where are my meditations, my emotions, my intimations of immortality, my poignant apostrophisation of the snows of yesteryear?

Another key element in the system is a self-locus retinoid whose job is essentially to record the presumed location of the self in experienced space (and also, if I’ve understood correctly, imagined space, because we can use other retinoid structures to model non-existent spaces, or represent to ourselves our moving through locations we don’t actually occupy). This provides the basis for our sense of self, but importantly there’s much more to it than that.

Trehub suggests that the brain also provides a semantic token system, so that when we see a dog, somewhere in our brain a set of neurons that make up the ‘dog’ token start firing. We also have a token of the self-locus (roughly speaking this amounts to a token of ourselves or something like the idea of ‘me’), which Trehub designates I!  This gives us our sense of identity and pulls together all our tokens and perceptions. It also has an interesting role in the management of our beliefs. In Trehub’s view our beliefs are encoded by tokens in a semantic network as explicit propositions: whether we consider these propositions true or false is determined by a linkage with a special token which indicates the truth value, while a similar linkage to I! determines whether these beliefs belong to us. I presume a linkage to the ‘Fred’ token makes them, in our view, Fred’s beliefs, providing the basis of a ‘theory of mind’.
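The linkage scheme can be caricatured in code. A minimal sketch, with invented names throughout (`SemanticNetwork`, the `TRUE` token and the string propositions are all my illustration; in Trehub’s model these are populations of neurons, not symbols):

```python
# Toy caricature of the token-linkage idea: a proposition counts as
# one of my true beliefs only if its token is linked both to a truth
# token and to the self token I!. All names here are illustrative.

class SemanticNetwork:
    def __init__(self):
        self.links = {}  # proposition -> set of linked tokens

    def link(self, proposition, token):
        """Connect a proposition's token to another token."""
        self.links.setdefault(proposition, set()).add(token)

    def believed_by(self, proposition, owner):
        """Flagged as true, and flagged as belonging to `owner`."""
        tokens = self.links.get(proposition, set())
        return "TRUE" in tokens and owner in tokens

net = SemanticNetwork()
net.link("dogs bark", "TRUE")
net.link("dogs bark", "I!")       # my belief, flagged true
net.link("snow is warm", "TRUE")
net.link("snow is warm", "Fred")  # Fred's belief, not mine
```

On this picture, swapping the I! linkage for a ‘Fred’ linkage is all it takes to represent someone else’s belief – which is just the rudimentary theory of mind suggested above.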

We now have in place the essentials for the theatre of consciousness: a retinoid-supported model of egocentric space (Trehub calls it egocentric, though it seems to me the self is always just to the side of it looking in), and a prime actor endowed with beliefs and perceptions. Trehub denies that we need an homuncular audience,  a ‘little man in our head’.  He casts us as actors on our own stage; if there is an audience it’s the unconscious parts of the mind. He accepts that the theatre is in a sense illusory – the representations are not the reality they stand in for – but he wishes to keep the baby even at the price of retaining that much bathwater.

So what can we say about all this? As a theory of consciousness it’s dominated to an unusual degree by vision and space. This fits Trehub’s view of consciousness: he regards the most basic form as a simple sense of presence in a space, with more complex levels involving an awareness of self and then of everything else. I’m not sure a spatially-based conception fits my idea of consciousness so well. When I examine my own mind I find many of the most salient thoughts and feelings have no location; it’s not that their location is vague or unknown, space just doesn’t come into it. Trehub suggests that when we think of a dog, we’re liable to have an image of a dog in our mind: while we can have one, it doesn’t seem an essential part of the process to me. If I’m thinking of an arbitrary dog, I don’t call up an image, let alone one situated in an imaginary space: if you ask me what colour or breed my arbitrary dog is, I have to make something up in order to answer. (In fact the position is complicated, because reflection suggests there are actually at least half a dozen different ways of thinking about an arbitrary dog: some feature images, some don’t.)

Second, the theory centres on a model of space, but I’m not sure how much having a model does for us. Putting it radically: if you can work out what you’re going to do with your model, why not do that to reality instead and throw the model away? Now of course, models are useful for working on counterfactuals: predicting outcomes, testing contingencies and plans, and so on – but here we’re mainly talking about perception. If we stick to pure perception, doesn’t it seem that the model merely introduces another stage where errors might creep in, a stage which apparently insulates us from reality to the point that our perceptions are actually in some sense illusions? Trehub might respond that the model has evolved for its value in dealing with predictions and planning, but also provides current perception; maybe, but there’s another problem there. The idea that my current perception of the world and my plans about next week are essentially running on the same system is not intuitively appealing – they feel very different.

The inevitable fear with a model-based system is that the real work is being deferred to an implied homunculus; that the model is, in effect, there for a ‘man in our head’ to look at. Trehub of course denies this, but the suspicion is reinforced in this case by the way the model preserves the topology of the retinal image: isn’t it a little odd that, as it were, the process of perceiving a shape should itself have the same shape?

Trehub has a robust response available, however: the evidence shows clearly that the brain does in fact produce models of perceived objects, filling in the missing bits, resolving ambiguities and making inferences. Many optical illusions arise from the fact that the pre-conscious parts of our visual system don’t tell us which bits of the world they’ve hypothesised, but present the whole thing as truly and unambiguously out there. Perception is not, after all, just a simple input procedure but a complex combination of top-down and bottom-up processes in which models can have an essential role. And while I don’t think the retinoid structure specifically has been confirmed by research, so far as my limited grasp of the neurology goes, it seems to be fully consistent with the way things seem to work. Although it may still seem surprising, it’s undeniably the case, for example, that the visual pathways of the brain preserve retinal topology to a remarkable degree. So as a theory of perception at least, the retinoid system is not easily dismissed. As a model of general consciousness, I’m not so sure. Containing a model of x is not, after all, the same thing as being aware of x. We need more.

What about the self, then? It’s natural that given Trehub’s spatial perspective he should focus on defining the location of the self, but that only seems to be a small, almost incidental part of our sense of self.  Personally, I’m inclined to put the thoughts first, and then identify myself as their origin; I identify myself not by location but by a kind of backward extrapolation to the abstract-seeming origin of my mental activity. This has nothing to do with physical space.  Of course Trehub’s system has more to it than mere location, in the special tokens used to signify belonging to me and truth. But this part of the theory seems especially problematic.  Why should simply flagging a spatial position and some propositions as mine endow a set of neurons with a sense of selfhood, any more than flagging them as Fred’s?  I can easily imagine that location and the same set of propositions being someone else’s, or no-one’s. I think Trehub means that linking up the tokens in this way causes me to view that location as mine and those propositions as my beliefs, but notice that in saying that I’m smuggling in a self who has views about things and a capacity for ownership;  I’ve inadvertently and unconsciously brought in that wretched homunculus after all.  For that matter, why would flagging a proposition as a belief turn it into one? I can flag up propositions in various ways on a piece of paper without making them come to intentional life. To believe something you have to mean it, and unfortunately no-one really knows what ‘meaning it’ means – that’s one of the things to be explained by a full-blown theory of consciousness.

Moreover, the system of tokens and beliefs encoded in explicit propositions seems fatally vulnerable to the wider form of the frame problem. We actually have an infinite number of background beliefs (Julius Caesar never wore a top hat) which we’ve never stated explicitly but which we draw on readily, instantly, without having to do any thinking, when they become relevant (This play is supposed to be in authentic costume!): but even if we had a finite set of propositions to deal with, the task of updating them and drawing inferences from them rapidly becomes impossible through a kind of combinatorial explosion. (If this is unfamiliar stuff, I recommend Dennett’s seminal ‘Cognitive Wheels’ paper.) It just doesn’t seem likely nowadays that logical processing of explicit propositions is really what underlies mental activity.
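The scale of that explosion is easy to show with a couple of back-of-the-envelope figures (the numbers below are mine, chosen only to make the point vivid):

```python
from math import comb

# If an update must check a new fact against every pair of existing
# beliefs for relevance or conflict, the work grows quadratically;
# checking every subset of beliefs grows as 2**n, hopeless even for
# tiny belief sets.

def pairwise_checks(n):
    """Number of belief pairs to examine."""
    return comb(n, 2)

def subset_checks(n):
    """Number of subsets of n beliefs."""
    return 2 ** n

print(pairwise_checks(1_000))  # 499,500 pairs for a mere 1,000 beliefs
print(subset_checks(50))       # over 10**15 subsets for just 50 beliefs
```

And a realistic believer holds vastly more than a thousand explicit propositions – before we even get to the infinite background beliefs that were never stated at all.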

Some important reservations, then; but it’s important not to criticise Trehub’s approach for failing to be a panacea or to provide all the answers on consciousness – that’s not really what we’re being offered. If we take consciousness to mean awareness, the retinoid system offers some elegant and plausible mechanisms. It might yet be that the theatre deserves another visit.