Gerald Edelman - selection and re-entry

Gerald Edelman

Recategorisation                 

Blandula: Gerald Edelman's theories are rooted in neurology. In fact, he insists that this is the only foundation for a successful theory of consciousness: the answers are not to be found in quantum physics, philosophical speculation, or computer programming.

The structure of the brain is accordingly a key factor. The neurons in the brain wire themselves up in complex and idiosyncratic patterns during growth and subsequent experience: no two people are wired the same way. The neurons do come to compose a number of structures, however. They form groups which tend to fire together, and for Edelman these groups are the basic operating unit of the brain. The other main structures are maps. An uncontroversial example here might be the way some sheets of neurons reproduce the pattern of activity on the retina at the back of the eye (with some stretching and squashing), but Edelman sees similar structures as applying much more widely, mapping not just sensory inputs but also each other and other kinds of neuronal activity. The whole system is bound together by re-entrant connections: sets of paths which provide parallel connections from group A to group B and from group B back to group A.

The principle which makes this structure work is Neuronal Group Selection, or Neural Darwinism. Some patterns are reinforced by experience, while many others are eliminated in a selective process which resembles evolution. Edelman draws an analogy with the immune system, which produces a huge variety of random antibodies: those which link successfully to a foreign substance reproduce rapidly. This explains how the body can quickly produce antibodies for substances it has never encountered before (and indeed for substances which never existed in the previous history of the planet): and in an analogous way the Theory of Neuronal Group Selection (TNGS) explains how the brain can recognise objects in the world without having a huge inherited catalogue of patterns, and without an homunculus to do the recognising for it.  
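
To make the selectionist logic concrete, here is a toy sketch in Python (purely illustrative - the patterns, sizes, and reinforcement rule are invented for the example and are not Edelman's model): a large random repertoire exists before any stimulus arrives, the member that happens to match a novel stimulus best is selected, and selection strengthens it for future use.

```python
import random

random.seed(1)

PATTERN_LENGTH = 16
REPERTOIRE_SIZE = 5000    # a large repertoire generated before any stimulus is seen

def random_pattern():
    return [random.randint(0, 1) for _ in range(PATTERN_LENGTH)]

# The pre-existing repertoire: many variant patterns, none designed for any
# particular input, each with a strength that selection can adjust.
repertoire = [{"pattern": random_pattern(), "strength": 1.0}
              for _ in range(REPERTOIRE_SIZE)]

def match_score(pattern, stimulus):
    """How well a pattern fits a stimulus (simple bit-by-bit agreement)."""
    return sum(p == s for p, s in zip(pattern, stimulus))

def recognise(stimulus):
    """Select the best-fitting member of the repertoire and amplify it."""
    best = max(repertoire,
               key=lambda g: g["strength"] * match_score(g["pattern"], stimulus))
    best["strength"] *= 1.5    # differential reinforcement of what worked
    return best

# Even a completely novel stimulus finds a good match somewhere in the
# repertoire; repeated encounters make that response increasingly dominant.
novel_stimulus = random_pattern()
for _ in range(3):
    winner = recognise(novel_stimulus)
print(match_score(winner["pattern"], novel_stimulus), winner["strength"])
```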

The re-entrant connections between neuronal groups in different parts of the brain co-ordinate impressions from the different senses to provide a coherent, consistent, continuous experience; but re-entry is also the basic mechanism of recategorisation, the fundamental process by which the brain carves up the world into different things and recognises those it has encountered before. The word 'recategorisation' is potentially confusing here for two reasons. First, it is not to be taken as implying the existence of a prior set of categories: in fact, every act of recognition modifies the category. Second, it is not meant to suggest any parallel with Kant's categories, which limit how we can understand the world - very much the reverse, in fact.
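
Re-entry itself can be pictured with an equally crude sketch (again purely illustrative, with made-up numbers): two 'maps' each receive their own input and are also reciprocally connected, so that repeated exchange of signals pulls them into a mutually consistent state - a cartoon of how parallel, two-way connections could bind separate streams into one coherent picture.

```python
# Two 'maps', each driven by its own input and, via re-entrant paths,
# by the other map's current state (all numbers invented for illustration).
visual_input  = [0.9, 0.1, 0.8, 0.2]
tactile_input = [0.7, 0.3, 0.9, 0.1]

map_a = list(visual_input)
map_b = list(tactile_input)

REENTRY = 0.3    # how strongly each map is pulled toward the other

for _ in range(20):
    # Parallel, reciprocal update: A listens to B while B listens to A.
    new_a = [(1 - REENTRY) * x + REENTRY * y for x, y in zip(visual_input, map_b)]
    new_b = [(1 - REENTRY) * x + REENTRY * y for x, y in zip(tactile_input, map_a)]
    map_a, map_b = new_a, new_b

# After repeated exchange the two maps agree far more closely than the raw
# inputs did - a crude stand-in for a coherent, bound percept.
print([round(v, 2) for v in map_a])
print([round(v, 2) for v in map_b])
```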

Edelman attaches great importance to higher-order processes - concepts are maps of maps, and arise from the brain's recategorising its own activity. Concepts by themselves only constitute primary (first-order) consciousness: human consciousness also features secondary consciousness (concepts about concepts), language, and a concept of the self, all built on the foundation of first-order concepts. 

The final key idea in the theory, another one with a slightly misleading name, is value, a word used here to describe inbuilt tendencies towards particular behaviour. These forms of behaviour may be driven by what we value in a fairly straightforward sense - seeking food, for example - but they also include such inherent actions as the hand's natural tendency to grasp. Edelman seems to think that, left to itself, the brain, like a computer, might sit and do nothing. It's the value systems which supply the basic drives. This sort of set-up has been modelled in a series of robots rather cheekily named Darwin I to IV.
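
The role of value can be sketched in the same toy style (loosely inspired by descriptions of value-dependent learning in the Darwin robots, but not taken from their actual design): an inbuilt value signal, rather than any explicit goal, decides which behaviours get reinforced, and so supplies the basic drive.

```python
import random

random.seed(2)

# An inbuilt value signal: a crude, unlearned 'more of this, please'.
def value_signal(outcome):
    return 1.0 if outcome == "grasped" else 0.0

# Candidate behaviours and their current strengths.
behaviours = {"reach_and_grasp": 1.0, "wave": 1.0, "do_nothing": 1.0}

def outcome_of(behaviour):
    # Toy world: grasping sometimes succeeds; the other behaviours never do.
    if behaviour == "reach_and_grasp" and random.random() < 0.7:
        return "grasped"
    return "nothing"

for _ in range(50):
    # Choose a behaviour in proportion to its current strength...
    chosen = random.choices(list(behaviours), weights=list(behaviours.values()))[0]
    # ...and let the value signal, not an explicit goal, decide what is reinforced.
    behaviours[chosen] += value_signal(outcome_of(chosen))

# Typically 'reach_and_grasp' comes to dominate, driven only by the value system.
print(max(behaviours, key=behaviours.get))
```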

Edelman is emphatically opposed to the idea that the brain is a computer, however.


Bitbucket: Being anti-computationalist but using robots to support your theory seems a little strange. It needn't be strictly contradictory, of course, but it does expose the curious fact that while Edelman insists the brain is not a computer, all the processes he describes seem perfectly capable of computerisation. He gives two reasons for not considering the brain a computer: one, that individual brains are wired up in very different ways; and two, that reality is not an orderly program feeding into the brain. Neither of these is convincing. Computers can differ enormously in physical detail while remaining essentially the same - how much similarity is there between a PC and a model Turing machine, for example? - and wiring differences between brains might perhaps count as differences in pre-loaded software rather than anything more fundamental. Certainly reality does not structure itself like a program, but why should it? The analogy is with data, not with the program: you have to think of the brain as a computer which has its software loaded already and is dealing with the data coming down a wire from cameras (eyes), microphones (ears), and so on. I see no problem with that.


Blandula: The argument is a bit more specific than you make out. Edelman points out that the selective processes he has in mind have an unusual feature he calls 'degeneracy' (I'm not quite sure why). Degeneracy means that the same output can be reached in a whole range of different ways. This is a feature of the immune system as well as mental processes, but it doesn't look much like an algorithm. Of course there are other arguments against considering the brain a computer, but I think Edelman's main point is that to deal with reality, you have to be able to arrange the streams of mixed-up and ever-changing data from the senses into coherent objects. Your computer with a camera attached finds this impossible except in cases where the 'reality' has been made artificially simple - a 'toy world' - and the computer has been set up in advance with lots of information about how to recognise the objects in the 'toy world'. I know you're going to tell me that great strides have been made, and that you only need another couple of decades and it'll all be sorted.


Bitbucket: I wasn't, though it's true. I was just going to point out again that, however difficult it may be to digest reality, Edelman gives us no definite reasons to think computers couldn't do it; his robots even demonstrate some aspects of the methods he thinks most likely. But never mind. You expect me to attack Edelman just because he and Searle have spoken favourably of each other: but actually I've got nothing much against him except that I think he's misunderstood the nature of computationalism. Just because we haven't got USB ports in the back of our heads, it doesn't mean brain activity isn't computable.

As for that bit about 'degeneracy', I don't see it at all. Imagine we had a job we wanted done by computer - we call in a hundred consultants to tender for the project. They'll find a hundred different ways to do it. Even if we set aside most of the possible variation - whether to use PCs, Macs, Unix boxes or what; Java, C++, Visual Basic or whatever - even if we assume the required outputs are narrowly defined and all the tenderers have to code in bog-standard C, there'll still be thousands of variations. So I reckon computers can be degenerate too...
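
Here's a trivial sketch of what I mean (in Python rather than C, purely for illustration, and nothing to do with Edelman's own examples): two routines built in quite different ways which always produce exactly the same outputs.

```python
# Two structurally different implementations with identical outputs -
# 'degenerate' in the sense under discussion.
def triangle_recursive(n):
    """Sum 1..n by recursion."""
    return 0 if n == 0 else n + triangle_recursive(n - 1)

def triangle_closed_form(n):
    """Sum 1..n by the closed-form formula."""
    return n * (n + 1) // 2

# The same output reached by entirely different routes.
assert all(triangle_recursive(n) == triangle_closed_form(n) for n in range(200))
```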


Blandula: I don't expect you to attack Edelman at all. As a matter of fact, I'm not an unqualified admirer myself. Take his views on qualia. The temptation for a scientist is always to miss the point about qualia and end up explaining the mechanics of perception instead (a different issue). Edelman, in spite of his scientific bias, is not philosophically naive, and a lot of the time he seems to understand the point perfectly. But in 'A Universe of Consciousness' he swerves at the last minute and ends up talking about how the neurons could map out a colour space - which might be interesting, but it ain't qualia. Perhaps his co-author is to blame.

However, I'm with him on the computer issue. Edelman's views about selection illustrate exactly why computers can't do what the brain does. I think his ideas on this are really important and have possibly been undersold a bit. The thing about programming a computer to deal with real situations is that you have to anticipate every possible kind of problem it might come up against - but there are an infinite number of different kinds of problem. Now this is exactly the kind of issue the immune system faced: it had to be ready to deal with any molecule whatever, no matter how novel. The solution is analogous: the immune system fills your body with a really vast number of variant antibodies, and your brain is full of an astronomical number of different neuronal patterns. When the problem comes along, even a completely novel one, you're going to have the correct response waiting somewhere, and the one that matches gets reinforced and reused. Edelman called this a Darwinian process; it isn't really (hence Crick's joke about it really being 'neural Edelmanism'), but the remarkable thing is that it might be better than Darwinian in this context!


Bitbucket: Anything's better than Darwin to you, up to and including spontaneous generation and Divine Creation.


Blandula: Nonsense! But, honestly. It's not particularly original to suggest that the mind might use selective or Darwinian mechanisms (or be infected with memes evolved in the memosphere), but normal Darwinian selection is just obviously not the answer. When we confront a sabre-toothed tiger or think what to say to a question in an interview, we don't start by copying some earlier response, try it out repeatedly, and gradually refine it by random mutation. We don't even do that in our heads, normally. 99% of the time the response is instant and appropriate, with nothing random about it at all. It's a bit easier to understand how this could be so on the Edelman theory, because some reasonably appropriate responses could already be sloshing around in the brain and the best one could be reinforced very quickly.


Bitbucket: I think you're going further on that than Edelman himself would be inclined to do. In fact, I'll give you a prediction. Eventually, Edelman himself will come round to the view that there is nothing unique about all these processes, and that while the brain may not be literally a computer, its processes are computable.


Blandula: I think not. You ought to remember what the man said himself about changes of heart - the unit of selection in successful theory creation is usually a dead scientist...


Read:

"Bright Air, Brilliant Fire"
The most accessible account of the TNGS. Although Edelman's theory is very much a matter of neurons, the details are balanced with some philosophical and other general background and lightened with the occasional joke. 

"Neural Darwinism", "Topobiology", and "The Remembered Present"  
This trio together make up the authoritative account of Edelmanism. If you want the full version, this is it.

"A Universe of Consciousness" 
Written with Giulio Tononi. Same theory, narrower focus, different emphasis. Those who think qualia are the real issue, or who have a particular interest in the binding problem, might prefer this version.
 

"The Mystery of Consciousness" 
This collection of pieces by Searle about noteworthy conscious entities includes a friendly summary of Edelman, whose emphasis on biology is congenial to Searle.

"Wider than the Sky: The Phenomenal Gift of Consciousness" 
Edelman's latest book focuses on phenomenal experience, but it summarises a lot of his thinking in a relatively small space - perhaps a good place to start on Edelman or get a better idea of what he's on about.

Some Links:
Short biography: from a conference site.
Page at Princeton - complete with mugshot of the youthful Edelman

Review of 'A Universe of Consciousness' - from the Guardian. Rather unenthusiastic, but some sound points.