Gerald Edelman has died, at the age of 84. He won his Nobel prize for work on the immune system, but we’ll remember him as the author of the Theory of Neuronal Group Selection (TNGS), or ‘Neural Darwinism’.

Edelman was prominent among those who emphasise the limits of computation: he denied that the brain was a computer and did not believe computers could ever become conscious…

In considering the brain as a Turing machine, we must confront the unsettling observations that, for a brain, the proposed table of states and state transitions is unknown, the symbols on the input tape are ambiguous and have no preassigned meanings, and the transition rules, whatever they may be, are not consistently applied. Moreover, inputs and outputs are not specified by a teacher or a programmer in real-world animals. It would appear that little or nothing of value can be gained from the application of this failed analogy between the computer and the brain.

He was not averse to machines in general, however, and was happy to use robots for parts of his own research. He drew a distinction between perception, first-order consciousness, and higher-order consciousness; the first could be attained by machines we could build now; the second might very well be possible for machines of the right kind eventually – but there was much to be done before we could think of trying it. Even higher-order consciousness might be attainable by an artefactual machine in principle, but the prospect was so remote it was pointless to spend any time thinking about it.

There may seem to be a slight tension here: Turing machines are ruled out, but machines of another kind are ruled in. Yet isn’t the whole point of a Universal Turing Machine that it can do anything any machine can do?

For Edelman the point was that the brain required biological thinking, not just concepts from physics or engineering. In particular he advocated selective mechanisms like those in Darwinian evolution. Instead of running an algorithm, the brain offered up a vast range of neuronal arrays, some of which were reinforced and so survived to exert more influence subsequently. The analogy with Darwinian evolution is not precise, and Francis Crick famously said the whole thing could better be called ‘Neural Edelmanism’ (no-one so bitchy as a couple of Nobel prize-winners).
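The selectionist dynamic described above can be sketched as a toy simulation (purely illustrative, not Edelman’s actual model; the patterns, population size, and mutation rate here are invented for the example): a population of random candidate ‘groups’, of which the ones that happen to respond usefully are reinforced and copied, while the rest fade away.

```python
import random

random.seed(0)

# Purely illustrative toy (not Edelman's model): candidate "neuronal
# groups" are random binary response patterns; those that happen to
# match the current input survive and spawn slightly varied copies.
target = [1, 0, 1, 1, 0]
groups = [[random.randint(0, 1) for _ in target] for _ in range(50)]

def fitness(group):
    # How well this group's response pattern matches the input.
    return sum(g == t for g, t in zip(group, target))

initial_best = max(fitness(g) for g in groups)

for _ in range(20):
    # Selection: the better half survives unchanged...
    groups.sort(key=fitness, reverse=True)
    survivors = groups[:25]
    # ...and is copied with occasional variation; the rest fade away.
    variants = [[bit ^ (random.random() < 0.05) for bit in g]
                for g in survivors]
    groups = survivors + variants

final_best = max(fitness(g) for g in groups)
print(initial_best, final_best)
```

Note that nothing here runs a pre-specified algorithm for the task: useful patterns are selected from variation after the fact, which is the point of the analogy.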

Edelman was in fact drawing on a different analogy, one with the immune system he understood so well. The human immune system has to react quickly to invading infections, synthesising antibodies to new molecules it has never encountered before; in fact it reacts just as effectively to artificial molecules synthesised in the lab, ones that never existed in nature. For a long time it was believed that the system somehow took an impression of the invaders’ chemistry and reproduced it; in fact what it does is develop a vast repertoire of variant molecules; when one of them happens to lock into an invader it then reproduces vigorously and produces more of itself to lock into other similar molecules.
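That clonal-selection principle can be caricatured in a few lines (a heavily simplified toy; the ‘shapes’, alphabet, and thresholds are invented for illustration): the system maintains a large pre-existing repertoire of random ‘antibody’ shapes, and whichever ones happen to bind the antigen are cloned, without any impression of the invader ever being taken.

```python
import random

random.seed(1)

# Toy sketch of clonal selection (heavily simplified): a pre-existing
# repertoire of random "antibody" shapes, generated with no knowledge
# of any particular antigen.
ALPHABET = "abc"
antigen = "abcaba"
repertoire = ["".join(random.choice(ALPHABET) for _ in antigen)
              for _ in range(200)]

def affinity(antibody):
    # Number of positions at which the antibody fits the antigen.
    return sum(a == b for a, b in zip(antibody, antigen))

# No impression of the invader is taken: the system simply expands
# (clones) whichever pre-existing variants already bind reasonably well.
binders = [ab for ab in repertoire if affinity(ab) >= 4]
expanded = repertoire + binders * 20

print(len(binders), len(expanded))
```

Because the repertoire is generated blind, the same mechanism works just as well against a lab-synthesised molecule the system has never encountered, which is the property Edelman wanted to import into the brain.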

This looks like a useful concept and I think Edelman was right to think it has a role to play in the brain: but working out quite how is another matter. Edelman himself built a novel idea of recategorisation based on the action of re-entrant loops; this part of the theory has not fared very well over the years. The NYT obituary quotes Gunther Stent, who once said that, as professor of molecular biology and chairman of the neurobiology section of the National Academy of Sciences, he should have understood Edelman’s theory – but didn’t.

At any rate, we can see that Edelman believed that when a conscious machine was built in the distant future it would be running a selective system of some kind; one that we could well call a machine in everyday terms, though not in the Turing sense. He just might be vindicated one day.


9 Comments

  1. Jorge says:

    You’re absolutely right in pointing out that a UTM can emulate any machine, including a selection-driven neural network like the one Edelman proposes.

    It’s also worth pointing out that it seems to me (at least from a woefully inadequate layman’s reading) that modern advances in deep learning and Bayesian modeling are extremely analogous to what Edelman was proposing.

    The main issue here is that (to my knowledge) nowhere did Edelman offer a structure-function analysis of *why* re-entrance would lead to anything resembling our inner phenomenology.

  2. Arnold Trehub says:

    I agree with Jorge. Any theory that purports to explain consciousness must show how subjectivity is a natural consequence of its biophysical properties.

  3. Vicente says:

    Arnold,

    Yes, but even in your retinoid space model, the central properties are functional and rely on the system architecture (matrix) that supports the flow of information (signal/potential propagation), not really on the biophysical processes, strictly speaking.

    I remember you referring to some research aiming to develop a retinoid space device.

    What, really, are the underlying physiological processes indispensable for giving rise to consciousness?

    If the functional features are preserved, why couldn’t the biophysical layer be replaced by electronics?

  4. Arnold Trehub says:

    Vicente,

    We are not omniscient, so all theoretical models are incomplete. The primitive components in the retinoid model are taken to be real neurons with at least the properties specified in the theory. The functional features of the retinoid system are critically dependent on the theoretically specified properties of neurons, as well as on properties as yet unknown. What is significant is that we can explain and predict previously inexplicable human conscious events on the basis of the limited structure and dynamics of the theoretical model. It remains to be seen if a non-neuronal artifact can exhibit behavior that would convince us that it was conscious, not just intelligent. See my comments on ResearchGate about this.

  5. Philosopher Eric says:

    Peter your Gerald Edelman page, though respectful, was also quite critical of his ideas. The more I consider it, the more I’m able to really appreciate how well you seem to understand that his approach was not all that productive. So now given your specific understandings, I also wonder if you’ll be able to see what is right (as I see it of course) with my own approach?

    I see Edelman’s work mainly as “just a bit of neurology/cognition.” I don’t actually care if something like “Theory of Neuronal Group Selection” helps the brain recognise objects in the world without having a huge inherited catalogue of patterns. If it proves useful for such scientists to think of us this way somewhat, then “great.” Similarly if “re-entrant connections” and “recategorisation” provide various logical answers, then I say “lovely.” (But I must nevertheless agree with your position that such things most certainly should not “explain consciousness.”)

    To now get to what I see as the real issue however, what in my opinion, is it that permits the human to consciously address “infinite” varieties of situation, without specific “programming” from which to do so? What, most essentially, is it that gives us our autonomy? As I see it the “engine” which drives consciousness is a punishment/reward dynamic, known publicly as “sensations,” though academically as “qualia.” All reality that has no “sensations” (as I define this term), exists without “relevance,” or “significance,” or “importance,” in a personal sense. But then with this “good/bad” feature, existence gains a fundamentally different kind of “self” dynamic from which to potentially function. Apparently evolution gave many varieties of life (and perhaps even “insects”) this consciousness/self dynamic, in order to facilitate autonomy.

    So then how might we effectively deal with “infinite varieties of circumstances,” given our sensations? Sensory input creates personal incentive for the conscious mind to build associated “scenarios” about reality, so that it can potentially figure out how best to proceed. Therefore I define “thought” as both “the interpretation of inputs” and “the construction of associated scenarios,” where scenarios of reality are built in order to figure out our sensation based interests. (As I learned in your recent “Measuring Consciousness” discussion, my own “interpretation of inputs” is kind of like Ned Block’s “phenomenal consciousness,” while “the construction of scenarios” is kind of like his “access consciousness.” Of course he doesn’t actually use my theory, though I do still enjoy his definition’s conformity with it.)

    I leave this here now for you and your readers to ponder and potentially question me about. In a recent email discussion with Sergio Graziosi (who contributed to the last discussion), he speculated that perhaps I only “stand upright” here (to use a pole in the ground analogy) because no one has actually made much of an attempt to topple me so far. And while I do believe that “this pole” does extend far into the earth, how might others be sure of this given that, for whatever the reason, few seem inclined to actually test my depth? I nevertheless hope to overcome this problem, and if discussion is too public here, emailed questions and comments will certainly be appreciated as well.

  6. Peter says:

    Eric, much as we’ve enjoyed your contributions, I think if you really only want to discuss your own theories your own site might be the best place to do it?

  7. Philosopher Eric says:

    I’ve had a full day to consider leaving Conscious Entities behind, and this seems to have put me into a depression that I simply did not expect. At the beginning my discussions gave me hope, though the more it became clear that others were not being swayed by my arguments, perhaps these positive sensations shifted to the equally positive (but vengeful) “theory of mind” sort? As an outside observer, Sergio Graziosi did recently inform me about this plight. He asked me to consider how many people must have been “right” (not that he concedes that I am) but nevertheless failed because they couldn’t properly convey their ideas? Yes I do think a great number. If I am ever to do something more than fail, an extreme change to my approach will be required. Fortunately he has agreed, for the moment at least, to mentor me.

    Goodbye “Conscious Entities.” While I can’t say that I leave it better than I found it, it is indeed impressive!

    Eric

  8. Richard J.R. Miles says:

    I hope your mentor works for you Eric.

  9. john davey says:

    A UTM is a universal model of a mathematical machine, not a universal machine full stop. A Turing machine wouldn’t function very well as a train, or as a bread maker. There are unlimited information outputs from a Turing machine, but consciousness is a physical output of the brain, not information. When the physical causes of consciousness are created by a machine, artificial consciousness will follow. It follows that true artificial thought will be generated by a biologist, not a computer guru.
