Yes, Dennett has recanted. Alright, he hasn’t finally acknowledged that Jesus is his Lord and Saviour. He hasn’t declared that qualia are the real essence of consciousness after all. But his new book From Bacteria to Bach and Back does include a surprising change of heart.

The book is big and complex: to be honest it’s a bit of a ragbag (and a bit of a curate’s egg). I’ll give it the fuller review it deserves another time, but this interesting point seems worth addressing separately. The recantation arises from a point on which Dennett has changed his mind once before: the question of homunculi. Homunculi are ‘little people’, and the term is traditionally used to criticise a certain kind of explanation, the kind that assumes some module in the brain can simply do everything a whole person could do. Such modules are effectively ‘little people in your head’, and they require just as much explanation as the brain did in the first place.

At some stage many years ago, Dennett decided that homunculi were alright after all, on certain conditions. The way he thought it could work was a hierarchy of ever stupider homunculi. Your eyes deliver a picture to the visual homunculus, who sees it for you; but we don’t stop there: he delivers it to a whole group of further colleagues – line-recognising homunculi, colour-recognising homunculi, and so on. Somewhere down the line we get to a homunculus whose only job is to say whether a spot is white or not-white. At that point the function is fully computable and our explanation can be cashed out in entirely non-personal, non-mysterious, mechanical terms. So far so good, though we might argue that Dennett’s ever stupider routines are not actually homunculi in the proper sense of being complete people; they are more like ‘black boxes’ – a stage of a process you can’t explain yet, but plan to analyse further.
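To make the decomposition concrete, here is a sketch of my own (the names, the threshold and the task are all invented for illustration, not taken from Dennett): each level delegates to stupider colleagues, until the bottom-level job is a trivially computable test.

```python
# A toy version of Dennett's "ever stupider homunculi": a seeing task
# decomposed until the lowest-level job is a trivially computable test.
# All names and numbers here are illustrative inventions.

def is_white(pixel):
    """The stupidest homunculus: says whether one spot is white or not."""
    return pixel >= 128  # 8-bit grey level; the threshold is arbitrary

def row_brightness(row):
    """A slightly smarter homunculus: counts white spots in one row."""
    return sum(1 for p in row if is_white(p))

def scene_is_bright(image):
    """The 'visual homunculus': delegates everything to dumber colleagues."""
    white = sum(row_brightness(row) for row in image)
    total = sum(len(row) for row in image)
    return white > total / 2

image = [[200, 30, 220],
         [250, 10, 180]]
print(scene_is_bright(image))  # → True
```

Nothing at any level ‘sees’ in the problematic whole-person sense; that is the cash value of the hierarchy.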

Be that as it may, he now regrets taking that line. The reason is that he no longer believes that neurons work like computers! This means that even at the bottom level the reduction to pure computation doesn’t quite work. The reason for this remarkable change of heart is that Terrence Deacon and others have convinced Dennett that the nature of neurons as entities with metabolism and a lifecycle is actually relevant to the way they work. The fact that neurons, at some level, have needs and aims of their own may ground a kind of ‘nano-intentionality’ that provides a basis for human cognition.

The implications are large: if this is right, then surely computation alone cannot give rise to consciousness! You need metabolism, and perhaps other things besides. That Dennett should be signing up to this is remarkable, though of course he has a get-out: we could still get computer consciousness by simulating an entire brain and reproducing every quirk of every neuron. For now that is well beyond our reach – and it may always be, though Dennett speaks with misplaced optimism about Blue Brain and other projects. In fact I don’t think the get-out works even at a theoretical level; simulations always leave out some aspect of the thing simulated, and if this biological view is sound, we can never be sure that we haven’t left out something important.

But even if we allow the get-out to stand, this is a startling change, and I’ve been surprised that no review of the book I’ve seen even acknowledges it. Does Dennett himself appreciate quite how large the implications are? It doesn’t really look as if he does. I would guess he thinks of the change as merely taking him a bit closer to, say, the evolution-based perspective of Ruth Millikan – not at all an uncongenial direction for him. I think, however, that he’s got more work to do. He says:

The brain is certainly not a digital computer running binary code, but it is still a kind of computer…

Later on, however, he rehashes the absurd, but surely digitally computational, view he put forward in Consciousness Explained:

You can simulate a virtual serial machine on a parallel architecture, and that’s what the brain does… and virtual parallel machines can be implemented on serial machines…

This looks pretty hopeless in itself, by the way. You can do those things if you don’t mind doing something really egregiously futile. You want to ‘simulate’ a serial machine on a parallel architecture? Just don’t use more than one of its processors. The fact is, parallel and serial computing do exactly the same job, run the same algorithms, and deliver the same results. Parallel processing by computers is just a practical engineering tactic, of no philosophical interest whatever. When people talk about the brain doing parallel processing they are talking about a completely different and much vaguer idea, and often confusing themselves in the process. Why on earth does Dennett think the brain is simulating serial processing on a parallel architecture – a textbook example of pointlessness?
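The point is easily demonstrated. In the sketch below (my own illustration; the choice of function is arbitrary), a parallel executor restricted to a single worker just is a serial machine, and adding workers changes nothing about the result:

```python
# Parallel and serial execution of the same pure function give the same
# answer; parallelism is an engineering tactic, not a different kind of
# computation.
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

data = list(range(10))

# The plain serial machine: one value after another.
serial = [square(n) for n in data]

# 'Simulating' a serial machine on a parallel one: use a single worker.
with ThreadPoolExecutor(max_workers=1) as pool:
    one_worker = list(pool.map(square, data))

# Genuinely parallel workers still deliver identical, order-preserved results.
with ThreadPoolExecutor(max_workers=4) as pool:
    four_workers = list(pool.map(square, data))

print(serial == one_worker == four_workers)  # → True
```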

It is true that the brain’s architecture is massively parallel… but many of the brain’s most spectacular activities are (roughly) serial, in the so-called stream of consciousness, in which ideas, or concepts or thoughts float by not quite in single file, but through a Von Neumann bottleneck of sorts…

It seems that Dennett supposes that only serial processing can deliver a serially coherent stream of consciousness, but that is just untrue. On display here too is Dennett’s bad habit of using ‘Von Neumann’ as a synonym for ‘serial’. As I understand it, the term “Von Neumann Architecture” actually relates to a long-gone rivalry between very early computer designs. Historically the Von Neumann design used the same storage for programs and data, while the more tidy-minded Harvard Architecture provided separate storage. That competition was resolved largely in Von Neumann’s favour long ago – Harvard-style designs survive mainly in embedded processors – and as a controversy it is as dead as a doornail. It simply has no relevance to the human brain: does the brain have a Von Neumann or a Harvard architecture? The only tenable answer is ‘no’.
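To be fair to the distinction itself, it is easy to state in code. What follows is a toy of my own devising, not anything from the book: the sole difference between the two designs is whether instructions and data share one store.

```python
# Toy machine: each instruction is ('INC', addr) or ('DEC', addr); the
# only design question is which store the address indexes into.

def run(fetch, store, program_length):
    """Execute program_length instructions against the given data store."""
    for pc in range(program_length):
        op, addr = fetch(pc)
        store[addr] += 1 if op == 'INC' else -1

# Harvard style: program store and data store are separate.
program = [('INC', 0), ('INC', 0), ('DEC', 1)]
data = [0, 5]
run(lambda pc: program[pc], data, len(program))
print(data)  # → [2, 4]

# Von Neumann style: one memory holds the program (cells 0-2) and the
# data (cells 3-4); self-modifying code becomes possible in principle.
memory = [('INC', 3), ('INC', 3), ('DEC', 4), 0, 5]
run(lambda pc: memory[pc], memory, 3)
print(memory[3:])  # → [2, 4] – same result, one shared store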

Anyway, whatever you may think of that, if Dennett now says the brain is not a digital computer, he just cannot go on saying it has a Von Neumann architecture or simulates a serial processor. Simple consistency requires him to drop all that now – and a good thing too. Dennett has to find a way of explaining the stream of consciousness that doesn’t rely on concepts from digital computing. If he’s up for it, we might get something really interesting – but retreat to the comfort zone must look awfully appealing at this stage. There is, of course, nothing shameful in changing your mind; if only he can work through the implications a bit more thoroughly, Dennett will deserve a lot of credit for doing so.

More another time.

26 Comments

  1. Luis Ferreira says:

    I would like to pick your sentence “simulations always leave out some aspect of the thing simulated, and if this biological view is sound, we can never be sure that we haven’t left out something important” and change it to “explanations always leave out some aspect of the thing explained, and if this biological view is sound, we can never be sure that we haven’t left out something important”. The underlying reasoning is the same. Will you never be fully satisfied with any explanation on this matter? I doubt you mean that. We are never sure; such is the way science works. The idea of a large-scale simulation seems to me in no way affected by any of this.

  2. Hunt says:

    I agree he’s probably invoking von Neumann incorrectly, probably just because the word “bottleneck” is descriptive in the sense that a massively parallel mechanism (the brain) is squeezed through an apparently single file process (the “stream of consciousness”).

    It’s not that he thinks the brain is parallel emulating a serial architecture, it’s that he thinks it’s parallel implementing a serial consciousness.

    Personally, I think this is still wrong. The only thing that actually appears to be somewhat serial is the active voice in our heads, and I’m not sure how significant that is.

  3. vicp says:

    “1+2=3” and “2+1=3”, taken as identities, are the same; but taken as linear operations they are not the same if implemented on a computational platform. Digital computers can be broken down into the homunculi of registers and bits, and every bit is actually a clock. As a matter of fact, in a simple digital computer there is a single crystal-oscillator circuit that generates a master clock for all of the registers – in effect an artificial generation of ‘time’. Abductively, the computer simulates behavior-based reality by artificially generating time, to the point that an AI or robot can trick us into believing it is an actual live behavioral consciousness. Better put: the manifest image of the stream of natural biological consciousness – our stream of consciousness – is time, the eventfulness which enables sensorimotor movement. Like color, we create time.

  4. Richard J.R.Miles says:

    After reading Dennett’s Consciousness Explained (away), I felt that I must have missed the point of it. Although it was full of some good ‘stuff’, I was left feeling empty and that I had wasted my time. I was given a boost by hearing the rebuff by Max Bennett and Peter Hacker to Dan Dennett and John Searle, and was further cheered by a meeting with Peter Hacker. I struggled through Darwin’s Dangerous Idea by Dennett, so am reluctant to bother with his new book. I will wait for your further review, Peter, which, like this one, I value.

  5. Brain Molecule Marketing says:

    Big whoop, a philosopher (a dead language) accepts medical facts. Frickin’ medieval ideas for Stone Age brains.

    In fairness, the head of NIDA, a brilliant brain scientist, said – paraphrasing – we know free will is a cultural myth, but let’s pretend it happens. Ugh.

  6. SelfAwarePatterns says:

    “computation alone cannot give rise to consciousness!”

    I think this depends on what we’re willing to consider “computation” and what we decide should be excluded from it. Every computational system has physical attributes. A computer can be built using hydraulics or mechanical switches. Personally, I think any system whose inputs, internal processes, and outputs are primarily signaling could be considered computational, a category that the brain certainly seems to fall into, but that’s admittedly a philosophical position.

    Certainly the biological agenda of each neuron has an effect on what it does, and given that it leads to things like neural plasticity, this is very much a feature the brain uses to its advantage. The neuron’s computation is uniquely influenced by its metabolism, genetics, and its precise shape and structure. This is different from the influence of a processor core’s support structures on its computation, an influence which is engineered to be as consistent as possible.

    But the question in my mind is whether the idiosyncrasies of individual neurons effectively cancel each other out. There are many distinct types of neurons, but across the millions and/or billions of each type, it seems like the variances could average out and the system effectively operate as though each neuron of that type were identical to the average.

    If not, then to model the system, each and every neuron’s properties down to the proteins, DNA, RNA, and lipids might have to be accounted for. This is possible in principle, but it skyrockets the difficulty. Only time will tell.

  7. Jochen says:

    I remember reading Dennett’s review of Deacon’s “Incomplete Nature” a while ago, where it also seemed to me that he was backpedalling on a couple of issues, at least a little. I remember wondering whether that would stick, and apparently, at least some of it did. Good on him!

  8. Callan S. says:

    What ‘aims’, as referred to, does a neuron have?

    If we are talking ‘aims’ like an amoeba has, that makes sense to me. But I would have thought that would affirm the homunculi idea rather than dispel it, as neurons are your base-level homunculi. Sounds like Deacon spun an enticing story that returns aims and intentionality to being actual things, just as long as they are really teeny tiny things. Deacon is used to using the term ‘aims’ and has never attempted to break it down further, because for his research it’s not needed. So he uses the term in an unquestioned way; but he does know more about neurons as creatures than Dennett does – so the credibility of the latter osmotically transfers to the former’s unquestioned terms, making them, the use of ‘aims’ and ‘intentionality’, sound credible. Ie, technobabble, which works by taking a sense of credibility and spreading it around to other things, rather than those other things earning credibility.

    On computers, isn’t this just rejecting the brain being a computer because, instead of being digital, it does analog computation?

    Analog computers are quite possible – despite how few of us know how a computer works in mechanical terms, we build digital computers for the sake of simplicity, for the sake of our understanding. Analog computers are possible, but understanding them is much harder – particularly if how much water the computer has drunk today will actually affect its output.

  9. john davey says:

    Peter


    This looks pretty hopeless in itself, by the way. You can do those things if you don’t mind doing something really egregiously futile. You want to ‘simulate’ a serial machine on a parallel architecture? Just don’t use more than one of its processors. The fact is, parallel and serial computing do exactly the same job, run the same algorithms, and deliver the same results.

    The thing about Dennett, Peter, is that like most computationalist philosophers he doesn’t really understand computers at all, and never did. I’ve never met someone who actually worked with computers who viewed them as anything other than bits of metal, like car engineers would a car. Computationalist philosophers only have a fantasy relationship with computers, and feel free to idolize them.

    And of course, he has no idea how the brain works. Neither does anybody else, so in that way his ignorance isn’t unique. But fundamentally, his career has been carved out of stating that an organ that he can’t possibly know runs in the same way as a technology he didn’t really understand. Nice work if you can get it.

    JBD

  10. Jayarava says:

    I must say it is refreshing to know that there is at least one person in the world who can be persuaded to change his opinion based on facts.

  11. dmf says:

    “It seems that Dennett supposes that only serial processing can deliver a serially coherent stream of consciousness, but that is just untrue” – there is a wider unwillingness to attend to the actual particulars of engineering/anatomy, which leads people to say nonsense like “These facts can tempt us to say that plants remember.”
    http://philosophyofbrains.com/2017/02/22/remembering-plants.aspx

  12. vicp says:

    John, The operative words are in Peter’s first paragraph “his new book”. This has little to do with science and philosophy, but more to do with the publications industry and the academia enterprise which Dennett also represents. To paraphrase President Eisenhower, beware of the educational industrial complex.

  13. Paul Torek says:

    @Callan (#8),
    What is a formal definition of “computation” that will allow analog computation under its rubric? It seems that the standard definition always boils down to Turing machines and their equivalents (see Wikipedia on “Computation”, “Algorithm”, and “Effective method” for example).

  14. Christophe Menant says:

    Taken at the animal or human level, the difference with computing is that the former has intrinsic finality/teleology and the latter has only a derived one.
    Life is a living entity for itself; computing is not a computing entity for itself.
    This brings the mind/computing difference down to the life/computing level, which is easier to address and where we can consider a bio-intentionality in simple terms of an internal constraint to be satisfied.
    Said differently, the simulation of life done with computation (parallel or serial) brings in only derived teleology. This forgets the key intrinsic teleology of life. Leaving out something important indeed…

  15. David Duffy says:

    Dear Paul Torek

    see eg https://arxiv.org/pdf/1702.02980.pdf

    Vergis et al. (1986) reformulate the Extended Church-Turing thesis in terms of analogue computers as: “any finite analogue computer can be simulated efficiently by a digital computer, in the sense that the time required by the digital computer to simulate the analogue computer is bounded by a polynomial function of the resources used by the analogue computer”. That is, finite analogue computers do not make infeasible problems feasible; thus, finite analogue computers cannot tractably solve NP-complete problems.

    As I understand it, analogue hypercomputing requires the availability of physical infinities, e.g. infinitely divisible space; and Newtonian four-body problems give non-collision singularities.

    John Davey opined that “I’ve never met someone who actually worked with computers who viewed them as anything other than bits of metal…”. Well, starting from Turing onwards to Kurzweil…

    As to Dennett’s uses of the terms serial and parallel, they are loose, nevertheless we get the idea when we consider estimates of the size and characteristics of human working memory (masking, interference, priming) attached to the talking bit.

  16. Scott Bakker says:

    Dennett’s actually been talking about the nano intentionality stuff for a while: the surprise you feel now, Peter, was the *dismay* I felt reading his Edge.org piece from a couple of years back. I agree that he’s over-reliant on computational metaphors–he’s a relic of the GOFAI wars, after all–but you need to recall that ‘computation’ for him is an interpretative artifact. It always has been. Likewise, the question of what ‘biological computation’ consists in has always been an empirically open one. My guess is that he would shrug his shoulders and insist you’ve been misreading him all along, attributing commitments he does not possess. (He is a philosopher after all!) But I think you’re right to point out the problems this poses any *functionalism,* his own included, whether anchored in interpretativism or not.

    I’m not sure I follow the parallel/serial criticisms. I think you might be getting him wrong, here, taking his metaphors a little too literally, but then I have yet to read the book. The brain is awash in processing bottlenecks, thus the need to select information. Consciousness somehow involves selection from nonconscious alternatives, stabilization and global broadcast, and as it so happens, deliberative reasoning as experienced is largely serial. You can use computational metaphors to describe this, or not. The degree to which those metaphors impede our ability to solve the problems they find themselves applied to is a completely different point, of course. My fear is that your criticism might just boil down to fashion here.

    I plan on reading and reviewing the book myself for TPB, so I’ll have a better idea then.

  17. Scott Bakker says:

    Jochen – “I remember reading Dennett’s review of Deacon’s “Incomplete Nature” a while ago, where it also seemed to me that he was backpedalling on a couple of issues, at least a little. I remember wondering whether that would stick, and apparently, at least some of it did. Good on him!”

    I was deeply puzzled by that review as well, particularly since Deacon fails at any point to engage interpretativist (let alone Dennettian) alternatives to his analyses in Incomplete Nature. I loved the book for many reasons, but it quite simply does not touch its primary theoretical antagonist. Dennett fails to mention this–something which mystified me at the time. But then I came across Fodor’s (atrocious) review of Incomplete Nature and realized that Dennett wasn’t so much defending Deacon as attacking Fodor. Check it out.

    Interpretativist positions really don’t provide any wriggle room when it comes to these debates. Anything Dennett says regarding intentionality that makes you think he’s a realist needs to be read through the lens of “Real Patterns.” Nano-intentionality is ‘real enough’ intentionality, which is to say, not intentional in any traditional metaphysical sense.

  18. Peter says:

    Scott,
    It seems to me Dennett has been hopelessly confused about parallel processing and the Von Neumann architecture ever since Consciousness Explained; but now that he’s saying the brain is not a digital computer, this terminology is even more out of place.

    …deliberative reasoning as experienced is largely serial…

    You of all people are suggesting that our conscious experience simply tells us the truth about our mental processes?!

  19. Jorge says:

    “he no longer believes that neurons work like computers”

    On the one hand – obviously. We know that in order to organize itself, the brain relies on competition among neuronal populations and other mechanisms that have no simple analogue in a traditional Turing-complete computer. However, the issue isn’t and has never been that… the issue is (and you do address this, Peter): can a computer behave like neurons? And the answer to that is yes. I do not think that a computer needs to model metabolism and every single aspect (maybe some aspects of myelin metabolism are important due to their effect on conductivity, but that has hardly been definitively established).

    Why?

    Speed.

    Our conscious experiences occur in a very definite time frame – we perceive ‘the present moment’ as a montage of the last few seconds. That requires very fast information processing in order to distinguish this ‘now’ from the ‘next now’. The only process occurring quickly enough inside our brain to account for this rapid distinction (which, I might add, includes intentional distinctions) is neurotransmission: action potentials and transmitter vesicle release. Although I do not think projects like Blue Brain are particularly likely to succeed, I do think they are correct in believing that if you simulate all the synaptic strengths and map connections correctly, you can at least extract useful predictive information (this has already been done to a limited extent in visual subcircuits in Drosophila and mouse). I don’t know if we will ever have machines capable of simulating all neuronal connections and signals at once, though, and that is what would be required to simulate consciousness inside a machine.

  20. alma says:

    If we can simulate biological processes IN THEORY, then we can’t jump to this conclusion too fast:

    “computation alone cannot give rise to consciousness”

    You have to prove that biological processes have some magic part that absolutely never can be simulated on a computer.
    It is not enough to say that it practically can’t be simulated (e.g. at a certain technical level, or with a certain specific technical solution).

    My opinion: for me it simply does not seem plausible that biology has that magic part – that brains do something that no artificially made thing can mimic. Hard for me to imagine.

  21. Scott Bakker says:

    Peter – “You of all people are suggesting that our conscious experience simply tells us the truth about our mental processes?!”

    Hey! I wasn’t talking about mental processes! Just because metacognitive reports are prone to deceive in philosophical contexts, doesn’t mean they fail to provide useful information.

    I’m the *one* guy with a theory on what can and cannot be trusted: I have control of my borders… 😉

  22. VicP says:

    Certainly the “serialness” of the brain really reflects its “directiveness”. At the subconscious level our two eyes do not see the same object twice, or cross-eyed, but create one object. Language itself, when used to direct others, has to have a direct focus, so the Von Neumann, serial etc. question is fairly moot. In the workplace we don’t set our clocks to different times – or what would be the purpose of using them?

    I’m inclined to say Incomplete Nature is Deacon’s callout to the incomplete explanation of natural processes like metabolism. All science yields right now is rather simple mechanical descriptions that do not take into account all of the forces at the molecular level. Even the splitting and recombination of DNA can be described and mechanically-computationally manipulated by biologists without taking into account all of the actual physical forces at work at the molecular level. Even the words physicalism, computationalism and materialism still carry very mechanical baggage because they do not describe the forces at work; hence classical dualism gets invoked.

    Dennett, like a lot of the respected scholarly emperors who dominate the landscape in this debate and the book-publication genre, does not get called out adequately for how he is constantly dressing this debate. Really, Mr. Dennett, is it such news that you reached a conclusion that many of us reached years ago? You finally feel the draft?

  25. Da FUQ says:

    I would love to have a concrete reason why the brain is not computing – and not just ‘neurons have nano-intentionality’ voodoo-type arguments.

  26. Jochen says:

    Da FUQ:

    I would love to have a concrete reason why the brain is not computing – and not just ‘neurons have nano-intentionality’ voodoo-type arguments.

    Well, I’ve tried to lay out the reasons for that in the comments below this post, but the most basic reason is simply that computation isn’t an objective aspect of the world; rather, it’s a feature of how we describe it, and hence, it can’t serve as a foundation of theories of the mind, since it is itself ultimately mind-dependent. A system computes something if and only if it is conceived of as computing something.

    The basic relationship between what a computer computes and the system that implements this computation is the same as the relationship between a given system and a model of that system—for definiteness, say the solar system and an orrery. This relationship involves taking a system as standing for something else; hence, it’s a relationship of interpretation, of reference. Effectively, it’s the relationship between a symbol and its meaning—and that relationship just isn’t there without an interpreting mind. Just as we have to conceive of the little beads in the orrery as standing for the planets, we have to conceive of the computation’s states—whether represented by text output on a screen, graphics, or printouts—as standing for the logical states of the computation it performs. Otherwise, all we have are a bunch of pixels emitting differently-colored light.

    As an analogy, computation is a bit like monetary value: as long as everybody interprets a certain piece of paper as having a certain buying power, you can actually buy things with it; but if that interpretation is absent, it becomes simply worthless. Monetary value is not an inherent property of special pieces of paper. Likewise, computation—and related terms, like information, codes, and so on—is not an inherent property of physical systems; but physical systems can be interpreted as performing computations, just as they can be interpreted as having monetary value.
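Jochen’s point can be put concretely in a few lines of code (my illustration, not his; the byte values below are just one arbitrary example): the very same physical bit pattern reads as quite different ‘contents’ depending entirely on the interpretation we bring to it.

```python
# The same four bytes under two different interpretations. Nothing about
# the bytes themselves picks out one reading as 'the' content; the
# interpretation does all the work.
import struct

raw = b'\x42\x28\x00\x00'  # four bytes; an arbitrary example

as_int = struct.unpack('>i', raw)[0]    # read as a big-endian 32-bit integer
as_float = struct.unpack('>f', raw)[0]  # read as a big-endian 32-bit float

print(as_int)    # → 1109917696
print(as_float)  # → 42.0
```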
