A Third Wave?

An article in the Chronicle of Higher Education (via the always-excellent Mind Hacks) argues cogently that as a new torrent of data about the brain looms, we need to ensure that it is balanced by a corresponding development in theory. That must surely be right: but I wonder whether the torrent of new information is going to bring about another change in paradigm, as the advent of computers in the twentieth century surely did?

We have mentioned before the two giant projects which aim to map and even simulate the neural structure of the brain, one in America, one in Europe. Other projects elsewhere and steady advances in technology seem to indicate that the progress of empirical neuroscience, already impressive, is likely to accelerate massively in coming years.

The paper points out that at present, in spite of enormous advances, we still know relatively little about the varied types of neurons and what they do; and much of what we think we do know is vague, tentative, and possibly misleading. Soon, however, ‘there will be exabytes (billions of gigabytes) of data, detailing what vast numbers of neurons do, in real time’.

The authors rightly suggest that data alone is no good without theoretical insights: they fear that at present there may be structural issues which lead to pure experimental work being funded while theory, in spite of being cheaper, is neglected or has to tag along as best it can. The study of the mind is an exceptionally interdisciplinary business, and they justifiably say research needs to welcome ‘mathematicians, engineers, computer scientists, cognitive psychologists, and anthropologists into the fold’. No philosophers in the list, I notice, although the authors quote Ned Block approvingly. (Certainly no novelists, although if we’re studying consciousness the greatest corpus of exploratory material is arguably in literature rather than science. Perhaps that’s asking a bit too much at this stage: grants are not going to be given to allow neurologists to read Henry as well as William James, amusing though that might be.)

I wonder if we’re about to see a big sea change: a Third Wave? There’s no doubt in my mind that the arrival of practical computers in the twentieth century had a vast intellectual impact. Until then philosophy of mind had not paid all that much attention to consciousness. Free Will, of course, had been debated for centuries, and personal identity was also a regular topic; but consciousness per se and qualia in particular did not seem to be that important until – I think – the seventies or eighties, when a wide range of people began to have actual experience of computers. Locke was perhaps the first person to set out a version of the inverted spectrum argument, in which the blue in your mind is the same as the yellow in mine, and vice versa; but far from its being a key issue he mentions it only to dismiss it: we all call the same real-world colours by the same names, so it’s a matter of no importance. Qualia? Of no philosophical interest.

I think the thing is that until computers actually appeared it was easy to assume, like Leibniz, that they could only be like mills: turning wheels, moving parts, nothing there that resembles a mind. When people could actually see a computer producing its results, they realised that there was the same kind of incomprehensible spookiness about it as there was in the case of human thought; maybe not exactly the same mystery, but a pseudo-magic quality far above the readily comprehensible functioning of a mill. As a result, human thought no longer looked so unique, and we needed something to stand in as the criterion which separated machines from people. Our concept of consciousness got reshaped and promoted to play that role, and a Second Wave of thought about the mind rolled in, making qualia and anything else that seemed uniquely human of special concern.

That wave included another change, though, more subtle but very important. In the past, the answer to questions about the mind had clearly been a matter of philosophy, or psychology; at any rate an academic issue. We were looking for a heavy tome containing a theory. Once computers came along, it turned out that we might be looking for a robot instead. The issues became a matter of technology, not pure theory. The unexpected result was that new issues revealed themselves and came to the fore. The descriptive theories of the past were all very well, but now we realised that if we wanted to make a conscious machine, they didn’t offer much help. A good example appears in Dan Dennett’s paper on cognitive wheels, which sets out a version of the Frame Problem. Dennett describes the problem, and then points out that although it is a problem for robots, it’s just as mysterious for human cognition; it is actually a deep problem about the human mind which had never been discussed, simply because until we tried to build robots we never noticed it. Most philosophical theories still have this quality, I’m afraid, even Dennett’s: OK, so I’m here with my soldering iron or my keyboard: how do I make a machine that adopts the intentional stance? No clue.

For the last sixty years or so I should say that the project of artificial intelligence has set the agenda and provided new illumination in this kind of way. Now it may be that neurology is at last about to inherit the throne. If so, what new transformations can we expect? First I would think that the old-fashioned computational robots are likely to fall back further and that simulations, probably using neural network approaches, are likely to come to the fore. Grand Union theories, which provide coherent accounts from genetics through neurology to behaviour, are going to become more common, and build a bridgehead for evolutionary theories to make more of an impact on ideas about consciousness. However, a lot of things we thought we knew about neurons are going to turn out to be wrong, and there will be new things we never spotted that will change the way we think about the brain. I would place a small bet that the idea of the connectome will look dusty and irrelevant within a few years, and that it will turn out that neurons don’t work quite the way we thought.

Above all though, the tide will surely turn for consciousness. Since about 1950 the game has been about showing what, if anything, was different about human beings; why they were not just machines (or why they were), and what was unique about human consciousness. In the coming decade I think it will all be about how consciousness is really the same as many other mental processes. Consciousness may begin to seem less important, or at any rate it may increasingly be seen as on a continuum with the brain activity of other animals; really just a special case of the perfectly normal faculty of… Well, I don’t actually know what, but I look forward to finding out.

8 thoughts on “A Third Wave?”

  1. Ah, I thought it sounded like Gary Marcus (will have to get myself the book he’s promoting…).
    A few random thoughts:
    1) In a comment to the original article, CAYdenberg says:

    Figuring out how [DNA-RNA] transcription works required a lot of study of a few, really simple systems. Too much data would provide a lot of opportunity for cheap, risk-free papers that advanced the field exactly nowhere. I cannot fathom how anyone thinks that the human brain [is] a good place to begin to understand how thinking works.

    And I think he is right: as the main article proposes, a new wave of ‘cool’ research fueled with petabytes of data and no credible theory will keep lots of neuroscientists employed, but will yield meagre results. We may see a change in paradigm towards simulations, network analysis and the like, but it will all eventually evaporate unless someone finds a convincing solution to the “computational gap” you mentioned in an earlier post (which would count as the “credible theory” in my book).

    2) The neat part of finding a credible (algorithmic) theory would be that it would make it possible to instantiate the same algorithms in a virtually infinite number of different ways, without having to rely on neural network simulations (which are likely to be very computationally inefficient). That’s also the downside: AI will jump forward, and I fear it will become possible to create a genuine general-purpose AI. If that’s true, we should be very afraid.

    3) On your 6th paragraph and conclusion: I would very much like it if the unjustified and presumptuous anthropocentrism of philosophy of mind (and psychology) were killed off by this third wave (with or without a decent accompanying theory); at least something good would come out of it. I find it hard to understand how some scholars just assume that consciousness (and cognition) is specifically human, or at best only marginally relevant for great apes.

    4) On the conclusion: “Consciousness may begin to seem less important, or at any rate it may increasingly be seen as on a continuum with the brain activity of other animals; really just a special case of the perfectly normal faculty of…” well, this won’t happen if we just rely on fishing for correlations in big data without a solid theory, right? So perhaps material for a fourth wave?

  2. “When people could actually see a computer producing its results, they realised that there was the same kind of incomprehensible spookiness about it as there was in the case of human thought; maybe not exactly the same mystery, but a pseudo-magic quality far above the readily comprehensible functioning of a mill.”

    I’m not sure I understand what’s spooky about a computer. After all, a system of pulleys and gears could also emulate a Turing Machine.

    That said, if you’re suggesting that the black-box nature of a computer is what led to the varied confusions that have bolstered computationalism, I’d definitely agree.

  3. I think it’s an interesting historical framing of the situation – even more so in that what would perhaps make us relate to computers is an absence of information (a mystery), rather than a presence of information.

    OK, so I’m here with my soldering iron or my keyboard: how do I make a machine that adopts the intentional stance? No clue.

    What would be the sign it has adopted the intentional stance?

  4. @Peter: Regarding your mention of novelists, here’s an interesting perspective on Lovecraft from the conservative side of things:

    http://www.claremont.org/article/master-of-modern-horror/#.VGzC6oeJmit

    Ties in well with R Scott Bakker’s (Scott around here) older post ‘Neuroscience & Socio-Cognitive Pollution.’

    “The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.”

  5. Peter concludes:

    “Consciousness may begin to seem less important, or at any rate it may increasingly be seen as on a continuum with the brain activity of other animals; really just a special case of the perfectly normal faculty of… Well, I don’t actually know what, but I look forward to finding out.”

    Actually, we have a good idea of what sorts of informational processes and capacities are associated with conscious experience; see, for instance, Dehaene and Changeux, “Experimental and Theoretical Approaches to Conscious Processing,” Neuron 70, 2011. It’s just that there is no received theory of why the existence of consciousness should be entailed by such capacities. Whatever the final theory, the subjective importance of consciousness will of course not diminish, since it’s what constitutes us as subjects.

    http://www.unicog.org/publications/DehaeneChangeux_ReviewConsciousness_Neuron2011.pdf

  6. Hi Tom, when I read the paper you refer to, one thing that struck me was that the model relies heavily on “spontaneous activity”, and on how this term is itself defined and handled.

    What do you think about it? Maybe noise levels (non-intentional states) reach a threshold, triggering the (spontaneous?) firing, and then the activity is filtered by subsequent mechanisms – a go/no-go kind of filter.
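    To make the idea concrete, here is a minimal toy sketch of that mechanism – a leaky unit integrating pure background noise, a threshold crossing standing in for the “spontaneous” firing, and a downstream go/no-go gate filtering the result. This is only an illustrative assumption on my part (the names and parameter values are made up, and it is not the Dehaene–Changeux model itself):

        # Toy sketch: noise-driven "spontaneous" firing plus a go/no-go filter.
        # All parameter values are illustrative assumptions, not fitted values.
        import numpy as np

        rng = np.random.default_rng(seed=42)

        dt = 1.0              # time step (ms)
        tau = 20.0            # membrane leak time constant (ms)
        threshold = 1.0       # firing threshold (arbitrary units)
        noise_sd = 0.15       # std. dev. of the background noise input
        gate_open_prob = 0.3  # chance the go/no-go gate passes a spike on

        v = 0.0
        spikes, gated = [], []

        for t in range(5000):
            # Leaky integration of pure noise: no "intentional" input at all.
            v += (-v / tau) * dt + rng.normal(0.0, noise_sd)
            if v >= threshold:
                spikes.append(t)   # threshold crossing: the "spontaneous" event
                v = 0.0            # reset after firing
                if rng.random() < gate_open_prob:
                    gated.append(t)  # downstream filter lets this one through

        print(f"noise-triggered spikes: {len(spikes)}, passed the gate: {len(gated)}")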

    Nothing is spontaneous… except for quantum void fluctuations, and even that is a philosophical problem. Any respectable theory should clearly identify the cause-effect chain for any possible neural activity configuration, or not?

    Spontaneous!? These neuroscientists are so amusing.

  7. Vincente,

    Any respectable theory should clearly identify the cause-effect chain for any possible neural activity configuration, or not?

    I think it’s clear there will be a period in history where AI cannot be made for lack of knowledge, and a period in history after that where (assuming it can be done) it can.

    Knowing that chain would let you reproduce it, clearly. In other words, make AI.

    We’re at the historical period where we can use the benefits of that chain not being known yet to prepare for a time when perhaps that chain is as mapped out as the human genome is.

    To wait for the chain to be absolutely known before engaging the subject is to squander that time before.

    It seems the same as climate change denial to assume that chain mapping won’t eventually occur.
