Consciousness as Hardware?

Picture: hardware consciousness. Jan-Markus Schwindt put up a complex and curious argument against physicalism in a recent JCS: one of those discussions whose course and conclusion seem wildly wrong-headed, but which provoke interesting reflections along the way.

His first point, if I followed him correctly, was that physics basically comes down to maths. In the early days, scientists thought of themselves as doing sums about some basic stuff which was out there in the world; as the sums and the theories behind them got more sophisticated and comprehensive, there was less and less need for the stuff-in-itself; the equations were really providing the whole show. In the end, the inescapable conclusion is that the world described by physics is a mathematical structure.

Schwindt quotes several scientists along these lines, such as Eddington saying ‘We have chased the solid substance from the continuous liquid to the atom, from the atom to the electron, and there we have lost it’. There’s no doubt, of course, that maths is indeed the language, or perhaps the soul, of physics. Hooke claimed that Newton had stolen the idea of the inverse square law of gravity from him; but Christopher Wren gently remarked that he too had thought of the idea independently; so had many others he could name; the point was that only Newton could do the maths which turned it into a verifiable theory. These days, of course, it’s notoriously hard to describe the ‘stuff’ that modern quantum physics is about in anything other than mathematical terms, and one respectable point of view is that there’s no point in trying.

But I don’t think physicalists are necessarily committed to the view that the world is mathematical to the core. In fact mathematicians have a noticeable tendency to be Platonists, believing that numbers and mathematical entities have a real existence in a world of their own. This is a form of dualism, and hence in opposition to physicalism, but very different to Schwindt’s point of view – instead of merging the physical into the mathematical, it keeps the precious formulae safe and unsullied in an eternal world of their own.

Moreover, at the risk of sounding like a bad philosopher, it all depends what you mean by mathematics. Numbers and formulae can be used in more than one way. n=1 is a testable statement in arithmetic; in many programming languages it’s true by fiat – it makes the variable n equal one. In Searlian terms, the direction of fit is different, or to put it less technically, in arithmetic it’s a statement, in programming an instruction. Now the kind of mathematical laws which hypothetically sustain the world would have to have the same direction of fit as the programming instruction: that is, it would be up to the world to resemble the law rather than up to the law to resemble the world. In fact they would require some more advanced metaphysical power than any mere high-level programming language; the sort of force which Adam’s words are supposed to have had in the garden of Eden, where his naming of a beast determined its very essence. If we could explain how apparently arbitrary laws of physics come to have this power over reality, it would be an unprecedented advance in metaphysics and the philosophy of science.
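The contrast can be made concrete in a few lines of Python (my illustration, not Schwindt’s): the comparison `==` has the statement’s direction of fit, while the assignment `=` has the instruction’s.

```python
# Illustration of direction of fit: the same marks "n = 1" read as a claim
# in arithmetic but as a command in programming.

n = 5

# Statement reading (word-to-world fit): a claim about n that the world
# may or may not satisfy.
claim_holds = (n == 1)   # False here: the claim fails to fit the facts

# Instruction reading (world-to-word fit): the world (the variable) is
# changed to fit the words.
n = 1
claim_holds = (n == 1)   # True now: made true by fiat

print(claim_holds)
```

The hypothetical world-sustaining laws would need the second reading, but backed by a metaphysical force no programming language possesses.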

So when Schwindt tells us that the world is a mathematical structure, he may be right if he means mathematical in the sense of embodying a set of mysteriously potent ontological commands: but that’s just as complex and difficult as a world where the laws of physics are mere descriptions and the driving force of ontology is hidden away in ineffable stuff-in-itself. If, on the other hand, he means the world is reducible to a mathematical description, I think he’s mistaken – or at the least, he needs another argument to show how it could be.

For a second key point, Schwindt draws on an argument against functionalism proposed by Zuboff. This says that from a functionalist point of view, consciousness just consists of the firing of a certain pattern of neurons. But look, we could take out one neuron, put it in a dish somewhere, and then hand-feed the outputs and inputs between it and the rest of the brain in such a way that it functioned normally. Surely the relevant state of consciousness would be unaffected? If you’re happy with that, then you must accept that we could do the same with all the neurons in a brain. But if that’s so, couldn’t we put together a set of neurons which actually belong to different brains, but which together instantiate the right pattern for a particular experience, even though they don’t belong to any physically coherent individual? Surely this is absurd – so functionalism must be false.

Schwindt accepts this argument but gives it a different spin: for there to be patterns, he suggests, including patterns of neuron firing, there has to be an interpreter: someone who sees them as patterns. Since the world as described by physics is a mathematical structure, and since no mere mathematical structure can be an interpreter, consciousness requires something over and above the normal physical account. In fact, he proposes that consciousness is non-physical, and in effect the hardware which (if I’ve got this right) generates experience out of the mathematical structure of the physical world. This reasoning seems to lead naturally towards dualism, but Schwindt wants to stop just short of that: there seem to be two alternative reductions of the world on offer, he says, reasonably enough: the physicalist one and another which reduces everything to direct experience; but ultimately we can hope that there is some as yet unknown level on which monism is reasserted.

Alas, I think the Zuboffian argument (I haven’t read the original, so I rely on Schwindt’s account) fails because credible forms of functionalism don’t just require a pattern of neuron firing to exist: they require a relevant pattern of causal relations between neuron firing. As soon as hand-simulation enters the picture, all bets are off.

I think one reason Schwindt takes such an unlikely route is that he goes somewhat overboard in asserting the primacy of direct experience: he’s quite right that in one sense it’s all we’ve got; but like the idealists, he is in danger of mistaking an epistemological for an ontological primacy. It won’t come as any surprise, in any case, that I don’t really see how consciousness can be likened to hardware. Consciousness is content; arguably that’s all it is: whereas hardware is what gets left behind when you take the content out of a brain (or a computer). Isn’t it? Schwindt goes further and within consciousness has qualia playing a role analogous to a monitor, but I found that idea more confusing than illuminating.

Could consciousness be hardware? I can’t reject the idea: but that’s only because I don’t think I can properly grasp it to begin with.

3 thoughts on “Consciousness as Hardware?”

  1. > they require a relevant pattern of causal relations
    > between neuron firing

    Why is that, do you think? What do causal relations add that make it magically produce consciousness?

  2. Sorry to be so slow responding. It’s a good question which is not easy to answer fully in a brief comment. I actually don’t think the Zuboffian argument works even for non-conscious entities. Take an ordinary computer: running a particular program involves going through a series of states. Suppose instead of taking these states in chronological sequence, we have a row of similar computers, each in one of the required states. Does that amount to a run of the program; can we see the separate machines as equivalent to a single one actually running the program? I’d say not, and I’d say that’s because there has to be causation of the right kind for these states to be linked in the required way.

    On the deeper issue of causality and consciousness, perhaps I can lazily refer to this old piece setting out some views of my own.

    “Suppose instead of taking these states in chronological sequence, we have a row of similar computers, each in one of the required states. Does that amount to a run of the program; can we see the separate machines as equivalent to a single one actually running the program? I’d say not, and I’d say that’s because there has to be causation of the right kind for these states to be linked in the required way.”

    As I have argued elsewhere, what if the program is run on the same computer, but the successive states, instead of being causally computed, are just transferred from a hard disk onto all the internal nodes successively? (We assume that the program has been run once before, so that the states could be recorded onto the disk in the first place.) A computer scientist would see no difference between normal program execution and simply replaying the states (since the logic levels at all nodes are identical between the two cases), and doesn’t really care what goes on among the various elements connecting the internal nodes. But apparently philosophers do care. The question is, why?
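    A toy sketch of the two cases (with an invented `step` function standing in for the program’s logic): the replayed trace matches the live run state-for-state, though nothing in the replay computes anything.

    ```python
    # Sketch: live execution vs. replaying a recorded trace of the same states.
    # The step function is an arbitrary stand-in for a deterministic program.

    def step(state):
        """One causally-computed step: each new state depends on the last."""
        return state * 2 + 1

    def run(initial, n_steps):
        """Normal execution: every state is produced from its predecessor."""
        states = [initial]
        for _ in range(n_steps):
            states.append(step(states[-1]))
        return states

    # First run: record the trace "to disk".
    trace = run(1, 5)

    # Replay: the very same states in the same order, but each one is simply
    # loaded rather than computed from the one before.
    replayed = [s for s in trace]

    print(replayed == trace)  # state-for-state indistinguishable
    ```

    State-for-state the two are identical; the only difference is whether each state is caused by its predecessor, which is precisely what the causal-relations reply to Zuboff turns on.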

    My own take on this is that there is really no difference between active, one-hundred-percent deterministic execution and replaying such a program trace. The operative word here is “deterministic”. Perhaps quantum effects are needed to create consciousness, so that replaying a trace (basically, after the wavefunction has collapsed) automatically rules out consciousness.

    Perhaps quantum computers could produce consciousness, and in that case the above thought experiments wouldn’t apply. It would be the difference between watching a play with real actors and watching a movie.
