More not the same

Picture: brain, not computer.

Chris Chatham has gamely picked up on my remarks about his interesting earlier piece, which set out a number of significant differences between brains and computers. We are, I think, somewhere between 90 and 100 percent in agreement about most of this, but Chris has come back with a defence of some of the things I questioned. So it seems only right that I should respond in turn and acknowledge some overstatement on my part.

Chris points out that “processing speed” is a well-established psychometric construct. I must concede that this is true: the term covers various sound and useful measures of speed of recognition and the like, so I was too sweeping when I said that ‘processing speed’ had no useful application to the brain. That said, this kind of measurement of humans is really a measurement of performance, rather than a direct measure of the speed of internal workings, as it would be when applied to computers. Chris also mentions some other speed constraints in the brain – things like the rate at which neurons fire, or the speed of transmission along nerves – which are closer to what ‘processing speed’ means in computers (though not that close, as he was saying in the first place). In passing, I wonder how much connection there is between the two kinds of speed constraint in humans. The speed of performance of a PC is strongly affected by its clock speed; but do variations in rates of neuron firing have a big influence on human performance? It seems intuitively plausible that in older people neurons might take slightly longer to get ready to fire again, and that this might make some contribution to longer reaction times and the like (I don’t know of any research on the subject); but otherwise I suspect differences in performance speed between individual humans arise from other factors.
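
Purely by way of illustration – this is a toy model of my own, with invented numbers, not a claim about real neurophysiology – the speculation can be sketched in code: if a reaction were a chain of sequential firings, a small extra per-neuron recovery time would add up to a measurably slower response.

```python
import random

def reaction_time_ms(chain_length=100, base_delay_ms=1.0, extra_recovery_ms=0.0):
    """Toy model: a reaction is a chain of sequential neuron firings.

    Each link adds a transmission delay plus any extra time the neuron
    needs before it is ready to fire again. All numbers are invented.
    """
    total = 0.0
    for _ in range(chain_length):
        total += base_delay_ms + extra_recovery_ms + random.uniform(0.0, 0.5)
    return total

random.seed(0)
trials = 1000
young = sum(reaction_time_ms() for _ in range(trials)) / trials
older = sum(reaction_time_ms(extra_recovery_ms=0.2) for _ in range(trials)) / trials
print(f"mean reaction time: {young:.0f} ms vs {older:.0f} ms with slower recovery")
```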

In a nutshell, Chris is right when he says that Peter is probably uncomfortable with equating “sparse distributed representation” with “analog”. To me it looks like quite a different thing from what used to go on in analog computers, where a particular value might be represented by a particular level of current. The whole topic of mental representation is a scary one for me in any case: if Chris wanted a twelfth difference to add to his list, I think he could offer ‘Computers don’t really do representation’. That may seem an odd thing to say about machines that are all about manipulating symbols; but nothing in a computer represents anything except in so far as a human or humans have deemed or designed it to represent something. The human brain, by contrast, somehow succeeds in representing things to itself, and then to other humans, and manages to confer representational qualities on noises, marks on paper – and computers. This surely remains one of the bogglingly strange abilities of the mind, and it’s unlikely the computer analogy is going to help much in understanding it.
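
To make the contrast concrete, here is a small sketch (my own illustration; the sizes are arbitrary) of the two styles: an analog representation carries a value as a single signal level, whereas a sparse distributed representation spreads an item across a few active units out of many, with overlap between patterns serving as a similarity measure.

```python
import numpy as np

rng = np.random.default_rng(42)

# 'Analog computer' style: one quantity carried by one signal level.
temperature_as_voltage = 0.73        # e.g. 0.73 V stands in for 73 degrees

# Sparse distributed style: the same item is a pattern of a few active
# units out of many, and no single unit 'is' the value.
n_units, n_active = 2048, 40         # arbitrary sizes, for illustration only
sdr = np.zeros(n_units, dtype=bool)
sdr[rng.choice(n_units, size=n_active, replace=False)] = True

# A similar item shares most active units; overlap measures similarity.
sdr2 = sdr.copy()
sdr2[rng.choice(np.flatnonzero(sdr), size=5, replace=False)] = False
sdr2[rng.choice(np.flatnonzero(~sdr), size=5, replace=False)] = True
print("overlap:", np.sum(sdr & sdr2), "of", n_active, "active bits")
```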

I do accept that in some intuitive sense the brain can be described as ‘massively parallel’ – people who so describe it are trying to put their finger on a characteristic of the brain which is real and important. But the phrase is surely drawn from massively parallel computing, which really isn’t very brain-like. ‘Parallel’ is a good way of describing how different bits of a process can be shepherded in an orderly manner through different CPUs or computers and then reintegrated; in the brain, it looks more as if the processes are going off in all directions, constantly interfering with each other, and reaching no definite conclusion. How all this results in an ordered and integrated progression of conscious experience is of course another notorious boggler, which we may solve if we can get a better grasp of how this ‘massively parallel’ – I’d rather say ‘luxuriantly non-linear’ – way of operating works. My fear is that use of the phrase ‘massively parallel’ risks deluding us into thinking we’ve got the gist already.
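
For contrast, here is a minimal sketch (my own illustration, not anything from Chris’s piece) of the orderly fork-and-join pattern that ‘massively parallel’ means in computing: the work is split into tidy pieces, farmed out, and reintegrated, with none of the cross-talk the brain seems to go in for.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Each worker handles one tidy slice of the problem."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Split the work, farm the pieces out to separate CPUs, then
    # reintegrate the results: the 'shepherded' pattern described above.
    size = len(data) // n_workers + 1
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))
```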

Whatever the answer there, Chris’s insightful remarks and the links he provided have certainly improved my grasp of things in a number of areas, for which I’m very grateful.

4 thoughts on “More not the same”

  1. Computers do represent things, just not at the low level you would see by looking at circuits. Rather, things are represented in software constructs, several levels above the hardware. Surely the same could be said of how the brain represents things, except for the lack of such a clear distinction between hardware and software. And spike trains are indeed a most interesting blend of analog and digital (see the sketch after these comments).

  2. On the issue of processing speed, I don’t think it’s reasonable to make a comparison at this point. In a computer, we are certain of how information is coded and manipulated, so we can determine the rate of manipulation with precision. In brains, we don’t know precisely how information is coded or manipulated. We can be fairly certain that things like the frequency of neuron firing have something to do with it, but we can also be fairly certain that that is only part of a more complex process which is not yet fully understood. Without knowing exactly how information is coded, we can’t tell the rate at which it is being manipulated in the brain.

  3. “Computers don’t really do representation” (as humans and animals do) is an interesting subject regarding the nature of life and consciousness.
    Representations in humans and animals are built up by the organisms to satisfy goals and constraints relative to their nature (a mouse has a representation of a cat in order to satisfy a vital constraint). These representations are meaningful: we can say they have meaning relative to the constraints of the organism that generates them (staying alive, for the mouse).
    A computer contains representations, but the computer has no constraint to satisfy relative to its own nature (no intrinsic constraint). We can say that representations have no meaning for computers. However, representations hosted by computers can be meaningful for the programmer or user (this post has some meaning for me).
    Can we imagine computers with intrinsic constraints, so that they could manage meaningful information and representations as we humans do? Was HAL (in 2001: A Space Odyssey) capable of this? Such questions bring us back to the nature of life and consciousness, via subjects such as autonomy, free will, meaning generation and the nature of the self.
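
As a footnote to the first comment above, here is a minimal sketch (my own, with made-up numbers) of the ‘blend of analog and digital’ in a spike train: under a simple rate-coding assumption, each spike is an all-or-nothing digital event, while the firing rate carries a continuously graded value.

```python
import numpy as np

rng = np.random.default_rng(7)

def poisson_spike_train(rate_hz, duration_s=1.0, dt=0.001):
    """Digital events (a spike either happens or it doesn't, per time bin)
    whose frequency carries a continuously graded, analog-like value."""
    return rng.random(int(duration_s / dt)) < rate_hz * dt

for stimulus in (0.2, 0.5, 0.9):     # invented analog stimulus levels
    train = poisson_spike_train(rate_hz=100 * stimulus)
    print(f"stimulus {stimulus:.1f} -> {train.sum()} spikes/s (all-or-nothing events)")
```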
