On the level about computers and minds

[Picture: cogs]

Ari N. Schulman had an interesting piece in The New Atlantis recently on Why Minds Are Not Like Computers. Very briefly, his view is that the aims of the strong AI project have quietly become less ambitious over time. In particular, from aiming to find the algorithms which directly generate high-level consciousness in one fell swoop, researchers have turned to simulating lower-level mechanisms and modules; in some cases they’ve gone a level further and are attempting to simulate the brain at the neuronal level. Who knows, they might end up trying to do it at the molecular or subatomic level, he says; but the point is that at these low levels the game is already lost: even if the simulation runs, we still won’t understand what’s happening on the higher levels. If we have to go to the level of simulating individual neurons, the original claim that the brain works the way a computer does has implicitly been abandoned.

Schulman thinks that a misleading application of an analogy between computers and the mind is key to the problem. In computers we have the well-established set of layers which runs from high-level languages through machine code all the way down to physical transistors. Researchers assumed, he thinks, that they would in effect be able to reverse-engineer the source code of the brain and come up with high-level scripts which explicated the mechanisms of consciousness; and that they would be able to do it simply by ensuring that the input-output relationships were reproduced, without having to worry about whether the hidden inner mechanisms of their version were actually the same as nature’s. But it never happened, and as time goes by it seems less and less likely that those algorithms will ever be found; it looks as if the mind just isn’t like that after all.
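To see why the layering makes the reverse-engineering hope so optimistic, here is a small illustration (my example, not Schulman’s): the same one-line function viewed at the high level, where the intent is legible, and one layer down, where it already isn’t.

    import dis

    # High level: the intent of the computation is easy to read.
    def fahrenheit(celsius):
        return celsius * 9 / 5 + 32

    # One layer down: the same computation as a stream of stack-machine
    # instructions, from which the original intent is much harder to recover.
    dis.dis(fahrenheit)

Running this prints the bytecode (LOAD_FAST, BINARY_OP and so on); recovering “convert Celsius to Fahrenheit” from that listing alone is already non-trivial, and the brain offers us nothing but the bottom layer.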

Although there’s something recognisable about that, I’m not sure this is a completely accurate account historically. It is certainly true that the misplaced early optimism of Good Old-Fashioned Artificial Intelligence is a thing of the past – but that’s hardly breaking news. I’m not sure that even in the most upbeat period people thought they could do the entire mind in one go: even then they surely looked to start with simplified tasks and build single modules. It’s just that they thought these modules and tasks would be dealt with more quickly and easily than they really were. The recent emergence of projects like Blue Brain, which seek to simulate the neuronal-level workings of the brain, is also less a sign of lack of confidence in AI and more a sign of growing confidence in the number-crunching power of the computers now available. I don’t think such projects are exactly typical of where things stand these days in any case.

Still, are the claims plausible? Of course, working out the high-level code, or even recognising the general drift of a program, from looking only at the machine-code level is not at all an easy business, so the fact that looking at neurons for many years has not yet led us to a general theory of consciousness is not necessarily a sign that it never will. Schulman does not, like Searle (whose views he discusses), take the view that something about the physical stuff of brains is essential; his objection to functionalism seems merely to be that it hasn’t worked yet. Perhaps we just need more patience.

We also need to be a little careful about the diagnosis. There are actually different ways of dividing the whole business into levels: one is the programmer-facing way on which Schulman mainly focuses; another is the user-facing one. Here the bottom level is made up of meaningless symbols; somewhere in the middle is organised data; and at the top is meaningful information. Surely it’s here that the aim of AI researchers has been focussed; they expected consciousness to emerge, not as high-level program code, but as outputs which mean something to a human being, or which ‘make sense’ in the context of a task. Ultimately, for consciousness, the outputs have to make sense to the machine itself. If some form of computationalism can deliver these results, I don’t think the absence of a high-level theory in the other sense would indicate philosophical failure.
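A crude sketch of those user-facing levels (my illustration, with an invented reading of the bytes): the same four bytes can be taken as bare symbols, as organised data, or as information that means something in context.

    import struct

    raw = b'\x00\x00\xb4\x41'                    # bottom level: uninterpreted symbols
    value, = struct.unpack('<f', raw)            # middle level: organised data, a 32-bit float (22.5)
    print(f'Cabin temperature: {value:.1f} °C')  # top level: information meaningful to a person

Nothing in the bytes themselves fixes the top-level reading; it is the surrounding context – the task, the user – that makes the output mean something.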

Even if we do ultimately have to go beyond a narrow functionalist view, we need not abandon the overall quest. We should perhaps hang on to the distinction between consciousness as computation and consciousness as computable. The idea that the mind actually is just the programs running in the brain may look less plausible than it did; the idea that programs running somewhere might sustain a mind might yet be getting a second wind. It might be too narrow a view of clocks to say that they are nothing more than cogs, springs, and other pieces of mechanism; the works don’t tell us what the essence of a clock is. But the ironmongery is all we need to make a clock, and perhaps we could make one that worked before we fully understood the principle of the escapement mechanism…?

6 thoughts on “On the level about computers and minds”

  1. The entire AI enterprise rests on a spurious equivalence
    between numbers (abstract objects we understand in our minds)
    and numerals (the physical things that computers crunch).
    See ‘Immaterial Aspects of Thought’ by Ross at
    http://www.jstor.org/pss/2026790

    To put it succinctly, strong AI is akin to a flight simulator flying.

    A similar argument is made by Oderberg in
    ‘Concepts, Dualism, and the Human Intellect’ at
    http://www.reading.ac.uk/AcaDepts/ld/Philos/dso/papers/Concepts,%20Dualism%20and%20Human%20Intellect.pdf

  2. That may be, but it would be a mistake to equate AI (especially “strong AI”) with anything that can be done with computers.

  3. I always thought that those arguments claiming that “a simulation of X is not X” are self-contradictory when applied to AI.

    – Say that we agree that a task X requires intelligence.
    – Suppose we have a “simulated intelligence” that can solve X.
    – Then the simulation is intelligent; for if we say that the simulation is not intelligent, but just a simulation, then X didn’t require intelligence in the first place (because the simulation can solve X).

    If we apply that argument to every single task, then either a simulation can be intelligent, or there exists no single task that requires intelligence… (spelled out formally below).
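    To make the dilemma explicit, here is one formal paraphrase (my notation; the predicate names are invented): write Solves(s, X) for “s solves task X” and Int(s) for “s is intelligent”. Then

        \text{“X requires intelligence”} \;\equiv\; \forall s\,(\mathrm{Solves}(s, X) \rightarrow \mathrm{Int}(s))

        \mathrm{Solves}(\mathit{sim}, X) \;\Rightarrow\; \mathrm{Int}(\mathit{sim}) \;\lor\; \neg\,\forall s\,(\mathrm{Solves}(s, X) \rightarrow \mathrm{Int}(s))

    Either the simulation is intelligent, or the task never required intelligence; the argument leaves no third option.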

  4. Here are some of my thoughts:

    – You can create all of the computer simulations you want, but it is akin to creating a camera which simulates the human eye. The AI simulations may simulate thought, but they are still being created by human beings, in the same manner that human beings create artificial eyes. If you took the AI software and integrated it into the human being, as you would an artificial eye, you could enhance the human thought process in a way that engages qualia.

    – All computer languages, from the highest levels down, become the “ghost in the machine”. Although the millions of machine-level bits are “information”, in reality the “information” is something else… they are timing states.

  5. VicP: (If I may) What is often called computer simulation would more properly be referred to as modeling. This is what Searle was talking about when he said you would never get wet from a simulated rainstorm. Besides being a representation of some reality, modeling is often used for prediction, which means it is inherently inaccurate.

    But suppose you use the computer to do your income tax and it says you have a $1000 refund coming. You cannot take the printout to the bank and cash it or deposit it. It’s still modeling in that sense. But now it had better be accurate or you may hear from the auditors.

    On the other hand, if you are using a program from http://www.stamps.com and you print out $100 of postage, you can indeed spend it (at least at the Post Office). If you use online banking, you can cause real money to be moved around. That’s not modeling. And you can go to any of thousands of shopping websites and spend real money and have real items delivered to your door.

    Now, consider the engine control unit in your car. It measures the air temperature and pressure and lots of other stuff and computes exactly when the spark plug should fire to get the best horsepower. Inside the ECU computer, the measurements are simulated by numbers in the memory, just as was the case in modeling the rainstorm. But when the calculations are finished, the results are used to affect real-world events: your engine either runs properly or it coughs and dies. (A toy sketch of such a control loop appears at the end of this comment.)

    In the car’s ECU, the computed results are sent to a little magnet in the distributor which switches the spark plug. Devices with the function of that little magnet are known more generally as transducers. Well, it turns out that in your brain, some parts, such as the pituitary and the hypothalamus, also act as transducers. They receive neural impulses from various parts of the brain (simulations or models of computed functions) and in turn produce various hormones and other chemicals which circulate in the blood and adjust various bodily functions, just as the ECU adjusted the spark plug.

    In other words, computers can do real things, just as your body can.
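    A toy sketch of that control loop (my illustration; the rule and every coefficient are invented, real ECUs use calibrated lookup tables):

        def spark_advance_degrees(air_temp_c, manifold_pressure_kpa, rpm):
            """Toy spark-timing rule; the numbers are made up for illustration."""
            advance = 10.0                                     # base advance, degrees before top dead centre
            advance += rpm / 1000.0 * 2.5                      # more advance at higher engine speed
            advance -= (manifold_pressure_kpa - 100.0) * 0.05  # retard under higher load
            advance += (20.0 - air_temp_c) * 0.02              # slight advance in colder air
            return advance

        def fire_coil(advance):
            # Transducer boundary: here the number stops being a model
            # and acts on the world, like the little magnet in the distributor.
            print(f'firing coil at {advance:.1f} degrees BTDC')

        # One pass of the loop: measurements in, numbers crunched
        # inside the "model", a real-world effect out the other end.
        fire_coil(spark_advance_degrees(air_temp_c=15.0, manifold_pressure_kpa=95.0, rpm=3000.0))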
