Posts tagged ‘Computation’

Yes, Dennett has recanted. Alright, he hasn’t finally acknowledged that Jesus is his Lord and Saviour. He hasn’t declared that qualia are the real essence of consciousness after all. But his new book From Bacteria to Bach and Back does include a surprising change of heart.

The book is big and complex: to be honest it’s a bit of a ragbag (and a bit of a curate’s egg). I’ll give it the fuller review it deserves another time, but this interesting point seems worth addressing separately.

The recantation arises from a point on which Dennett has changed his mind once before. This is the question of homunculi. Homunculi are ‘little people’, and the term is traditionally used to criticise certain kinds of explanation: the kind that assume some module in the brain is just able to do everything a whole person could do. Those modules are effectively ‘little people in your head’, and they require just as much explanation as your brain did in the first place. At some stage many years ago, Dennett decided that homunculi were alright after all, on certain conditions. The way he thought it could work was a hierarchy of ever stupider homunculi. Your eyes deliver a picture to the visual homunculus, who sees it for you; but we don’t stop there; he delivers it to a whole group of further colleagues: line-recognising homunculi, colour-recognising homunculi, and so on. Somewhere down the line we get to an homunculus whose only job is to say whether a spot is white or not-white. At that point the function is fully computable and our explanation can be cashed out in entirely non-personal, non-mysterious, mechanical terms. So far so good, though we might argue that Dennett’s ever stupider routines are not actually homunculi in the correct sense of being complete people; they’re more like ‘black boxes’, perhaps: a stage of a process you can’t explain yet, but plan to analyse further.
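To make the bottom of that hierarchy concrete, here is a deliberately trivial sketch in Python (my own illustration, not Dennett’s; all the names are invented for the example): each level just delegates to stupider routines below it, until the lowest level is a purely mechanical white/not-white test.

```python
# A toy sketch of the 'ever stupider homunculi' idea: each level delegates
# to simpler routines until the bottom level is a trivial, mechanical test.

def is_white(pixel_value, threshold=200):
    """Bottom-level 'homunculus': just says white / not-white."""
    return pixel_value >= threshold

def recognise_bright_region(row):
    """Mid-level routine: counts the white spots reported from below."""
    return sum(is_white(p) for p in row) > len(row) // 2

def visual_homunculus(image):
    """Top-level 'viewer': delegates all the seeing to the levels below."""
    return [recognise_bright_region(row) for row in image]

image = [[250, 240, 10], [12, 30, 25], [220, 230, 240]]
print(visual_homunculus(image))   # [True, False, True]
```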

Be that as it may, he now regrets taking that line. The reason is that he no longer believes that neurons work like computers! This means that even at the bottom level the reduction to pure computation doesn’t quite work. The reason for this remarkable change of heart is that Terrence Deacon and others have convinced Dennett that the nature of neurons as entities with metabolism and a lifecycle is actually relevant to the way they work. The fact that neurons, at some level, have needs and aims of their own may ground a kind of ‘nano-intentionality’ that provides a basis for human cognition.

The implications are large; if this is right then surely computation alone cannot give rise to consciousness! You need metabolism and perhaps other stuff. That Dennett should be signing up to this is remarkable, and of course he has a get-out. This is that we could still get computer consciousness by simulating an entire brain and reproducing every quirk of every neuron. For now that is well beyond our reach – and it may always be, though Dennett speaks with misplaced optimism about Blue Brain and other projects. In fact I don’t think the get-out works even on a theoretical level; simulations always leave out some aspect of the thing simulated, and if this biological view is sound, we can never be sure that we haven’t left out something important.

But even if we allow the get-out to stand, this is a startling change, and I’ve been surprised that no review of the book I’ve seen even acknowledges it. Does Dennett himself appreciate quite how large the implications are? It doesn’t really look as if he does. I would guess he thinks of the change as merely taking him a bit closer to, say, the evolution-based perspective of Ruth Millikan, not at all an uncongenial direction for him. I think, however, that he’s got more work to do. He says:

The brain is certainly not a digital computer running binary code, but it is still a kind of computer…

Later on, however, he rehashes the absurd, and surely digitally-computational, view he put forward in Consciousness Explained:

You can simulate a virtual serial machine on a parallel architecture, and that’s what the brain does… and virtual parallel machines can be implemented on serial machines…

This looks pretty hopeless in itself, by the way. You can do those things if you don’t mind doing something really egregiously futile. You want to ‘simulate’ a serial machine on a parallel architecture? Just don’t use more than one of its processors. The fact is, parallel and serial computers do exactly the same job: they run the same algorithms, compute the same functions, and deliver the same results; parallelism buys speed, not new capabilities. Parallel processing by computers is just a practical engineering tactic, of no philosophical interest whatever. When people talk about the brain doing parallel processing they are talking about a completely different and much vaguer idea, and often confusing themselves in the process. Why on earth does Dennett think the brain is simulating serial processing on a parallel architecture, a textbook example of pointlessness?
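By way of a trivial illustration of that equivalence (mine, not Dennett’s, using only Python’s standard concurrent.futures module): mapping the same function over the same inputs serially and then in parallel gives identical results; only the timing differs.

```python
# Serial and parallel execution of the same computation give the same
# answers; parallelism is an engineering tactic, not a different kind
# of computing.
from concurrent.futures import ProcessPoolExecutor

def f(x):
    return x * x + 1

if __name__ == '__main__':
    inputs = list(range(20))
    serial = [f(x) for x in inputs]          # one at a time, in order
    with ProcessPoolExecutor() as pool:      # many workers at once
        parallel = list(pool.map(f, inputs))
    assert serial == parallel                # identical results
    print(serial)
```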

It is true that the brain’s architecture is massively parallel… but many of the brain’s most spectacular activities are (roughly) serial, in the so-called stream of consciousness, in which ideas, or concepts or thoughts float by not quite in single file, but through a Von Neumann bottleneck of sorts…

It seems that Dennett supposes that only serial processing can deliver a serially coherent stream of consciousness, but that is just untrue. On display here too is Dennett’s bad habit of using ‘Von Neumann’ as a synonym for ‘serial’. As I understand it, the term ‘Von Neumann architecture’ actually relates to a rivalry between very early computer designs. Historically the Von Neumann design used the same storage for programs and data, while the more tidy-minded Harvard architecture provided separate storage. For general-purpose machines the competition was resolved in Von Neumann’s favour long ago and is as dead as a doornail. Either way, it simply has no relevance to the human brain: does the brain have a Von Neumann or a Harvard architecture? The only tenable answer is ‘no’.

Anyway, whatever you may think of that, if Dennett now says the brain is not a digital computer, he just cannot go on saying it has a Von Neumann architecture or simulates a serial processor. Simple consistency requires him to drop all that now – and a good thing too. Dennett has to find a way of explaining the stream of consciousness that doesn’t rely on concepts from digital computing. If he’s up for it, we might get something really interesting – but retreat to the comfort zone must look awfully appealing at this stage. There is, of course, nothing shameful in changing your mind; if only he can work through the implications a bit more thoroughly, Dennett will deserve a lot of credit for doing so.

More another time.

I liked this account by Bobby Azarian of why digital computation can’t do consciousness. It has several virtues: it’s clear, it identifies the right issues, and it is honest about what we don’t know (rather than passing off the author’s own speculations as the obvious truth or the emerging orthodoxy). Also, remarkably, I almost completely agree with it.

Azarian starts off well by suggesting that lack of intentionality is a key issue. Computers don’t have intentions and don’t deal in meanings, though some put up a good pretence in special conditions. Azarian takes a Searlian line by relating the lack of intentionality to the maxim that you can’t get meaning-related semantics from mere rule-bound syntax. Shuffling digital data is all computers do, and that can never lead to semantics (or any other form of meaning or intentionality). He cites Searle’s celebrated Chinese Room argument (actually a thought experiment) in which a man given a set of rules that allow him to provide answers to questions in Chinese does not thereby come to understand Chinese. But, the argument goes, if the man cannot gain understanding by following rules, then a computer can’t either. Azarian mentions one of the objections Searle himself first named, the ‘systems reply’: this says that the man doesn’t understand, but a system composed of him and his apparatus does. Searle really only offered rhetoric against this objection, and in my view the objection is essentially correct. The answers the Chinese Room gives are not answers from the man, so why should his lack of understanding show anything?
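As a toy illustration of pure symbol shuffling (my own sketch, not Searle’s or Azarian’s), here is a ‘room’ that simply pairs question strings with answer strings by lookup; nothing in it has any grasp of what the symbols mean.

```python
# A toy 'Chinese Room': the rule book pairs incoming strings with outgoing
# strings, and the program shuffles the symbols without understanding them.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天星期几？": "今天星期二。",      # "What day is it?" -> "It's Tuesday."
}

def room(question: str) -> str:
    # Match the incoming squiggles against the rules and hand back the
    # prescribed squoggles: syntax in, syntax out.
    return RULE_BOOK.get(question, "对不起，我不明白。")   # "Sorry, I don't understand."

print(room("你好吗？"))
```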

Still, although I think the Chinese Room fails, I think the conclusion it was meant to establish – no semantics from syntax – turns out to be correct, so I’m still with Azarian. He moves on to make another Searlian point: simulation is not duplication. Searle pointed out that nobody gets wet from digitally simulated rain, and hence simulating a brain on a computer should not be expected to produce consciousness. Azarian gives some good examples.

The underlying point here, I would say, is that a simulation always seeks to reproduce some properties of the thing simulated, and drops others which are not relevant for the purposes of the simulation. Simulations are selective and ontologically smaller than the thing simulated – which, by the way, is why Nick Bostrom’s idea of indefinitely nested world simulations doesn’t work. The same thing can, however, be simulated in different ways depending on what the simulation is for. If I get a computer to simulate me doing arithmetic by calculating, then I get the correct result. If it simulates me doing arithmetic by operating a humanoid figure that writes random characters on a board with chalk, it doesn’t – although the latter kind of simulation might be best if I were putting on a play. It follows that Searle isn’t necessarily exactly right, even about the rain. If my rain simulation program turns on sprinklers at the right stage of a dramatic performance, then that kind of simulation will certainly make people wet.
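The arithmetic example can be put in code (a sketch of my own, not from the post): two ‘simulations’ of me doing a sum, one of which reproduces the property that matters for getting the answer right, and one of which reproduces only the look of the thing.

```python
# Two simulations of the same activity, selecting different properties.
import random

def simulate_by_calculating(a, b):
    # reproduces the one property we care about: the arithmetic itself
    return a + b

def simulate_for_the_stage(a, b):
    # looks like someone doing a sum, but reproduces no arithmetic at all;
    # fine for a play, useless for getting the answer
    return int(''.join(random.choice('0123456789') for _ in range(2)))

print(simulate_by_calculating(17, 25))   # 42
print(simulate_for_the_stage(17, 25))    # some random two-digit number
```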

Searle’s real point, of course, is that the properties a computer has in itself, those of running sets of rules, are not the relevant ones for consciousness, and Searle hypothesises that the required properties are biological ones we have yet to identify. This general view, endorsed by Azarian, is roughly correct, I think. But it can still plausibly be denied. What kind of properties does a conscious mind need? Alright, we don’t know, but might not information processing be relevant? It looks to a lot of people as if it might be, in which case that is what an effective brain simulator would need for consciousness. And what property does a digital computer have in itself? Why, the property of doing information processing. Booyah! So maybe we even need to look again at whether we can get semantics from syntax. Maybe, in some sense, syntactic operations can underpin processes which transcend mere syntax?

Unless you accept Roger Penrose’s claimed proof that human thinking is not algorithmic (it seems to have drifted off the radar in recent years), this means we’re still really left with a contest of intuitions, at least until we find out for sure what the magic missing ingredient for consciousness is. My intuitions are with Azarian, partly because the history of failure with strong AI looks to me very like a history of running up against the inadequacy of algorithms. But I reckon I can go further and say what the missing element is. The point is that consciousness is not computation; it’s recognition. Humans have taken recognition to a new level where we recognise not just items of food or danger, but general entities, concepts, processes, future contingencies, logical connections, and even philosophical ontologies. The process of moving from recognised entity to recognised entity by recognising the links between them is exactly the process of thought. But recognition, in us, does not work by comparing items with an existing list, as an algorithm might do; it works by throwing a mass of potential patterns at reality and seeing what sticks. Until something works, we can’t tell what counts as a pattern at all; the locks create their own keys.

It follows that consciousness is not essentially computational (I still wonder whether computation might not subserve the process at some level). But now I’m doing what I praised Azarian for avoiding, and presenting my own speculations…

Massimo Pigliucci issued a spirited counterblast to computationalism recently, which I picked up on MLU. He says that people too often read the Church-Turing thesis as if it said that a Universal Turing Machine could do anything that any machine could do. They then take that as a basis on which to help themselves to computationalism. He quotes Jack Copeland as saying that a myth has arisen on the matter, and citing examples where he feels that Dennett and the Churchlands have mis-stated the position. Actually, says Pigliucci, Turing merely tells us that a Universal Turing Machine can do anything a specific Turing machine can do, and that does not tell us what real-world machines can or can’t do.

It’s possible some nits are being picked here. Copeland’s reported view seems a trifle too puritanical in its refusal to look at wider implications; I think Turing himself would have been surprised to hear that his work told us nothing about the potential capacities of real-world digital computers. But of course Pigliucci is quite right that it doesn’t establish that the brain is computational. Indeed, Turing’s main point was arguably about the limits of computation, showing that there are problems that cannot be handled computationally. It’s part of our bedrock understanding of computation that there are many non-computable problems; apart from the original halting problem, the tiling problem may be the most familiar. Tiling problems are associated with the ingenious work of Roger Penrose (though the undecidability of the general tiling problem is due to Robert Berger), and Penrose, of course, published many years ago now what he claims is a proof that when mathematicians are thinking original mathematical thoughts they are not computing.
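For readers who haven’t met the halting problem, here is a minimal sketch of the diagonal argument behind it (my own illustration in Python; the function names are invented for the example): for any candidate halting-decider we can construct a program on which it must give the wrong answer.

```python
# The diagonal construction behind the halting problem, in miniature.

def make_counterexample(decider):
    """Given decider(f, x) -> bool that claims to predict whether f(x)
    halts, build a function that defeats it."""
    def paradox(x):
        if decider(x, x):          # the decider predicts x(x) halts...
            while True:            # ...so loop forever instead
                pass
        return "halted"            # ...otherwise halt immediately
    return paradox

# A (necessarily wrong) candidate decider: it claims everything halts.
def naive_decider(f, x):
    return True

p = make_counterexample(naive_decider)

# naive_decider predicts that p(p) halts, but by construction p(p) would
# loop forever; the same construction defeats any candidate decider.
print(naive_decider(p, p))   # True -- yet p(p) would never halt
```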

So really, Pigliucci’s moderate conclusion that computationalism remains an open issue ought to be uncontroversial. Surely no-one really supposes that the debate is over? Strangely enough there does seem to have been a bit of a revival in hard-line computationalism. Pigliucci goes on to look at pancomputationalism, the view that every natural process is instantiating a computation (or even all possible computations). This is rather like the point John Searle once made, that a window can be seen as a computer because it has two states, open and closed, which are enough to express a stream of binary digits. I don’t propose to go into that in any detail, except to say I think I broadly agree with Pigliucci that it requires an excessively liberal use of interpretation. In particular, I think in order to interpret everything as a computation, we generally have to allow ourselves to interpret the same physical state of the object as different computational states at different times, and that’s not really legitimate. If I can do that I can interpret myself into being a wizard, because I’m free to interpret my physical self as human at one time, a dragon at another, and a fluffy pink bunny at a third.
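Here is a toy illustration of that worry (my own sketch, not Pigliucci’s or Searle’s): if the mapping from physical state to computational state is allowed to change from one moment to the next, then a window that never moves can be read as computing whatever we like.

```python
# If the interpretation may vary with time, one unchanging physical object
# can be read off as performing any computation we choose.

def interpret(physical_history, mapping_per_step):
    """Read a 'computation' off a physical trace.

    physical_history : list of physical states (here, 'closed' every step)
    mapping_per_step : one dict per time step, each mapping a physical
                       state to a computational state (a bit)
    """
    return [mapping_per_step[t][s] for t, s in enumerate(physical_history)]

# The window never changes...
window = ['closed'] * 8

# ...but with a time-varying interpretation it 'computes' any byte we like.
target = [0, 1, 1, 0, 1, 0, 0, 1]
mappings = [{'closed': bit} for bit in target]

print(interpret(window, mappings))   # [0, 1, 1, 0, 1, 0, 0, 1]
```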

But without being pancomputationalists we might wonder why the limits of computation don’t hit us in the face more often. The world is full of non-computable problems, but they rarely seem to give us much difficulty. Why is that? One answer might be in the amusing argument put by Ray Kurzweil in his book How to Create a Mind. Kurzweil espouses a doctrine called the ‘Universality of Computation’, which he glosses as ‘the concept that a general-purpose computer can implement any algorithm’. I wonder whether that would attract a look of magisterial disapproval from Jack Copeland? Anyway, Kurzweil describes a non-computable problem known as the ‘busy beaver’ problem. The task is to work out, for a given value of n, the maximum number of ones left on the tape by any halting Turing machine with n states. The problem is uncomputable in general because, as the computer (a Universal Turing Machine) works through the simulation of all the machines with n states, it runs into some that never halt, and there is no general way of telling in advance which those are.
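To make the problem concrete, here is a minimal brute-force sketch in Python (my own, not Kurzweil’s): it enumerates every n-state, two-symbol Turing machine and simulates each up to a fixed step limit, treating anything that runs longer as non-halting. That cut-off is exactly where the non-computability bites; the limit used here happens to be ample only for the tiny cases n = 1 and n = 2.

```python
from itertools import product

def busy_beaver(n_states, step_limit=200):
    """Brute-force search over all n-state, 2-symbol Turing machines.

    Returns the largest number of 1s left on the tape by any machine that
    halts within step_limit steps.  Machines exceeding the limit are treated
    as non-halting, so in general this is only a lower bound; for n = 1 and
    n = 2 the limit is generous and the true values (1 and 4) are found.
    """
    HALT = 'H'
    # A transition is (symbol to write, head move, next state).
    actions = list(product((0, 1), (-1, +1), list(range(n_states)) + [HALT]))
    # One transition is needed for each (current state, symbol read) pair.
    keys = list(product(range(n_states), (0, 1)))

    best = 0
    for choice in product(actions, repeat=len(keys)):
        table = dict(zip(keys, choice))
        tape, state, pos = {}, 0, 0            # sparse tape, blanks are 0
        for _ in range(step_limit):
            if state == HALT:
                best = max(best, sum(tape.values()))
                break
            write, move, next_state = table[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            state = next_state
    return best

print(busy_beaver(1))   # 1
print(busy_beaver(2))   # 4
```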

So, says Kurzweil, is this an example of the terrible weakness of computers when set against the human mind? Yet for many values of n it happens that the problem is solvable, and as a matter of fact computers have solved many such particular cases – many more than have actually been solved by unaided human thought! I think Turing would have liked that; it resembles points he made in his famous 1950 paper, ‘Computing Machinery and Intelligence’.

Standing aside from the fray a little, the thing that really strikes me is that the argument seems such a blast from the past. This kind of thing was chewed over with great energy twenty or even thirty years ago, and in some respects it doesn’t seem as important as it used to. I doubt whether consciousness is purely computational, but it may well be subserved, or be capable of being subserved, by computational processes in important ways. When we finally get an artificial consciousness, it wouldn’t surprise me if the heavy lifting is done by computational modules which either relate to each other in a non-computational way or rely on non-computational processing, perhaps for pattern recognition. Kurzweil would surely hate the idea that that key process might not be computed. Either way, I doubt whether the proud inventor on that happy day will be very concerned with the question of whether his machine is computational or not.