Is intentionality non-computable?

I undertook to flesh out my intuitive feeling that intentionality might in fact be a non-computable matter. It is a feeling rather than an argument, but let me set it out as clearly (and appealingly) as I can.

First, what do I mean by non-computability? I’m talking about the standard, uncontroversial variety of non-computability exhibited by the halting problem and various forms of tiling problem. The halting problem concerns computer programs. If we think about all possible programs, some run for a while and stop, while others run on forever (if for example there’s a loop which causes the program to run through the same instructions over and over again indefinitely). The question is, is there any computational procedure which can tell which is which? Is there a program which, when given any other program, can tell whether it halts or not? The answer is no; it was famously proved by Turing that there is no such program, and that the problem is therefore undecidable or non-computable.
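
To make the shape of Turing’s argument a little more concrete, here is a minimal sketch in Python-flavoured pseudocode. The names halts and contrary are my own illustrative inventions, not anything from a standard library; the point is just that any supposed general halting oracle can be turned against itself.

```python
# Sketch of the argument: suppose, for contradiction, that a function
# halts(program, argument) always correctly reported whether
# program(argument) eventually stops.

def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually stops."""
    raise NotImplementedError  # Turing's result: no such general procedure exists

def contrary(program):
    # Ask the oracle about a program run on its own source,
    # then do the opposite of whatever it predicts.
    if halts(program, program):
        while True:
            pass    # loop forever if the oracle says we would halt
    else:
        return      # halt at once if the oracle says we would loop

# Now consider contrary(contrary): it halts exactly when the oracle
# says it doesn't, so no correct halts() can exist.
```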

Some important qualifications should be mentioned. First, programs that stop can be identified computationally; you just have to run them and wait long enough. The problem arises with programs that don’t halt; there is no general procedure by which we can identify them all. However, second, it’s not the case that we can never identify a non-stopping program; some are obvious. Moreover, when we have identified a particular non-stopping program, we can write programs designed to spot that particular kind of non-stopping. I think this was the point Derek was making in his comment on the last post, when he asked for an example of a human solution that couldn’t be simulated by computer; there is indeed no example of human discovery that couldn’t be simulated – after the fact. But that’s a bit like the blind man who claims he can find his way round a strange town just as well as anyone else; he just needs to be shown round first. We can actually come up with programs that are pretty good at spotting non-stopping programs for practical purposes, but never one that spots them all.
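
As a toy illustration of spotting one particular kind of non-stopping, here is a sketch using a miniature instruction set I have invented for the purpose (nothing here is a real tool or library): if a program’s complete state ever repeats exactly, it is certainly looping, and we can say so; otherwise we have to give up honestly.

```python
def runs_forever(program, max_steps=10_000):
    """Return True if a repeated state proves the program loops,
    False if it halts, or None if we simply had to give up."""
    state = (0, 0)            # (instruction pointer, accumulator)
    seen = set()
    for _ in range(max_steps):
        ip, acc = state
        if ip >= len(program):
            return False      # fell off the end: it halted
        if state in seen:
            return True       # exact state repeated: guaranteed loop
        seen.add(state)
        op, arg = program[ip]
        if op == "add":
            state = (ip + 1, acc + arg)
        elif op == "jmp":
            state = (ip + arg, acc)
    return None               # inconclusive: the honest third answer

print(runs_forever([("jmp", 0)]))               # True: a bare infinite loop
print(runs_forever([("add", 1)]))               # False: it halts
print(runs_forever([("add", 1), ("jmp", -1)]))  # None: this one loops too,
                                                # but its state never repeats
```

The third example is the moral of the story: the counter keeps growing, so no state ever recurs, and this particular detector is blind to it; a different trick would catch that case, but some programs will always slip past any fixed collection of tricks.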

Tiling problems are really an alternative way of looking at the same issue. The problem here is: given a certain set of tiles, can we cover a flat plane with them indefinitely without any gaps? The original tiles used for this kind of problem were squares with coloured edges, and an additional requirement was that colours must match where tiles meet. At first glance, it looks as though different sets of tiles would fall into two groups: those that don’t tile the plane at all, because the available combinations of colours can’t be made to match up satisfactorily without gaps; and those that tile it with a repeating pattern. But this is not the case; in fact there are sets of tiles which will tile the plane, but only in such a way that the pattern never repeats. The early sets of tiles with this property were rather complex, but later Roger Penrose devised a non-square set which consists of only two tiles.

The existence of such ‘non-periodic’ tilings is the fly in the ointment which essentially makes it impossible to come up with a general algorithm for deciding whether or not a given set of tiles will tile the plane. Again, we can spot some that clearly don’t, some that obviously do, and indeed some that demonstrably only do so non-periodically; but there is no general procedure that can deal with all cases.
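
For what it’s worth, here is a rough sketch of the naive approach, using edge-coloured square tiles of the original Wang kind (the encoding and the function name are my own). A backtracking search can decide whether a finite n-by-n patch can be tiled; failure at some finite size proves the set cannot tile the plane, but success at every size we have the patience to try settles nothing about the infinite plane, and that is exactly where the non-periodic sets defeat the procedure.

```python
def tiles_square(tiles, n):
    """Each tile is a (top, right, bottom, left) tuple of edge colours.
    Return True if an n-by-n patch can be tiled with all edges matching."""
    grid = [[None] * n for _ in range(n)]

    def place(pos):
        if pos == n * n:
            return True
        r, c = divmod(pos, n)
        for tile in tiles:
            top, right, bottom, left = tile
            if r > 0 and grid[r - 1][c][2] != top:    # match the tile above
                continue
            if c > 0 and grid[r][c - 1][1] != left:   # match the tile to the left
                continue
            grid[r][c] = tile
            if place(pos + 1):
                return True
            grid[r][c] = None                         # backtrack
        return False

    return place(0)

# A tile with the same colour on every edge trivially tiles any patch;
# a lone tile whose bottom colour matches no available top cannot.
print(tiles_square([(0, 0, 0, 0)], 4))    # True
print(tiles_square([(0, 1, 2, 1)], 2))    # False: the 2 edge never meets a 0
```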

I mentioned Roger Penrose; he, of course, has suggested that the mathematical insight or creativity which human beings use is provably a non-computable matter, and I believe it was contemplation of how the human brain manages to spot whether a particular tricky set of tiles will tile the plane that led him to this view (that’s not to say that human brains have an infallible ability to tell whether sets of tiles tile the plane, or whether computations halt). Penrose suggests that mathematical creativity arises in some way from quantum interactions in microtubules; others reject his theory entirely, arguing, for example, that the brain simply has a very large set of different good algorithms which, when deployed flexibly or in combination, look like something non-computational.

I should like to take a slightly different tack. Let’s consider the original frame problem. This was a problem for AI programs dealing with dynamic environments, where the position of objects, for example, might change. The program needed to keep track of things, so it needed to note when some factor had changed. It turned out, however, that it also needed to note all the things that hadn’t changed, and the list of things to be noted at every moment could rapidly become unmanageable. Daniel Dennett, perhaps unintentionally, generalised this into a broader problem where a robot was paralysed by the combinatorial explosion of things to consider or to rule out at every step.
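
As a crude illustration of why the bookkeeping explodes, here is a toy sketch; the facts and the single action are my own invented examples, not anything from the frame problem literature. Updating one fact obliges a naive representation to assert explicitly that every other fact stayed the same, and that list of ‘frame axioms’ grows with every fact tracked and every action available.

```python
# A handful of facts about a toy world.
facts = {
    "cup_on_table": True,
    "door_open": False,
    "light_on": True,
    "robot_holding_cup": False,
}

def pick_up_cup(facts):
    """Apply one action, and spell out what it leaves untouched."""
    updates = {"cup_on_table": False, "robot_holding_cup": True}
    # The awkward part: one explicit non-change assertion per untouched fact.
    frame_axioms = [f"{name} is unchanged by pick_up_cup"
                    for name in facts if name not in updates]
    return {**facts, **updates}, frame_axioms

new_facts, axioms = pick_up_cup(facts)
print(len(axioms))   # only 2 here; in a realistic world the list is enormous
```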

Aren’t these problems in essence a matter of knowing when to stop, of being able to dismiss whole regions of possibility as irrelevant? Could we perhaps say the same of another notorious problem of cognitive science: Quine’s famous indeterminacy of radical translation? We can never be sure what the word ‘Gavagai’ means, because the list of possible interpretations goes on forever. Yes, some of the interpretations are obviously absurd – but how do we know that? Isn’t this, again, a question of somehow knowing when to stop, of being able to see that the process of considering whether ‘Gavagai’ means ‘rabbit or more than two mice’, ‘rabbit or more than three mice’ and so on isn’t suddenly going to become interesting?

Quine’s problem bears fairly directly on the problem of meaning, since the ability to see the meaning of a foreign word is not fundamentally different from the ability to see the meaning of words per se. And it seems to me a general property of intentionality, that to deal with it we have to know when to stop. When I point, the approximate line from my finger sweeps out an indefinitely large volume of space, and in principle anything in there could be what I mean; but we immediately pick out the salient object, beyond which we can tell the exploration isn’t going anywhere worth visiting.

The suggestion I wanted to clarify, then, is that the same sort of ability to see where things are going underlies both our creative capacity to spot instances of programs that don’t halt, or sets of tiles that cover the plane, and our ability to divine meanings and deal with intentionality. This would explain why computers have never been able to surmount their problems in this area and remain in essence as stolidly indifferent to real meaning as machines that never manipulated symbols.

Once again, I’m not suggesting that humans are infallible in dealing with meaning, nor that algorithms are useless. By providing scripts and sets of assumptions, we can improve the performance of AI in variable circumstances; by checking the other words in a piece of text, we can improve the ability of programs to do translation. But even if we could bring their performance up to a level where it superficially matched that of human beings, it seems there would still be radically different processes at work, processes that look non-computational.

Such is my feeling, at least; I certainly have no proof and no idea how the issue could even be formalised in a way that rendered it susceptible to proof. I suppose being difficult to formalise rather goes with the territory.

Whole Brain Emulation

Robots.net recently featured the Whole Brain Emulation Roadmap (pdf) produced by the Future of Humanity Institute at Oxford University. The Future of Humanity Institute has a transhumanist tinge which I find slightly off-putting, and it does seem to include fiction among its inspirations, but the Roadmap is a thorough and serious piece of work, setting out in summary at least the issues that would need to be addressed in building a computer simulation of an entire human brain. Curiously, it does not include any explicit consideration of the Blue Brain project, even in an appendix on past work in the area, although three papers by Markram, including one describing the project, are cited.

One interesting question considered is: how low do you go? How much detail does a simulation need to have? Is it good enough to model brain modules (whatever they might be), neuronal groups of one kind or another, neurons themselves, neurotransmitters, quantum interactions in microtubules? The roadmap introduces the useful idea of scale separation; there might be one or more levels where there is a cut-off, and a simulation in terms of higher level entities does not need to be analysed any further. Your car depends on interactions at a molecular level, but in order to understand and simulate it we don’t need to go below the level of pistons, cylinders, etc. Are there any cut-offs of this kind in the brain? The roadmap is not meant to offer answers, but I think after reading it one is inclined to think that there is probably a cut-off somewhere below neuronal level; you probably need to know about different kinds of neurotransmitters, but probably don’t need to track individual molecules. Something like this seems to have been the level on which the Blue Brain project settled.

The roadmap merely mentions some of the philosophical issues. It clearly has in mind the uploading of an individual consciousness into a computer, or the enhancement or extension of a biological brain by adding silicon chips, so an issue of some importance is whether personal identity could be preserved across this kind of change. If we made a computer copy of Stephen Hawking’s brain at the moment of his death, would that be Stephen Hawking?

The usual problem in discussions of this issue is that it is easy to imagine two parallel scenarios; one in which Hawking dies at the moment of transition (perhaps the destruction of his brain is part of the process), and one in which the exact same simulation is created while he continues his normal life. In the first case, we might be inclined to think that the simulation was a continuation, in the latter case it’s more difficult; yet the simulation in both cases is the same. My inclination is to think that the assertion of continuing identity in the first case is loose; we may choose to call it Hawking, but even if we do, we have to accept that it’s Hawking put through a radical alteration.

Of course, even if the simulation hasn’t got Hawking’s personal identity, having a simulation of his brain (or even one which was only 80% faithful) would be a fairly awesome thing.

The roadmap provides a useful list of assumptions. One of these is:

Computability: brain activity is Turing-computable, or if it is uncomputable, the uncomputable aspects have no functionally relevant effects on actual behaviour.

I’ve come to doubt whether this assumption is likely to hold. I cannot present a rigorous case, but in sloppy impressionistic terms the problem is as follows. Non-computable problems like the halting problem or the tiling problem seem intuitively to involve processes which, when tackled computationally, go on forever without resolution. Human thought is able to deal with these issues by ‘seeing where things are going’ without pursuing the process to the end.

Now it seems to me that the process of recognising meanings is very likely a matter of ‘seeing where things are going’ in much the same way. Computers don’t deal with meaning at all, although there are cunning ploys to get round this in the various areas where it arises. The problem may well be that meanings are indefinitely ambiguous; there are always more possible readings to be eliminated, and this might be why meaning is so intractable by computation.

Of course, apart from the hand-waving vagueness of that line of thought, it leaves me with the problem of explaining how the difficulty would manifest itself in the construction of a whole brain simulation; there would presumably have to be some properties of a biological brain which could never be accurately captured by a computational simulation. There are no doubt some fine details of the brain which could never be captured with perfect accuracy, but given the concept of scale separation, it’s hard to see how that alone would be a fatal problem.

When a whole brain simulation is actually attempted, the answer will presumably emerge; alas, according to the estimates in the roadmap, I may not live to see it.