Is intentionality non-computable?

Picture: tiles.

I undertook to flesh out my intuitive feeling that intentionality might in fact be a non-computable matter. It is a feeling rather than an argument, but let me set it out as clearly (and appealingly) as I can.

First, what do I mean by non-computability? I’m talking about the standard, uncontroversial variety of non-computability exhibited by the halting problem and various forms of tiling problem. The halting problem concerns computer programs. If we think about all possible programs, some run for a while and stop, while others run on forever (if for example there’s a loop which causes the program to run through the same instructions over and over again indefinitely). The question is, is there any computational procedure which can tell which is which? Is there a program which, when given any other program, can tell whether it halts or not? The answer is no; it was famously proved by Turing that there is no such program, and that the problem is therefore undecidable or non-computable.
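For concreteness, here is a minimal sketch of the argument in Python. The halts() function is purely hypothetical, which is exactly the point: assuming it could be written leads to a contradiction.

```python
# A sketch of the classic diagonal argument. halts() is the assumed,
# impossible-to-write decider; troublemaker() is the program that defeats it.

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical oracle: True if the program halts on the given input."""
    raise NotImplementedError("Turing proved this cannot be written")

def troublemaker(program_source: str) -> None:
    """Do the opposite of whatever halts() predicts about a program
    being run on its own source code."""
    if halts(program_source, program_source):
        while True:      # predicted to halt, so loop forever instead
            pass
    # predicted to run forever, so halt immediately

# Feed troublemaker its own source and you get a contradiction either way,
# so the assumed halts() oracle cannot exist.
```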

Some important qualifications should be mentioned. First, programs that stop can be identified computationally; you just have to run them and wait long enough. The problem arises with programs that don’t halt; there is no general procedure by which we can identify them all. However, second, it’s not the case that we can never identify a non-stopping program; some are obvious. Moreover, when we have identified a particular non-stopping program, we can write programs designed to spot that particular kind of non-stopping. I think this was the point Derek was making in his comment on the last post, when he asked for an example of a human solution that couldn’t be simulated by a computer; there is indeed no example of human discovery that couldn’t be simulated – after the fact. But that’s a bit like the blind man who claims he can find his way round a strange town just as well as anyone else; he just needs to be shown round first. We can in fact come up with programs that are pretty good at spotting non-stopping programs for practical purposes; but never one that spots them all.
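To illustrate those two partial strategies, here is a toy sketch (the step budget and the pattern check are invented for the example): running a program with a bound confirms the halters you are lucky enough to catch, and a crude pattern check spots one particular species of non-halter while missing all the rest.

```python
# Two partial strategies for the halting problem, illustrative only.

def halts_within(max_steps, step_fn, state):
    """Run an abstract machine for at most max_steps steps.
    step_fn(state) returns the next state, or None when the machine halts.
    True is definitive ("it halts"); False only means "not yet"."""
    for _ in range(max_steps):
        state = step_fn(state)
        if state is None:
            return True
    return False

def obviously_never_halts(source: str) -> bool:
    """Spot one particular, easily recognised kind of non-halter.
    It misses infinitely many other kinds."""
    return "while True:" in source and "break" not in source

# A countdown machine halts within the budget; the bare infinite loop is
# caught by the crude pattern check, but only because we wrote a check for it.
print(halts_within(100, lambda n: n - 1 if n > 0 else None, 10))   # True
print(obviously_never_halts("while True:\n    pass\n"))            # True
```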

Tiling problems are really an alternative way of looking at the same issue. The question here is: given a certain set of tiles, can we cover a flat plane with them indefinitely, without any gaps? The original tiles used for this kind of problem were squares with coloured edges, and an additional requirement was that colours had to match wherever tiles met. At first glance it looks as though sets of tiles should fall into two groups: those that don’t tile the plane at all, because the available combinations of colours can’t be made to match up without gaps, and those that tile it with a repeating pattern. But this is not the case; there are in fact sets of tiles which will tile the plane, but only in such a way that the pattern never repeats. The early sets of tiles with this property were rather large and complex, but Roger Penrose later devised a non-square set consisting of just two tiles.

The existence of such ‘non-periodic’ tilings is the fly in the ointment which essentially makes it impossible to come up with a general algorithm for deciding whether or not a given set of tiles will tile the plane. Again, we can spot some that clearly don’t, some that obviously do, and indeed some that demonstrably only do so non-periodically; but there is no general procedure that can deal with all cases.
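A sketch may help here, assuming the standard ‘Wang tile’ formulation: square tiles described by their four edge colours, with matching colours required across shared edges and no rotation allowed. Searching ever larger square patches gives a procedure that can sometimes prove a set fails, but can never, in general, confirm success.

```python
# A Wang tile here is (top, right, bottom, left) edge colours; neighbours
# must match along shared edges, and rotation is not allowed.

def tiles_square(tiles, n):
    """Backtracking search: can an n-by-n patch be tiled legally?"""
    grid = [[None] * n for _ in range(n)]

    def place(pos):
        if pos == n * n:
            return True
        r, c = divmod(pos, n)
        for t in tiles:
            if c > 0 and grid[r][c - 1][1] != t[3]:   # match left neighbour's right edge
                continue
            if r > 0 and grid[r - 1][c][2] != t[0]:   # match upper neighbour's bottom edge
                continue
            grid[r][c] = t
            if place(pos + 1):
                return True
            grid[r][c] = None
        return False

    return place(0)

print(tiles_square([("r", "r", "r", "r")], 4))   # True: one self-matching tile
print(tiles_square([("a", "a", "b", "a")], 2))   # False: no tile's top matches a bottom of "b"

# If tiles_square(tiles, n) is False for any n, the set cannot tile the plane.
# If it keeps succeeding as n grows, nothing conclusive follows; that is
# exactly the gap the non-periodic sets exploit.
```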

I mentioned Roger Penrose; he, of course, has suggested that the mathematical insight or creativity which human beings use is provably a non-computable matter, and I believe it was contemplation of how the human brain manages to spot whether a particular tricky set of tiles will tile the plane that led him to this view (that’s not to say that human brains have an infallible ability to tell whether sets of tiles tile the plane, or whether computations halt). Penrose suggests that mathematical creativity arises in some way from quantum interactions in microtubules; others reject his theory entirely, arguing, for example, that the brain simply has a very large set of good algorithms which, when deployed flexibly or in combination, look like something non-computational.

I should like to take a slightly different tack. Consider the original frame problem. This was a problem for AI programs dealing with dynamic environments, where the positions of objects, for example, might change. The program needed to keep track of things, so it needed to note when some factor had changed. It turned out, however, that it also needed to note all the things that hadn’t changed, and the list of things to be noted at every moment rapidly became unmanageable. Daniel Dennett, perhaps unintentionally, generalised this into a broader problem in which a robot is paralysed by the combinatorial explosion of things to consider, or to rule out, at every step.
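A toy version of the original bookkeeping problem, with an invented handful of facts, shows where the trouble comes from: every action generates one ‘changed’ note and an ever-growing pile of ‘unchanged’ ones.

```python
# Toy illustration of the bookkeeping the original frame problem forces:
# after each action we record not just the one changed fact but an explicit
# "unchanged" note for every other fact we are tracking.

facts = {
    "robot_at": "room_A",
    "box_at": "room_B",
    "door_open": False,
    "light_on": True,
}

def apply_action(facts, changed_key, new_value):
    log = [f"CHANGED: {changed_key} -> {new_value}"]
    # The naive scheme: every untouched fact gets its own frame axiom.
    for key, value in facts.items():
        if key != changed_key:
            log.append(f"UNCHANGED: {key} stays {value}")
    facts = dict(facts, **{changed_key: new_value})
    return facts, log

facts, log = apply_action(facts, "door_open", True)
print(log)
# With thousands of facts and long action sequences, the UNCHANGED entries
# are what swamp the system.
```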

Aren’t these problems in essence a matter of knowing when to stop, of being able to dismiss whole regions of possibility as irrelevant? Could we perhaps say the same of another notorious problem of cognitive science, Quine’s famous problem of the indeterminacy of radical translation? We can never be sure what the word ‘Gavagai’ means, because the list of possible interpretations goes on forever. Yes, some of the interpretations are obviously absurd, but how do we know that? Isn’t this, again, a question of somehow knowing when to stop, of being able to see that the process of considering whether ‘Gavagai’ means ‘rabbit or more than two mice’, ‘rabbit or more than three mice’, and so on isn’t suddenly going to become interesting?
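Just to make the open-endedness vivid, here is a toy enumerator in the spirit of those examples (the hypotheses are not meant seriously; the point is only that the stream never ends):

```python
from itertools import count

def gavagai_hypotheses():
    """An endless stream of candidate meanings for 'Gavagai'.
    Enumerating them is easy; knowing when the enumeration has stopped
    being worth pursuing is the hard part."""
    yield "rabbit"
    yield "undetached rabbit part"
    yield "temporal stage of a rabbit"
    for n in count(2):
        yield f"rabbit, or more than {n} mice"

for _, hypothesis in zip(range(6), gavagai_hypotheses()):
    print(hypothesis)
```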

Quine’s problem bears fairly directly on the problem of meaning, since the ability to see the meaning of a foreign word is not fundamentally different from the ability to see the meaning of words per se. And it seems to me a general property of intentionality that to deal with it we have to know when to stop. When I point, the approximate line from my finger sweeps out an indefinitely large volume of space, and in principle anything in it could be what I mean; but we immediately pick out the salient object, beyond which we can tell the exploration isn’t going anywhere worth visiting.
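Here is a toy sketch of the pointing case, with invented objects and an invented ‘salience’ number standing in for the part we don’t actually know how to compute:

```python
import math

def pointed_at(origin, direction, objects, cone_degrees=10.0):
    """Toy picker: among objects lying roughly along the pointing ray,
    return the most salient one. 'Salience' here is just a made-up number
    per object; assigning it is the genuinely hard part."""
    def angle_to(obj_pos):
        vx, vy = obj_pos[0] - origin[0], obj_pos[1] - origin[1]
        dot = vx * direction[0] + vy * direction[1]
        norms = math.hypot(vx, vy) * math.hypot(*direction)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

    candidates = [o for o in objects if angle_to(o["pos"]) <= cone_degrees]
    return max(candidates, key=lambda o: o["salience"], default=None)

objects = [
    {"name": "mug",   "pos": (2.0, 0.1), "salience": 0.9},
    {"name": "wall",  "pos": (9.0, 0.0), "salience": 0.2},
    {"name": "chair", "pos": (1.0, 3.0), "salience": 0.5},
]
print(pointed_at((0, 0), (1, 0), objects))   # the mug wins; the distant wall is ignored
```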

The suggestion I wanted to clarify, then, is that the same sort of ability to see where things are going underlies both our creative capacity to spot instances of programs that don’t halt, or sets of tiles that cover the plane, and our ability to divine meanings and deal with intentionality. This would explain why computers have never been able to surmount their problems in this area and remain in essence as stolidly indifferent to real meaning as machines that never manipulated symbols.

Once again, I’m not suggesting that humans are infallible in dealing with meaning, nor that algorithms are useless. By providing scripts and sets of assumptions, we can improve the performance of AI in variable circumstances; by checking the other words in a piece of text, we can improve the ability of programs to do translation. But even if we could bring their performance up to a level where it superficially matched that of human beings, it seems there would still be radically different processes at work, processes that look non-computational.
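For instance, a toy version of ‘checking the other words’, with a made-up two-sense lexicon, might look like the following; real translation systems are far more sophisticated, but the flavour is similar.

```python
# Toy illustration of "checking the other words": pick a rendering for an
# ambiguous word by counting overlaps with the surrounding words.
# The tiny lexicon below is invented for the example.

LEXICON = {
    "bank": {
        "riverbank": {"river", "water", "shore", "fishing"},
        "financial bank": {"money", "loan", "account", "deposit"},
    }
}

def choose_translation(word, context_words):
    senses = LEXICON[word]
    scores = {sense: len(clues & set(context_words))
              for sense, clues in senses.items()}
    return max(scores, key=scores.get)

print(choose_translation("bank", ["he", "opened", "an", "account", "at", "the", "bank"]))
# -> 'financial bank'
```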

Such is my feeling, at least; I certainly have no proof and no idea how the issue could even be formalised in a way that rendered it susceptible to proof. I suppose being difficult to formalise rather goes with the territory.

8 thoughts on “Is intentionality non-computable?”

  1. It is my belief that human computation is based on our ability to estimate degrees of similarity. To me, this clearly explains how we are able to learn the meanings of words and to understand what a finger is pointing at. Such mechanisms would also seem to be applicable to sorting out tiling and halting questions, although I am not at all sure how to go about setting up these problems other than the fact that I can read about them and, more or less, follow the arguments as presented. Neither my similarity detectors nor the logical reasoning techniques I have built up based on those detectors have anywhere near the capabilities of those of Drs. Penrose, Turing, or Goedel.

    I am tempted here to jump on your suggestion that “the same sort of ability to see where things are going underlies both …” and I believe your suspicion is correct, but at this point, I really cannot justify the argument.

    In an earlier post (Ouroboros, comment 3), I tried to argue for a connection between similarity detectors and scripts. To the extent that a script in the traditional AI sense contains a fragment of an algorithm, this was wrong. However, some script implementations do involve some aspects of similarity detection. It was this that I referred to.

    The fundamental question would be how to compute using similarity detectors. Obviously, there are some related requirements, such as the ability to form links between the discovered similarities.

    It is my belief that the overall architecture of the logical reasoning portions of the cortex would work something along the lines of the ACT system proposed by Anderson at CMU, but with similarity links replacing his explicit computational steps. Math would be based on successively nested metaphors, such as described by Lakoff and Nunez. Such a system would implement a kind of stack with an ability to suspend computations in one region where a subtask has been set up and proceed with another subtask in a separate cortical region, eventually (maybe) resuming computations in the original subtask before the short-term memories representing that subtask fade away.

  2. There is this (hypothetical) program with speech recognition that stops only when Peter says it will not stop, and never stops when Peter says it will. If I have understood the halting problem correctly, then humans fail the halting problem just as much as computers do. Please correct me if I’m wrong.

  3. Yes, Peter. It is indeed slippery. That is part of why it works and why we don’t (yet) know how to put it to work. Another author whose work appears to be compatible with these ideas (and someone you have written about here) is Jeff Hawkins. He has not (publicly) speculated as wildly as I did about how a reasoning mechanism might work, but his basic perceptual systems are certainly along the same lines.

    Michael, how do you suppose Goedel and Turing could ever have understood the halting problem if they had been constrained to deal with the issue in the same way as the computers they talked about? It seems clear that the human brain has a different way of approaching such problems.

  4. Lloyd, I just wanted to show that we also fail at non-computable problems. I’m not so deep into math, but from the explanations I have read, I understood the halting problem to be undecidable because of the sort of program I described above. I understood non-computable problems to be solvable neither by AIs nor by humans, which makes me doubt that intentionality is non-computable.

  5. As I understand it, humans may have problems with solving halting problems, but for very different reasons than a computer (that is, a computer that was following an “algorithm”). In other words, I believe that it is possible, or at least will be when we know how, to build a computer that does not use an algorithm in the sense that Goedel and Turing were talking about. Goedel said (as I understand it) that it is not possible to construct an algorithm that would always be able to decide correctly whether some other given algorithm would eventually halt. A human might or might not be able to decide (about the same target algorithm), but I believe the human would succeed or fail for very different reasons. And I believe, although rather less certainly, that it is possible to build a computer that computes in a way other than following a fixed algorithm. I know that this claim raises all sorts of issues of what an algorithm really is, but I believe Goedel meant it in a narrow way that can be worked around. My first defense would be that a non-algorithmic way of computing would always be getting new and unpredictable input. I believe this immediately rules out the Goedel type of algorithm.

  6. Michael, I was hoping to come up with an example, so I am glad you came up with this nice one, even though it is not completely correct. The output of the algorithm (halt or not halt) cannot change once it is “set in motion” (once it has taken its input and started the “calculation”). In particular, it cannot change its final state (halt or not halt) based on what Peter says. Otherwise you end up with a loop-back flip-flop digital circuit (an oscillator), which by design is never going to settle on either state (“halt” or “not halt”). That is not, I believe, the intent of the halting problem being discussed here.

    I did a search, and found at http://www.answers.com/topic/halting-problem an example that shows a human is not necessarily able to tell whether a certain algorithm will halt, as you may be happy to know. The example is a short program of several lines of pseudocode (probably fewer than 20 lines if you do it in C++) asking a computer to run until it comes across the first odd perfect number (a perfect number is an integer whose divisors, including 1 and itself, add up to twice the number; for example, the divisors of 6 are 6, 3, 2 and 1, and 6+3+2+1 = 2×6). A human cannot determine whether this algorithm will ever halt, because all known perfect numbers are even. It is still an unsolved mathematical problem whether an odd perfect number even exists. The best human mathematicians don’t know, so nobody knows. So we fail the halting test, at least for now, with this example.
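    For what it’s worth, the whole program really is only a few lines; here is a sketch in Python rather than C++ (illustrative only, and deliberately naive):

    ```python
    # Search for the first odd perfect number. Nobody knows whether this
    # halts; every perfect number found so far is even.

    def is_perfect(n: int) -> bool:
        """True if the divisors of n (including 1 and n) sum to 2*n."""
        return sum(d for d in range(1, n + 1) if n % d == 0) == 2 * n

    def first_odd_perfect():
        n = 1
        while True:
            if is_perfect(n):
                return n
            n += 2   # odd numbers only

    # first_odd_perfect()  # halts only if an odd perfect number exists
    ```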

    Even though I am inclined to agree with John Searle’s Chinese room argument, that true “understanding” cannot be achieved by following instructions, or an algorithm, I can see evidence of intentionality being computable. At least behaviorally it can appear computable; I mean “computable” in the sense that it is something one can simulate with an algorithmic machine.

    As an example, let’s look at emperor penguins instead of humans (unless you believe humans are fundamentally different from penguins in the area of intentionality, considering penguins is equally valid). Let’s look at their mating behavior. At a certain time of the year, depending on its age, a penguin will have the urge to swim towards the South Pole to mate. Along with this urge come the need to dance on the ice, the need to fight, the need to mate, the emotional need (our interpretation of their behavior) to take care of the egg once it is laid, the need to hate another penguin that tries to steal its egg, and the extreme pain felt if an egg is lost, so painful that the penguin feels the urge to steal another’s egg, and so on (yes, I am drawing on the movie “March of the Penguins”). One can easily imagine writing code for a virtual penguin in which the need to do a certain thing is quantified by a number, determined by some “external” factors: the higher that number, the more likely the virtual penguin is to do that thing. The life of the virtual penguin then revolves around the balancing act of satisfying all these numbers. The same goes for creating a virtual human, like a realistic version of the Sims family. Who can argue that a human life is not a balancing act of trying to satisfy all these different needs, internal or external, generating along the way all sorts of qualia, “decisions”, and “inventions”?
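    A toy sketch of that virtual penguin, with invented drives and factors, might be as simple as this:

    ```python
    import random

    # Each drive is just a number, nudged by external factors; behaviour is
    # then sampled in proportion to the drive levels.

    drives = {"migrate": 0.1, "dance": 0.2, "mate": 0.3, "guard_egg": 0.0, "feed": 0.4}

    def update_drives(drives, season, has_egg, hunger):
        drives["migrate"] += 0.5 if season == "autumn" else 0.0
        drives["guard_egg"] += 1.0 if has_egg else 0.0
        drives["feed"] += hunger
        return drives

    def choose_behaviour(drives):
        """Sample a behaviour with probability proportional to its drive level."""
        behaviours, weights = zip(*drives.items())
        return random.choices(behaviours, weights=weights, k=1)[0]

    drives = update_drives(drives, season="autumn", has_egg=False, hunger=0.6)
    print(choose_behaviour(drives))
    ```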

    Human behaviors are surprisingly predictable, according to some crime experts. I can see hints of algorithm at work here. As far as human creativity is concerned, necessity is the mother of creation. Human creativity is not unlike a champion chess computer playing against another champion chess-playing computer, with both “inventing” new moves in the process.

    For an autistic person, when Peter points his finger, that person may not pick out the salient object Peter intended, perhaps because the algorithm running in the brain is slightly different?

  7. Kar, check the “Recognizing partial solutions” part of the page you linked. There the “partial halting solver recognizer” (PHSR) is used within the program to produce a contradiction. If you compare software to humans, then I think in this case Peter would have to play the role of the PHSR, and so his “output” could indeed be used.

    By the way, I also think that the “when to stop” thing can be done by computers. I see it as a problem of selecting search-spaces and the depth of the searches within those spaces. No one doubts that computers can stop a search (for any number of reasons), but I see the selection of the search-spaces itself as just another search-space. While I don’t know the exact algorithm, it probably has a lot to do with comparing situations to previously experienced situations and selecting those search-spaces which seemed to have worked before. The hardest border is “surviving”, which originally reduced the broadest range of possible search-spaces. That hard border has been refined by evolution, so that humans already start with a more restricted range of search-spaces, and the borders are defined less lethally by triggers like pain and hunger. Humans don’t check every possible solution; the range checked is already restricted, for example, by the way neurons work. We have the high level of language, which not only allows a greatly simplified formulation of complex thoughts but also lets us do things like refine the search of places which might be of interest to someone pointing a finger into open space. Still, the pre-selection itself is rather simple. It’s done by the usual grouping of elements performed by neurons, based on things like similarity of colors, forms, etc., which are nowadays also used in image recognition. There’s some threshold below which objects are no longer of interest, and that’s when we lose the intention of considering them as possible targets. So intention is what’s left over after that pre-selection.
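    A toy version of that idea, with made-up situations and search-space names, could look something like this:

    ```python
    # Choosing which search-space to explore is itself a lookup, keyed by
    # similarity to previously experienced situations. Situations here are
    # crude feature sets; everything is invented for illustration.

    MEMORY = [
        ({"finger", "pointing", "table"}, "objects_near_the_pointing_line"),
        ({"word", "unknown", "sentence"}, "meanings_fitting_the_other_words"),
        ({"loud", "sudden", "dark"},      "threats_and_exits"),
    ]

    def similarity(a: set, b: set) -> float:
        return len(a & b) / len(a | b)   # Jaccard overlap

    def pick_search_space(situation: set) -> str:
        """Reuse whichever search-space worked in the most similar past situation."""
        best = max(MEMORY, key=lambda m: similarity(situation, m[0]))
        return best[1]

    print(pick_search_space({"finger", "pointing", "mug"}))
    # -> 'objects_near_the_pointing_line'
    ```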
