This is the first of four posts about key ideas from my book The Shadow of Consciousness. We start with the so-called Easy Problem, about how the human mind does its problem-solving, organism-guiding thing. If robots have Artificial Intelligence, we might call this the problem of Natural Intelligence.

I suggest that the real difficulty here is with what I call inexhaustible problems – a family of issues which includes non-computability, but goes much wider. For the moment all I aim to do is establish that this is a meaningful group of problems and just suggest what the answer might be.

It’s one of the ironies of the artificial intelligence project that Alan Turing both raised the flag for the charge and set up one of its most serious obstacles. He declared that by the end of the twentieth century we should be able to speak of machines thinking without expecting to be contradicted; but he had already established, in proving the Halting Problem unsolvable, that certain questions are unanswerable by the Universal Turing Machine and hence by the computers that approximate it. The human mind, though, is able to deal with these problems: so he seemed to have identified a wide gulf separating the human and computational performances he thought would come to be indistinguishable.

Turing himself said it was, in effect, merely an article of faith that the human mind did not ultimately, in respect of some problems, suffer the same kind of limitations as a computer; no-one had offered to prove it.

Non-computability, at any rate, was found to arise for a large set of problems; another classic example is the Tiling Problem. This concerns sets of tiles whose edges match, or fail to match, rather like dominoes. We can imagine that the tiles are square, with each edge a different colour, and that the rule is that wherever two edges meet, they must be the same colour. Certain sets of tiles will fit together in such a way that they tile the plane (cover an infinite flat surface); others won’t: after a while it becomes impossible to place another tile that matches. The problem is to determine whether any given set will tile the plane or not. This turns out, unexpectedly, to be a problem computers cannot answer. For certain sets of tiles an algorithmic approach works fine: those that fail to tile the plane quite rapidly, and those that tile it by forming repeating patterns like wallpaper. The fly in the ointment is that some elegant sets of tiles will cover the plane indefinitely, but only in a non-repeating, aperiodic way; when confronted with these, computational processes run on forever, unable to establish that the pattern will never begin repeating. Human beings, by resorting to other kinds of reasoning, can determine that these sets do indeed tile the plane.
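The search a computer gets stuck on can be made concrete with Wang tiles, the standard formalisation of the puzzle. Here is a minimal sketch (my own toy code, not anything from the book) of the naive procedure: a backtracking check of whether a given tile set can cover an n × n square.

```python
def can_tile(tiles, n):
    """Backtracking check: can this tile set cover an n x n square?

    Each tile is a (north, east, south, west) colour tuple; rotations
    are not allowed, as in the standard Wang-tile formulation.
    """
    grid = [[None] * n for _ in range(n)]

    def place(k):
        if k == n * n:
            return True
        r, c = divmod(k, n)
        for t in tiles:
            # north edge must match the south edge of the tile above
            if r > 0 and grid[r - 1][c][2] != t[0]:
                continue
            # west edge must match the east edge of the tile to the left
            if c > 0 and grid[r][c - 1][1] != t[3]:
                continue
            grid[r][c] = t
            if place(k + 1):
                return True
            grid[r][c] = None
        return False

    return place(0)
```

By a compactness argument, a set tiles the infinite plane if and only if it tiles every finite square; so running `can_tile` for n = 1, 2, 3, … semi-decides failure (stop as soon as some square is untileable), while a parallel search for repeating blocks semi-decides periodic success. For an aperiodic set, neither search ever terminates, which is exactly the non-computability described above.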

Roger Penrose, who designed some examples of these aperiodic sets of tiles, also took up the implicit challenge thrown down by Turing, by attempting to prove that human thought is not affected by the limitations of computation. Penrose offered a proof that human mathematicians are not using a knowably sound algorithm to reach their conclusions. He did this by providing a cunningly self-referential proposition stated in an arbitrary formal algebraic system; it can be shown that the proposition cannot be proved within the system, but it is also the case that human beings can see that in fact it must be true. Since all computers are running formal systems, they must be affected by this limitation, whereas human beings could perform the same extra-systemic reasoning whatever formal system was being used – so they cannot be affected in the same way.

Besides the fact that the human mind is not restricted to a formal system, Penrose established that it outperforms the machine by looking at meanings; the proposition in his proof is seen to be true because of what it says, not because of its formal syntactical properties.

Why is it that machines fail on these challenges, and how? In all these cases of non-computability the problem is that the machines start on processes which continue forever. The Turing Machine never halts, the tiling patterns never stop getting bigger – and indeed, in Penrose’s proof the list of potential proofs which has to be examined is similarly infinite. I think this rigorous kind of non-computability provides the sharpest, hardest-edged examples of a wider and vaguer family of problems arising from inexhaustibility.

A notable example of inexhaustibility in the wider sense is the Frame Problem, or at least its broader, philosophical version. In Dennett’s classic exposition, a robot fails to notice an important fact; the trolley that carries its spare battery also bears a bomb. Pulling out the trolley has fatal consequences. The second version of the robot looks for things that might interfere with its safely regaining the battery, but is paralysed by the attempt to consider every logically possible deduction about the consequences of moving the trolley. A third robot is designed to identify only relevant events, but is equally paralysed by the task of considering the relevance of every possible deduction.

This problem is not so sharply defined as the Halting Problem or the Tiling Problem, but I think it’s clear that there is some resemblance; here again computation fails when faced with an inexhaustible range of items. Combinatorial explosion is often invoked in these cases – the idea that when you begin looking at permutations of elements the number of combinations rises exponentially, too rapidly to cope with: that’s not wrong, but I think the difficulty is deeper and arises earlier. Never mind combinations: even the initial range of possible elements for the AI to consider is already indefinably large.
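To put an illustrative number on the combinatorial point (a toy calculation of my own, not from the original argument): even before weighing relevance, a robot with n atomic beliefs faces 2**n candidate subsets of those beliefs that might bear on the consequences of an action.

```python
def candidate_subsets(n_beliefs):
    """Worst case: any subset of the robot's beliefs might be the
    relevant ones, so there are 2**n candidate subsets to vet."""
    return 2 ** n_beliefs

for n in (10, 50, 300):
    print(f"{n} beliefs -> {candidate_subsets(n)} candidate subsets")
```

Three hundred modest beliefs already yield more subsets (about 10^90) than there are atoms in the observable universe (~10^80); and, as the text notes, the deeper trouble arrives even earlier, since the initial list of elements is itself indefinably large.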

Inexhaustible problems are not confined to AI. I think another example is Quine’s indeterminacy of translation. Quine considered the challenge of interpreting an unknown language by relating the words used to the circumstances in which they were uttered. Roughly speaking, if the word “rabbit” is used exactly when a rabbit is visible, that’s what it must mean; and through a series of observations we can learn the whole language. Unfortunately, it turns out that there is always an endless series of other things which the speaker might mean. Common sense easily rejects most of them – who on earth would talk about “sets of undetached rabbit parts”? – but what is the rigorous method that explains and justifies the conclusions that common sense reaches so easily? I said this was not an AI problem, but in a way it feels like one; arguably Quine was looking for the kind of method that could be turned into an algorithm.

In this case, we have another clue to what is going on with inexhaustible problems, albeit one which itself leads to a further problem. Quine assumed that the understanding of language was essentially a matter of decoding; we take the symbols and decode the meaning, process the meaning and recode the result into a new set of symbols. We know now that it doesn’t really work like that: human language rests very heavily on something quite different; the pragmatic reading of implicatures. We are able to understand other people because we assume they are telling us what is most relevant, and that grounds all kinds of conclusions which cannot be decoded from their words alone.

A final example of inexhaustibility requires us to tread in the footsteps of giants; David Hume, the Man Who Woke Kant, discovered a fundamental problem with cause and effect. How can we tell that A causes B? B consistently follows A, but so what? Things often follow other things for a while and then stop. The law of induction allows us to conclude that if A is regularly followed by B, it will go on being so followed. But what justifies the law of induction? After all, many potential inductions are obviously false. Until quite recently a reasonable induction told us that Presidents of the United States were always white men.

Dennett pointed out that, although they are not the same, the Frame Problem and Hume’s problem have a similar feel. They appear quite insoluble, yet ordinary human thought deals with them so easily it’s sometimes hard to believe the problems are real. It’s hard to escape the conclusion that the human mind has a faculty which deals with inexhaustible problems by some non-computational means. Over and over again we find that the human approach to these problems depends on a grasp of relevance or meaning; no algorithmic approach to either has been found.

So I think we need to recognise that this wider class of inexhaustible problem exists and has some common features. Common features suggest there might be a common solution, but what is it? Cutting to the chase, I think that in essence the special human faculty which lets us handle these problems so readily is simply recognition. Recognition is the ability to respond to entities in the world, and the ability to recognise larger entities as well as smaller ones within them opens the way to ‘seeing where things are going’ in a way that lets us deal with inexhaustible problems.

As I suggested recently, recognition is necessarily non-algorithmic. To apply rules, we need to have in mind the entities to which the rules apply. Unless these are just given, they have to be recognised. If recognition itself worked on the basis of rules, it would require us to identify a lower set of entities first – which again, could only be done by recognition, and so on indefinitely.

In our intellectual tradition, an informal basis like this feels unsatisfying, because we want proofs; we want something like Euclid, or like an Aristotelian syllogism. Hume took it that cause and effect could only be justified by either induction or deduction; what he really needed was recognition: recognition of the underlying entity of which both cause and effect are part. When we see that B is the result of A, we are really recognising that B is A a little later and transformed according to the laws of nature. Indeed, I’d argue that sometimes there is no transformation: the table sitting quietly over there is the cause of its own existence a few moments later.

As a matter of fact I claim that while induction relies directly on recognising underlying entities, even logical deduction is actually dependent on seeing the essential identity, under the laws of logic, of two propositions.

Maybe you’re provisionally willing to entertain the idea that recognition might work as a basis for induction, sort of. But how, you ask, does recognition deal with all the other problems? I said that inexhaustible problems call for mastery of meaning and relevance: how does recognition account for those? I’ll try to answer that in part 2.

39 Comments

  1. Hunt says:

    Human common sense does suffer some deficits from the frame problem. This is one of the first ‘slips of the mystery mask’ I hope to mention, as evoked in me by your book (which is excellent, and has filled so many gaps in my amateur knowledge…I can’t even begin). We’re justifiably in awe of our minds’ abilities to do many things effortlessly, but every so often, like when you find yourself driving to work on the weekend instead of to the market because that’s what you’re programmed to do, the mask does slip. I don’t know about others, but these are the moments when I feel most like a machine.

    The Human Frame Problem in a joke: A man has locked himself out of his new car and can only get towed to the dealer. He talks to the service manager while a mechanic uses a lockout tool to open the door. While talking they notice that the mechanic has the passenger door open with the tool and is now working on the driver side. “Yeah, I got that one,” he says, “now I’m working on this one.”

    Now, I argue this is more than mere stupidity. It’s the Human Frame Problem, and while we’re very good at usually resolving it, it crops up with alarming frequency. It’s the reason “fresh eyes” are important, and why you can waste hours doing something entirely useless while there’s a perfectly simple solution one step away.

    I agree: recognition, pattern, or Frame (in the Minsky sense) is a good candidate for how we do it. How the recognitions come to us seems to be entirely subconscious; we can’t will them. Therefore if it’s a logical process, it’s one built into the architecture of the brain, probably in the cortex.

    On a more general level, it also makes great sense that semantics are employed to prune the decision tree – or, combining the two, recognition of semantic structures, the structures of meaning. Crudely speaking, semantics are considerations at the meta, or meta-meta (etc.) level. I doubt much is known about how the brain does this so effortlessly either. I picture a workspace (not necessarily as used in other topics) where certain elements are tagged as “datums of interest.” In our joke, these will include the car, the tool, the doors, and the locks (though somehow the lock knobs seem to have been left out). Somehow the brain is able to link these in a semantic network that it can use to problem-solve (or not).

    Once a workspace has served its purpose, it’s immediately torn down, or merely erased and reused – though of course that can’t be entirely right, since the brain is like an iPad: everything continues to run, at least suspended in the background, ready to be resumed even if context has changed radically. (This is the problem with computational metaphors; if brains are in some sense computers, their operation is quite different from that of today’s computers.) Other workspaces are constructed in its place in endless succession.

  2. Peter says:

    Thanks, Hunt: I don’t think we’re a million miles apart on this. It is true that human beings can make this kind of mistake too, something we shouldn’t forget. They don’t usually do as badly as the example in your joke (which is why it’s a joke, of course), and they don’t get totally paralysed like Dennett’s second or third robots; but the point is valid. In the case of translation software people sometimes say, no matter how big your database, there will still be unusual examples where the computer gets it wrong. But human beings get unusual examples wrong too, at some level…

  3. Sergio Graziosi says:

    Peter,
    this is good stuff (am I the only one who hasn’t already finished your book?). Your focus on recognition is promising; I’m not saying this just to be polite.
    It is promising, and you are rightly pointing out that it is connected to, and potentially able to resolve, a wide range of intuitively related issues. While reading this post, even though I knew where you were going (from your “Why no AGI” post), I found myself worrying that I might agree with you entirely.
    Naturally, my worry was unnecessary! I do have a problem with how you fit “Non-computability” into the picture, and it does go back to the extensions of the Church-Turing thesis. I’ll sound scholastic, but I have to rehash the argument, using Deutsch’s words:

    everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.

    Therefore, if you need to rely on any notion of non-computability, I would expect you to either deny the proposition above or postulate that some of the things that happen in the brain do not obey the laws of physics. I know you are not doing the latter, but you may be implying the former. If you follow a third route, I’ve missed it entirely.

    However, I don’t think we need to circumvent the quoted proposition at all, we can embrace it, and restate the problem you are focusing on in a new (and in my own opinion, more useful) way:

    The problem with recognition is that it seems to be anomic: at first sight, it seems impossible to write down the rules (the algorithm) that might make it work. Hence, searching for general-purpose recognition algorithms becomes a very important quest.

    Given the generally accepted “simulation” feasibility, why should we assume that something we (humans) can do has to be anomic, just because we don’t see how else it could be? The fact is that I trust neither our intuitions nor theoretical knowledge: if you can’t empirically verify a claim, however strong and logical it seems, you should not take it for true.

    Next step: recognition requires pre-existing knowledge. Do we agree on this? You can’t recognise anything at all if you know absolutely nothing. More than that, recognition requires a whole lot of knowledge: I would go as far as saying that nothing that remotely resembles a general-purpose recognition system can work without employing some form of abstraction. This is the result of how we conceptualise recognition: it’s not just about recognising specific instances (“that is my mug” – which is “easy”); it’s about recognising classes (“that is a mug”), which is computationally difficult. In the latter example, we assume the existence of abstract knowledge (the idea of mugs) as part of our definition of the task.
    Hence, it is no surprise that we don’t see how a “simple” algorithm might generalise and work for mug recognition as well as anything else, including nested abstractions (“this is a catchy tune”). We don’t expect algorithms to contain data (lots of knowledge), and rightly so.

    So, how do we solve the issue? But of course, you already knew where I was going… We need an algorithm that takes some input and uses it to generate its own “abstractions”. It then stores these abstractions and tries to match new input to the already present “knowledge”. If no convincing match is found, it reverts to creating a new abstract class, as it did on day zero. Importantly, even this process needs some “prior” knowledge in order to start generating meaningful classes: at the very least, one assumption is present already in the description of the process – the input will include meaningful regularities (making the idea of discerning such regularities useful).
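    One way to sketch the loop described here (my own illustrative code, with arbitrary parameters; not a claim about how the brain does it): match each input to the nearest stored prototype, nudge the prototype if the match is convincing, and otherwise create a new class, just as on day zero.

```python
class AbstractionLearner:
    """Toy version of the loop described above: match input against stored
    'abstractions' (prototypes); update the best match if it is convincing,
    otherwise create a new class. The threshold is the built-in prior: the
    assumption that the input contains meaningful regularities."""

    def __init__(self, threshold=1.0, rate=0.2):
        self.prototypes = []      # the learned abstract classes
        self.threshold = threshold
        self.rate = rate

    def _dist(self, a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def observe(self, x):
        """Return the index of the class the input was assigned to."""
        if self.prototypes:
            i, d = min(((i, self._dist(x, p))
                        for i, p in enumerate(self.prototypes)),
                       key=lambda t: t[1])
            if d <= self.threshold:
                # convincing match: nudge the prototype toward the instance
                self.prototypes[i] = [p + self.rate * (xj - p)
                                      for p, xj in zip(self.prototypes[i], x)]
                return i
        # no convincing match: create a new abstract class
        self.prototypes.append(list(x))
        return len(self.prototypes) - 1
```

    The single threshold stands in for the “prior knowledge” mentioned above: without some such built-in assumption about which regularities count as convincing, the loop cannot begin to carve the input into classes at all.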

    I have very strong ideas (aaargh, they are intuitions, and therefore I struggle to distrust them despite their strength!) on how to actually design such an algorithm (but we know that proper computer scientists are making progress in their own ways), and yes, it relies on some sort of Bayesian active inference, but it also needs other non-trivial ingredients. A hint was provided by Hunt #4 in the “Why no AGI” discussion.

  4. Scott Bakker says:

    Here’s an (apparently long-winded!) eliminativist’s view and question:

    Computability generally refers to ‘tight’ deterministic processes, and computers to systems capable of conserving systematic relationships across astronomical numbers of such processes. The frame problem is precisely the problem of deterministically updating the configuration of the whole system to accord with a single simple environmental transformation. But our brains are gamblers, not coders, and the dice have been loaded by natural selection. All you need is the right set of inborn priors to get a Bayesian bootstrapping process off the ground. The system is tuned to gravitate to certain (dopamine releasing) states optimally geared to certain environmental conditions (but easily short-circuited, as addiction demonstrates). ‘Relevant recognition’ arises from the knapping of those sensitivities that trigger reinforcement when integrated into behaviour.

    The brain simply neglects the frame. The problematic knock-on effects of basic behaviours are learned post hoc. It can do this because it is the biohistorical product of attending to the right things. What makes the frame problem so apparently perplexing is where we find ourselves in this vast, geologically ancient system. Reflection only has access to the *winners,* and no access whatsoever to the losers, let alone the tournament. Something that is the product of a vast amount of work, a secretion of life, becomes an inexplicable particle possessing inexplicable powers. As elsewhere in science, we are motes on the back of occluded mountains when it comes to meaning.

    Now I would argue that the above provides a pretty good reason to think relevance isn’t really a problem. But even if you deny that conclusion, it still seems that you’re on the hook to explain the role ‘reflection’ plays in your founding assumption. We know that consciousness is an extreme cherry-picker, and that it presents its cherries as the whole existential bowl. Given that our second-order sense of meaning and relevance is almost certainly cherry-picked for first-order problem-solving (what else would drive the evolution of those capacities), why should we trust those intuitions in theoretical contexts?

  5. Hunt says:

    One interesting facet to this is the role of attention. Attention seems to have a peripheral, but not wholly independent function. It’s kind of a sieve through which the recognition of relevance must flow. We’ve probably all had experiences where vital observations were missed due to inappropriately focused attention; so attention has some kind of influence over the process. If anything could be identified as a willed process, it would be our attention, but I would contend that it’s not a source process but rather a filtering one. Psychedelic drugs, or heightened states of meditative awareness, probably loosen this process and allow all experiences to flow into us unfettered.

  6. Peter says:

    Sergio,

    if you need to rely on any notion of non-computability, I would expect you to either deny the proposition above or postulate that some of the things that happen in the brain do not obey the laws of physics

    I don’t deny they obey the laws of physics, I just claim that there are things physics doesn’t say anything about. I feel sorry for poor old physics, sometimes, pressed to be a universal theory of everything when all it signed up for was explaining physical stuff. There’s no theory of meaning in physics, and why should there be?
    On universality of computation, let me suggest an analogy. Instead of consciousness, let’s suppose we were trying to explain echolocation. The computationalists say, hey, you claim there’s some spooky sound thing going on here, but we can emulate it to any required degree of fidelity. All we do is list the location of all the obstacles and all the insects, and then Batbot can perform exactly like one of your biological bats. He behaves exactly like one of your famous echo-locators, so really your claim that there’s anything non-computational going on just falls apart.
    No, I don’t think recognition requires prior knowledge: if it did I don’t know how we’d ever recognise new things!

  7. Peter says:

    Scott,
    I don’t think any built-in priors or frames can cope with meaning or relevance because of their extremely protean qualities. I can think about anything; anything can turn out relevant to anything. You can’t have a rule-driven system that is that open ended. (And it isn’t even that plausible to think that evolution already equipped us with all the conceptual framework we need to, say, set up a web site about WWII planes).
    I don’t think what we have is a carefully constrained system designed to provide closely managed problem-solving and narrowly cherry picked data. Yes, attention is narrow and selective, but the beam can sweep around anywhere; what happened is that evolution found that a big cortex (expanded faculty of recognition) yielded a whole new realm of flexibility of behaviour, that yielded big benefits.
    Hope I’m not missing the point!

  8. Hunt says:

    Child developmental psychology shows that the brain switches cognitive strategies successively. It’s tempting to suggest that this happens as the cortex “comes online,” which might indicate that children are recapitulating more primitive thought strategies, but that’s pure speculation. It might also suggest that the brain rewrites its rule-system as it develops. Switching back to the computer metaphor, this would be like a program altering its own code. So the brain’s hardware comes pre-coded with a very basic rule-driven system, probably governed by simple associations (really, what else do you suppose babies are doing?).

    Of course, when talking about brains, there’s probably no sensible software/hardware distinction, but that’s only “probably” because the cortical columns may well be reconfigured in a very software-like fashion. There may be some clue here about resolving the non-computability problem, since the brain isn’t necessarily limited to a single logical system, even moment to moment. I realize that doesn’t mean it’s more than a Universal Turing Machine, but the fact that it might be some heuristic system employing a set of subordinate logical systems does (for me) point toward a possible solution.

  9. Sergio Graziosi says:

    @Peter #6 & 7,
    we have a genuine disagreement here, bring us a pint, presto!

    I don’t feel sorry for physicists; I welcome everything that reminds them how much we don’t know, and how provisional and local our knowledge is. There is no theory of meaning: none that I wouldn’t dismiss as wishful, and none that genuinely links to physical stuff. We have (Shannon’s) Information Theory (SIT), and some wishful attempts to plug meaning into it, backed more solidly on the other side by ever more detailed ideas on the link between SIT and physics.
    The job of building a naturalistic/materialistic theory of meaning is part of the overall consciousness umbrella: there can’t be meaning without first-person perspectives, perhaps we can agree on this.

    I really aesthetically like your BatBot analogy, you know why I (and probably we all) do. However, I don’t think you’re answering my question, sorry. Your parallel is pleasing also because it follows the p-zombie trail: your BatBot is a zombie that behaves the same as a bat, but does so by following completely different principles. Thus, you can use it to say that many computation-based explanations of consciousness (or related sub-systems) are irrelevant, specifically because they model something else. I’d be with you if that was your point, but I don’t think it is. I think you are saying that computational models have to model/simulate something else, even in principle.
    Hence, you are reinstating your main point: some things that the brain does can’t be simulated as computations. In practice, this is true, but it is true because we don’t know how to do it. It is also true that we don’t know if it’s possible in theory, but the evidence we have (what allows me to back up Deutsch’s words) does suggest that it should be possible.

    To recap: we are back where we started. You say we can’t compute meaning because we would need to compute ‘recognition’, and recognition seems to be inevitably anomic.
    I say: yes, recognition looks anomic, so it’s a promising target. We should try to “nomicise” it (discover the hidden rules). You then retort “that’s impossible”, and I ask “how can you possibly know that? Here is an indication of how to start trying”. We genuinely disagree because you make the strong case “recognition can’t be algorithmic, not even in principle”, but you didn’t really answer my two objections:
    1. If all the brain does is physical, the prediction (to our best knowledge) is that it should be possible to replicate everything the brain does.
    2. Scott, Hunt and I are hinting, each in our own way, at how to solve the problem you have so sharply defined. We then ask “should we not even try?”.

    Your answer to 1. isn’t really an answer (it’s a sharp critique of a mistake made too often, but it doesn’t address the point); your answer to 2. is that, no, there is no point in trying, because you don’t believe we are born with prior knowledge, and/or deny that recognition requires prior knowledge. In doing so you are taking a very unfashionable route (kudos for that!) and suggesting we should revive the concept of the blank slate. I can’t follow you there, because the psychological and cognitive evidence is overwhelming: there is no blank slate; we are born with an uncountable number of prior assumptions, and we struggle to see them precisely because we are born with them, that’s all.

    For example, I’ll try to anticipate Scott:

    I can think about anything; anything can turn out relevant to anything.

    How can you be so sure? You can think of many things, and you certainly can’t think of all of them at once. If you can’t think of something, how would you even notice? The moment you think “wait a minute, I can’t think of that” you have contradicted your thought. Hence, you will never be properly aware of your own cognitive limitations (more and more I find that this is the foundation of my thinking). Similarly, the degrees of freedom of your own thinking are indeed overwhelming; that’s because we can keep in working memory only a handful (typically fewer than 10) of concepts, but you distinctly do have the possibility of generating an enormous number of different concepts. To idly speculate on the numbers, simple combinatorial calculations on the number of possible different arrangements of synaptic connections in a typical human brain dwarf the total number of atoms in the observable universe. In other words, it is guaranteed that we can’t even begin to appreciate the scale of our idea-generating potential. Thus, the easy prediction is that it will always feel like we can think of “anything”, but this feeling alone (no matter how compelling) is not evidence of its truthfulness. Our protean potential defines our upper limit, and thus it necessarily looks extreme to us.
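    The back-of-envelope arithmetic here can be checked in a few lines, using the commonly cited order-of-magnitude figure of about 10^14 synapses in a human brain (both figures below are rough estimates, not measurements):

```python
from math import log10

SYNAPSES = 10 ** 14   # commonly cited order-of-magnitude estimate
LOG_ATOMS = 80        # atoms in the observable universe ~ 10**80

# Even the crudest model, in which each synapse is merely present or
# absent, allows 2**SYNAPSES configurations. Compare exponents, since
# the number itself is far too large to write out.
log_configs = SYNAPSES * log10(2)   # log10 of 2**(10**14), about 3.0e13

print(f"configurations ~ 10^{log_configs:.0f}")
print(f"atoms          ~ 10^{LOG_ATOMS}")
```

    Even this minimal presence-or-absence model gives an exponent of roughly 3 × 10^13, against the universe’s 80, so the conclusion survives any realistic refinement of the assumptions.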

    Furthermore:

    And it isn’t even that plausible to think that evolution already equipped us with all the conceptual framework we need to, say, set up a web site about WWII planes.

    Indeed, evolution didn’t. It set us up with the potential of doing so, and much more. What I am (we are?) suggesting is to look for algorithms that have the potential of generating their own idiosyncratic meanings, their own classifications that can then be re-utilised. Such an algorithm would invalidate your “open-ended” argument, would it not? I thought that was the reason why we invested so much time discussing the strange behaviours (and limitations) of DNNs a while ago…

    Finally: apologies for length and directness. I enjoy discussing but would hate if you’d feel under pressure. We do agree that agreeing isn’t what we do :-).

  10. Sci says:

    @Hunt: Interesting note about attention. A friend of mine put me on to this article regarding attention & Heidegger:

    http://opinionator.blogs.nytimes.com/2015/03/30/heideggers-philosophy-why-our-presence-matters/?_r=0

    “…For now, I would like to focus on how Heidegger treats a topic of considerable importance in cognitive science, which is the phenomenon of attention.

    On this basis I will show that, for Heidegger, not only are we in direct contact with the people and things of this world, but also that our presence matters for how they are made manifest — how they come into presence — in the full potential that is associated with the sort of beings that they are…”

  11. Arnold Trehub says:

    Sergio,

    Algorithms are insufficiently constrained. It must be subjectively organized biological mechanisms that give us meaning. See “Self-Directed Learning in a Complex Environment” here:

    http://www.people.umass.edu/trehub/thecognitivebrain/chapter12.pdf

    If a non-biological artifact can realize subjectivity, then it can evoke meanings. We have no evidence of this possibility to date.

  12. Peter says:

    Sergio,

    It’s physics I feel sorry for, not physicists!

    You keep implying that I’m just sort of assuming recognition is anomic, but I’ve already mentioned what I think is a good argument (if not a proof) that it must be. That’s how I know; the onus is on you to show I’m wrong.
    It’s not the case that everything the brain does is physical: what’s true is that everything the brain does has a physical correlate.
    Think about a novel: say Pride and Prejudice. What’s the physical definition of Pride and Prejudice? A particular size of book, with particular characters? But we know the physical properties of a book can vary in any way you like: it can be any size, made of anything. The text can be in code or in another language: that doesn’t stop it being Pride and Prejudice. It can be recorded on magnetic tape or sent by semaphore, put on a CD or enacted in signs. Your universal computationalists will insist that if we list all the extant copies in whatever formats we can find, we’ll have a good working approximation of what Pride and Prejudice is: but we won’t, because we’ve missed the key point that Pride and Prejudice is a narrative: anything that conveys that narrative counts as a copy. If we can’t define a novel in physical terms, how likely is it that we can capture the essence of a novelist’s mind?
    I agree you’re in a bit of a bind when it comes to mentioning something no-one can think of! Perhaps it will help if I suggest another way of looking at it: I assert that it is impossible to provide a finite list of all the things we might think about.
    You’re overstating my position if you think I believe we’re blank slates: we might come with lots of stuff ready installed: just not everything. We’re perfectly able to recognise new things that were never in our minds before and never crossed the paths of any of our evolutionary ancestors. Why’s that difficult?

  13. Scott Bakker says:

    Peter: Well, rather than deny the intuitive appeal of your position, let me offer a diagnosis of what I think lies behind it. When you reflect on your thoughts, neglect means you have no inkling of the winner-take-all processes behind them, nor the processes they feed into. Thought seems to hang without any more constraint than the ‘sense’ (what little reflection can cull from verbal report processes) of the accompanying expression, and perhaps the object. It seems like ‘you can think anything,’ but in point of fact, your thought, my thought, everyone’s thought follows some pretty predictable ruts.

    Reflection plucks what it can with no inkling of the larger processes involved, so it becomes intuitive to think thought is unconstrained, even though it possesses inputs and outputs like everything else. Since all cognition involves constraints, we make fetishes of the regularities we perceive, which, arising from oblivion, seem to constrain us magically. Philosophers invent whole spooky functional economies in efforts to ‘explain’ these constraints.

    Heuristics and neglect allow a different kind of high-altitude explanation of those constraints, one that drains the intentional swamp. Imagine your car for a moment, hold it in your ‘mind’s eye.’ Where does the sunlight flash most? What direction does the shadow fall? How much mud is there on the rockers? Are there finger smudges on the door handles? I could go on and on listing questions you could answer were you actually looking at your car, but which are quite pointless to ask in the case of visual imagery. The information visual imagery provides just isn’t in the business of solving those kinds of questions. (Check out, https://rsbakker.wordpress.com/2015/02/07/introspection-explained/)

    Anything can be ‘relevant’ for anything *given some bounded cognitive system.* Because that’s all relevance is. We’re hardwired to essentialize, to see systematic properties in substantive terms, so we have an antipathy to seeing evolution in terms that blur the edges of the organism. All our organs, for instance, are fitted to function together, dedicated components that would seem like magic, were it not for evolution. Organisms and environments, however, are also fitted to function together–how could it be any other way? The systematicity involved is much more slippery, but the necessity for adaptation is every bit as crucial. As something sculpted by environments, cognition is rife with innumerable ‘priors,’ a template of raw, differential sensitivities (for faces, etc.) capable of bootstrapping a world.

  14. Callan S. says:

    If I’m intuiting the halting problem right, then I’d say a just as plausible explanation is that confabulation is beneficial in an evolutionary sense. You can’t just stop in life – you’ll end up dead. But if you can’t compute an answer…?

    Well actually, just making up an answer will do. Making up an answer has, by pure luck, a chance of being the right answer. Which is far better, in a Darwinian sense, than going blue screen and locking up. Even if it ends up false. Even if it kills people not of your clan/gene line.

    And there would be no real Darwinian benefit in the organism developing a sense that it is confabulating. Instead, the processor organism would say it had found the answer. It would not be aware it’s confabulating the answer – there being no Darwinian benefit to it knowing such.
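    The ‘make up an answer and get on with it’ strategy can be sketched as a step-budgeted interpreter (my own toy illustration; the `run_with_budget` helper is invented for this sketch, not anything from the thread): run any computation for a fixed number of steps, and if it hasn’t halted by then, return a guess rather than hanging forever.

```python
import random

def run_with_budget(step_fn, state, max_steps, guesses):
    """Run a step function until it reports completion, or give up.

    step_fn(state) returns (done, new_state_or_answer). If the step
    budget runs out, 'confabulate': return a random guess rather than
    hanging forever.
    """
    for _ in range(max_steps):
        done, state = step_fn(state)
        if done:
            return state, "computed"
    return random.choice(guesses), "confabulated"

# A computation that halts: count down to zero.
countdown = lambda n: (n == 0, 0 if n == 0 else n - 1)

# A computation that never halts: the state never changes.
stuck = lambda n: (False, n)

answer, how = run_with_budget(countdown, 3, 100, guesses=[0, 1])
guess, how2 = run_with_budget(stuck, 3, 100, guesses=[0, 1])
# The stuck computation still yields *an* answer, just a guessed one.
```

    On Callan’s reading, evolution would favour the guessing branch whenever locking up means death, even though the guess is often wrong.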

    Although in a long-term sense, looking at a history of war and being a product of what managed to survive war (either due to brutal martial ability or diplomacy ability), the diplomacy breed might begin to recognise the inclination toward confabulation.

    A notable example of inexhaustibility in the wider sense is the Frame Problem, or at least its broader, philosophical version. In Dennett’s classic exposition, a robot fails to notice an important fact; the trolley that carries its spare battery also bears a bomb. Pulling out the trolley has fatal consequences. The second version of the robot looks for things that might interfere with its safely regaining the battery, but is paralysed by the attempt to consider every logically possible deduction about the consequences of moving the trolley. A third robot is designed to identify only relevant events, but is equally paralysed by the task of considering the relevance of every possible deduction.

    More evidence toward a ‘make up an answer and get on with it’ being the optimal survival method (short term, anyway), I’d think.
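    The paralysis of the second and third robots in the quoted passage is just combinatorial explosion; a back-of-the-envelope count (my own illustration, with invented numbers) shows how fast ‘every possible deduction’ grows:

```python
def chains_of_deduction(n_facts, depth):
    """Count chains of deductions of length 1..depth, each link drawn
    from n_facts candidate facts -- the space robot #2 must search."""
    return sum(n_facts ** d for d in range(1, depth + 1))

chains_of_deduction(10, 2)   # 110 chains to check
chains_of_deduction(10, 6)   # 1111110 chains to check
```

    With even modest numbers of facts and depths, checking every chain for relevance is hopeless, which is exactly the third robot’s fate.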

    Human beings, by resorting to other kinds of reasoning, can determine that these sets do indeed tile the plane.

    Cognitive science testing time – setting up a test where the tiles do not tile the plane, but appear to. Thus triggering the ‘reasoning’ in subjects that they do indeed tile the plane. Which would be a prime example of Scott’s favorite word ‘heuristics’ being in action. ‘Close enough is good enough’ thinking.

    Assuming the test does trigger ‘yes, it will tile’ responses in 90%+ of participants when it doesn’t. If it doesn’t, then I guess the question does remain open.

    Since all computers are running formal systems, they must be affected by this limitation, whereas human beings could perform the same extra-systemic reasoning whatever formal system was being used

    Humans can say ‘close enough is good enough’, which we don’t accept in our computers although it’s inherent in ourselves – we don’t hold ourselves to the same standards we hold our computers to because…close enough is good enough. Our self-perception, despite the many math errors we all commit, is that we’re on par with the computer or better because close enough is good enough. ‘Close enough’ does not recognise the use of ‘close enough’ because…it was already close enough before getting to that. And that’s good enough. And we don’t hold ourselves to the same standards as the computer in terms of result – because that was good enough.

    Yeah, a bit convoluted. That’s because ‘close enough is good enough’ always treats its own process as ‘true’ – as a result of its short cut.

    Common sense easily rejects most of them – who on earth would talk about “sets of undetached rabbit parts”?

    Hah! I remember a funny clipping my dad had kept, of some state that had defined a chicken as ‘a series of undetached chicken parts’ – literally exactly what you’ve mentioned!

  15. Hunt says:

    Sci – I’m not sure how much I understand that article, or Heidegger’s views on attention, but it did inspire me to review my position on attention. Now I suspect that attention as filtered perception actually has the same liabilities as the frame problem; in fact it seems to be the frame problem, because of all the myriad possible interpretations of perception, how does one decide which to pay attention to? Perhaps attention is better viewed as a creative process, rather than a filtering one.

    Callan –

    Humans can say ‘close enough is good enough’, which we don’t accept in our computers although it’s inherent in ourselves – we don’t hold ourselves to the same standards we hold our computers to because…close enough is good enough.

    I have to admit I suspect this is something like what people do, but the implication is that those people who say they can see the solution to non-computable problems by using higher level semantic reasoning are actually mistaken, that Roger Penrose is mistaken (!)

  16. Sergio Graziosi says:

    @Peter #12 (and Arnold #11)

    You do have an argument for recognition being anomic. My attempt to show why I think you are wrong relies on the possibility of having algorithms that generate new categories in response to perceiving something entirely new. The link Arnold provided shows that this is possible (while of course my “in theory” ramblings can always fail to convince), and I’m sure the alternative ways to achieve the same result are virtually infinite. This, to me (but not to you), settles the matter. If we can, not only in principle, but also in practice, build a system that learns to recognise new objects unsupervised, we have a proof of principle: (general-purpose) recognition can be rule-based. Thus, we have exposed one of the difficult steps that are necessary to build a link between physics and meaning. We haven’t finished the bridge, but we have planted a pillar in one of the tricky spots.
    That’s where I am. I’m pushing you as hard as I can because I do think that focusing on recognition is a promising approach, but I can’t see how your (local) conclusion follows. To me, your positive case for recognition being necessarily anomic just doesn’t stand up to scrutiny. I’ve tried different approaches, and I can keep throwing counterarguments at you, as long as you promise to stop me when you’ve had enough!

    Here is another attempt, you say:

    To apply rules, we need to have in mind the entities to which the rules apply. Unless these are just given, they have to be recognised. If recognition itself worked on the basis of rules, it would require us to identify a lower set of entities first – which again, could only be done by recognition, and so on indefinitely.

    I agree with every proposition above, but note that they hide the reason why your conclusion doesn’t follow. Hunt in the previous thread already hinted at why, and you sort-of agreed, so I’ve initially focused on other ways to challenge your position. This time, I’ll make my own version of (how I read) Hunt’s argument, starting from your own reaction: “Yes, I’m guessing that recognition ‘bottoms out’ in some sort of neural network structure, which seems plausible enough”.

    So, what we have there is something that does not require an infinite regression of lower rule-sets: recursion does have a bottom, and we expect to find it at the neuronal level. In particular, “we [do] need to [already] have in mind the entities to which the rules apply” and at the lowest level, these entities are the sensory signals that the brain receives as inputs (it’s more complicated than this, but this level of abstraction can work for our discussion). Below the ‘input signals’, there is nothing further to which recognition can apply, and sure enough, neurobiologists tend to agree that some sort of feature extraction happens hierarchically as the signal progresses through various sensory areas. Thus, we have a pretty solid picture, one that works in theory and is supported by vast evidence: the risk of endless recursion that you see doesn’t apply, because our architecture implies (starts from) the bottom level. The system we are trying to understand already has (metaphorically) “in mind the entities to which the rules apply”: these are the incoming signals. From there, (progressively higher-level) feature extraction follows the route you are expecting (nested levels of ‘recognition’) and thus shouldn’t cause a problem. The picture however isn’t complete, because, as you explain better than me, we can also learn to recognise new stuff. This is why I’ve concentrated on demonstrating that algorithms can indeed generate new classes in response to unrecognised stimuli.
    Therefore, recognition, as it plausibly happens in our heads, can be algorithmic.
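    As a concrete (and deliberately simple) illustration of the kind of algorithm at issue here, one that mints a new category whenever a stimulus fails to match anything known, consider a minimal nearest-prototype sketch; the distance threshold and the helper names are illustrative assumptions of mine, not anything from Arnold’s linked chapter:

```python
def categorise(stimulus, prototypes, threshold=1.0):
    """Assign a stimulus to the nearest known category, or mint a new one.

    prototypes: list of category-centre vectors, mutated in place.
    Returns the index of the matched or newly created category.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    if prototypes:
        best = min(range(len(prototypes)),
                   key=lambda i: dist(stimulus, prototypes[i]))
        if dist(stimulus, prototypes[best]) <= threshold:
            return best                    # recognised: existing category
    prototypes.append(list(stimulus))      # entirely new: create a category
    return len(prototypes) - 1

cats = []
categorise([0.0, 0.0], cats)   # first stimulus founds category 0
categorise([0.1, 0.1], cats)   # close enough: recognised as category 0
categorise([5.0, 5.0], cats)   # novel stimulus: founds category 1
```

    The point is only proof of principle: a fixed rule set can generate open-ended, idiosyncratic classifications without any infinite regress.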
    What is still missing is fully formed meaning: one should (as Arnold quite correctly does) say that so far we have an algorithm that labels known stimuli and generates new labels for new ones, but doesn’t really “understand” what the labels stand for. This is true, but it’s beyond the scope of Inexhaustibility. We agree that recognition does not bring meaning in one simple step, so I’m looking forward to your next post!

    In summary, I don’t think you need to change your overall route; I am not challenging it (not at this point). I’m focusing on one small point: you claim that recognition can’t be algorithmic because it’s anomic. This is a passage that can create plenty of difficulties for you, because if it’s not algorithmic, people will start implying that you are taking a non-naturalistic/non-physicalist position (I do suspect the same, after all). I’m trying to convince you that it’s equally valid to say: “recognition does look anomic, and it’s difficult to see why it isn’t”, keep the rest of your argument intact, and avoid easy criticism as a result.
    All this may be entirely useless if you are indeed going to add some crucial metaphysical ingredient further down the line… If that’s the case, we’ll keep disagreeing.
    Does any of the above clarify my position?

    PS Arnold: indeed I’ve conflated meaning with recognition, which I should not have done, and wasn’t planning to. Thanks for keeping me on my toes and for the really nice link!

  17. john davey says:

    Well – unless human beings are angels, computation is a man-made, 100% organic product, and a pretty basic one at that. Lacking as it does any relationship to meaning, it’s still difficult to see how people take it so seriously as an analogue for the mind. Computation – as physicists are fond of saying about mathematics in general – is a tool to be used as appropriate, and like all tools it requires a pre-packaged, semantics-understanding human being to use it. Computation is no use to a dog or a cat; computer programs that speak Russian are no use to people who don’t understand Russian, and Turing’s design of a universal machine is of no use to a person with a 10-year-old’s mathematics.

    Homo sapiens evidently has a great deal of fantastic cognitive capability, but it is a biological endowment. Instead of listening to computational science graduates giving ludicrous lectures about cognition, it might be an idea to let biologists first work out something as basic as how a nematode navigates, or how bees communicate. Something about a million times easier than understanding human brains, but seemingly way beyond current science. We are infatuated with physics, I think, and are rooting around for reductionist solutions everywhere, believing that human inventions like mathematics are more real than reality.

  18. Callan S. says:

    Hunt,

    I have to admit I suspect this is something like what people do, but the implication is that those people who say they can see the solution to non-computable problems by using higher level semantic reasoning are actually mistaken, that Roger Penrose is mistaken (!)

    That’s why I’d suggest a cognitive science study on it – if they are mistaken, set up a scenario like the tiles, where the researcher knows it can’t be solved but it is close enough to solvable – and we test whether that consistently triggers that mistake in subjects, so that they say it can be solved.

    Perhaps there won’t be a point where it is both unsolvable and yet tweaked to just look solvable enough to trigger that mistake consistently?

    But if there is, then you can pretty much plant a flag at the very edge of human heuristics (for this particular endeavour). Where the world runs out.

  19. Callan S. says:

    Drat, I failed at HTML! That first paragraph is not mine and is a quote from Hunt’s post. I should have just stuck with italics!

    Is that better? – Peter

  20. Sci says:

    @John Davey: Good point about confusing mathematics for reality – at the extreme that’s given us fictions like the multiverse and Tegmark’s mathematical reality.

    Computationalism seems to have become a synonym for materialism, an idea I’ve seen perpetuated by both materialists and theists. Yet AFAICTell there’s nothing incredibly controversial about saying Turing Machines cannot capture all mental processes, but people in both the aforementioned groups seem to think this translates to an argument for souls, gods, etc.

    It’s also interesting that Penrose started from an anti-computationalist position before he even talked to the anesthesiologist Hameroff. Yet this position led to an insistence on quantum effects in biological systems, something researchers have only proven in the last few years.

    This isn’t to say discovery of quantum vibrations in the microtubules automatically makes Orch-OR correct – it remains unclear to me how it works to produce/evoke consciousness – but if we grade a theory by the predictions it makes in advance, it seems the computationalists have a lot of catching up to do…

  21. Hunt says:

    @Sergio #16

    Homunculus regress:

    http://en.wikipedia.org/wiki/Language_of_thought_hypothesis

    Daniel Dennett accepts that homunculi may be explained by other homunculi and denies that this would yield an infinite regress of homunculi. Each explanatory homunculus is “stupider” or more basic than the homunculus it explains but this regress is not infinite but bottoms out at a basic level that is so simple that it does not need interpretation.[2] John Searle points out that it still follows that the bottom-level homunculi are manipulating some sorts of symbols.

  22. Hunt says:

    @Sci:
    Computationalism seems to have become a synonym for materialism, an idea I’ve seen perpetuated by both materialists and theists. Yet AFAICTell there’s nothing incredibly controversial about saying Turing Machines cannot capture all mental processes, but people in both the aforementioned groups seem to think this translates to an argument for souls, gods, etc.

    I think the argument goes that if brains are simulatable to arbitrary accuracy, and if simulated reasoning is genuine reasoning (as Peter argues persuasively in his book, the simulation is “all there is”), then human reasoning is bounded by computability. But people claim to be able to solve non-computable things, a contradiction. Either there is something wrong here, or these people are mistaken, as Callan suggests.

  23. Sergio Graziosi says:

    @Hunt #21
    Yes. I wouldn’t personally bring in the homunculus metaphor:
    I have in mind “really stupid” classifications, stacked up hierarchically, so anthropomorphising each or any one of them looks misleading to me. However, yes, the possibility of infinite regression, which seems a reasonable issue from the top-down perspective, dissolves when you start from the sensory stimuli and walk your way from the bottom up.
    Furthermore, the idea of a mentalese, as a general symbolic language used within the brain, is quite rightly ridiculed by Dennett himself. From my perspective, looking at the issue with neurobiology as a guide, it’s quite natural to expect that each pathway will employ different and very local encodings. Also: there is no decoding (again following Dennett), not in the way we normally understand it, anyway. We could call mentalese the great sum of all local systems used to transmit different types of information: this would be analogous to calling “human language” the overall set of all different languages. You can choose to do so, but is there any value in it?
    Searle isn’t wrong either, but I don’t see how having a bottom level that “manipulat[es] some sort of symbols” generates a problem: if I touch my skin, mechanical receptors get excited, and a particular pattern of spikes is sent towards the brain. That particular pattern can indeed be described as a symbolic representation of the received pressure. That’s biophysics 101, why should it be a problem?
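    That ‘biophysics 101’ point can be made concrete with a toy rate code (an illustrative sketch of mine, not a model of real mechanoreceptors): stimulus strength maps to a spike count, and that count is the crude ‘symbol’ downstream stages receive.

```python
def encode_pressure(pressure, window_ms=100, max_rate_hz=200):
    """Toy rate code: clamp stimulus strength to [0, 1] and map it to
    a spike count within a time window. The count is the 'symbol'
    standing for the analogue pressure."""
    clamped = min(max(pressure, 0.0), 1.0)
    return round(clamped * max_rate_hz * window_ms / 1000.0)

encode_pressure(0.0)   # no touch: 0 spikes in the window
encode_pressure(0.5)   # light touch: 10 spikes in the window
encode_pressure(1.0)   # firm touch: 20 spikes in the window
```

    Nothing here ‘understands’ pressure; the mapping is just a lawful transformation, which is exactly why it raises no problem at this level.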

    We are only toying around with easy problems, though: understanding how implicit symbols (the signal above) and post-hoc, higher level classifications (earlier comments) eventually become meaningful is an entirely different sport.

  24. Sergio Graziosi says:

    @Hunt #22 (and @John #17, @Sci #20)
    Thank you, Hunt! I wish I had your ability to explain the problem so succinctly.
    Furthermore, sensory signals travel to the brain and motor signals come out. Therefore the computational metaphor really does look entirely appropriate*. Input (sensory signals) gets transformed into output (motor commands) within the brain. Hence, the brain transforms signals, and it should be possible to model it as a computer. It is reasonable to expect that if we knew exactly what a brain does, and if we could make a computer that is fast enough to simulate the same processes with enough precision**, then we could substitute the brain with the computer and nobody would notice.

    John, you seem to imply that proposing such a scenario is rooting around for reductionist solutions. I see this position as somewhat self-defeating (while I agree with all your points, taken singly). I do think that the vast majority (entirety?) of reductionist attempts at explaining how the mind is a product of the brain are presently over-reductionist. This doesn’t mean that the metaphor is wrong or useless, or that reductionism in general can’t solve this problem; it merely means that so far we haven’t found how to produce the required explanations.
    Trying to be fair, I don’t think anyone (be it Tononi, Koch, Dehaene or Baars) would claim that they have finished the task, not even the much smaller aim of explaining the behaviour of an Aplysia. Methodologically, however, starting from over-simplistic models, finding out what’s missing, and adding more components/details/interactions gradually seems reasonable. It’s painfully slow, but it could work (and rescues the role of behaviourism, I regretfully admit).
    The crucial point is that we should never forget how complex/complicated the system we are trying to explain is, and remember that complexity is a reliable proxy for the difficulty of fully describing a system/process. I’ve written a full post on this; see also the comments for the obligatory discussion on reductionism.

    Notes
    *It looks entirely appropriate at first sight, but we are starting to see why it may turn out to work only up to a certain point. For starters, there is no clear boundary between brain and body, thus, if we admit (as we should) that the distinction is arbitrary, the theoretical solidity of the approach already starts to look less convincing.
    Furthermore, ideas like embodied cognition, in which the environment, the body and behaviour all contribute to cognition, are finding more and more supportive evidence. See here for a very clean and basic example: the studied larvae sort-of behave first, and correct their behaviour based on feedback collected from the environment. This doesn’t make the computational metaphor invalid, but it shows that we’ll probably need to consider also the interactions between body and environment to start understanding what’s going on. IOW, it shows that the subject is even more complex.

    **If you ask me, I suspect that with current technology the power consumption of a fast-enough computer would be astronomical: in the region of the whole planet’s (man-made) power production, or thereabouts. I will spare you the wild guessing details.

  25. Peter says:

    @Sergio #16

    recursion does have a bottom, and we expect to find it at the neuronal level

    Yes – but the bottom is a network, not another rule-driven system…

  26. Arnold Trehub says:

    Peter: “Yes – but the bottom is a [neuronal] network, not another rule-driven system…”

    Exactly. This is the essential point.

  27. Sergio Graziosi says:

    @Peter #25
    I guess I’ll need to be patient and wait for the next chapter: if the bottom does something but doesn’t follow rules, how does it operate? Wouldn’t at least the rules of physics apply? 😉

    Also: when I say that the bottom is at the neuronal level, I don’t mean the whole nervous system network (including but not limited to the brain). I mean that the bottom is made of the first layers that (somehow – many alternative interpretations exist, we don’t need to endorse one in particular) extract features from raw stimuli. For the record, this process starts with receptors themselves: many would respond to change much more than to absolute stimulus strength, for example.
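    That last observation, receptors responding to change much more than to absolute stimulus strength, can be sketched as a toy phasic receptor (illustrative only, with invented names and units):

```python
def phasic_receptor(stimulus, gain=1.0):
    """Toy 'change detector': the response tracks differences between
    successive stimulus samples, not the absolute level."""
    prev = stimulus[0]
    responses = []
    for s in stimulus:
        responses.append(gain * abs(s - prev))
        prev = s
    return responses

# A constant stimulus elicits nothing; only the transitions register.
phasic_receptor([0, 0, 1, 1, 1, 0])
```

    Feature extraction, in this crude sense, starts before any signal reaches the brain at all.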

  28. Arnold Trehub says:

    Sergio: “… I don’t think anyone (be it Tononi, Koch, Dehane or Baars) would claim that they have finished the task, not even the much smaller aim of explaining the behaviour of an aplysia.”

    If the task is to provide a biological explanation of consciousness/subjectivity, then none of the above theorists has even begun to provide an explanation, because they haven’t addressed the fundamental problem of the perspectival nature of subjectivity.

    I have suggested that the science of consciousness is now where theoretical physics was at the time of the Copenhagen interpretation in the early 1920s. Much work remained to be done before there was a reasonable convergence on the standard model of quantum mechanics. At that time, the two-slit experiment was a critical finding for theory development. In subsequent years, the development of quantum theory was able to predict an increasingly wide range of puzzling phenomena. In a somewhat comparable sense, although in the domain of biology and phenomenology, the SMTT experiment [1] is a critical finding for theory development, and the retinoid model [1] that suggested the SMTT experiment is proving to predict a widening range of natural and experimental observations. It is unlikely that we will agree on all points, but the retinoid model, which does propose a brain mechanism that is competent to account for the perspectival nature of subjectivity, gives us a substantive basis on which to work on the further development of a theoretical model of consciousness.

    1.

    https://www.researchgate.net/publication/6909588_Space_self_and_the_theater_of_consciousness

    https://www.researchgate.net/publication/237044125_Evolution's_Gift_Subjectivity_and_the_Phenomenal_World

  29. Scott Bakker says:

    Sergio: “I do think that the vast majority (entirety?) of reductionist attempts of explaining how the mind is a product of the brain are presently over-reductionist. This doesn’t mean that the metaphor is wrong or useless, or that reductionism in general can’t solve this problem, it merely means that so far we haven’t found how to produce the required explanations.”

    Besides, as far as I can tell, ‘reductionism’ is little more than a shibboleth. Everybody wants the simplest, most powerful answer. There’s something disingenuous about characterizing this virtue in pejorative terms ONLY where ‘simpler’ contradicts some desired thesis. It also has the effect of pinning the debate to a bunch of unresolvable metaphysical issues… thus providing more cover, it seems.

    Imagine defending Newtonian Physics against General Relativity on these grounds. Even if we had a workable, scientific theory of consciousness, one would hope that people would continue looking for simpler, more powerful theories. Why? Because they’re often there to be found.

  30. Callan S. says:

    Is that better? – Peter

    Thanks, Peter! 🙂

  31. Jochen says:

    Damn, I’m already late to the party, but I’ve enjoyed your book immensely and can’t resist the opportunity to pose a few questions that came to my mind during reading.

    First of all, even though I’m a physicist, I think I generally agree with the role of noncomputable processes in the generation of mentality. I think there are things in the world that elude formalism, and that our confusion of formalism with the things it applies to, a kind of mistaking of the map for the territory, is a significant obstacle to our understanding of consciousness.

    But I do have some problems with how you propose to use the notion of noncomputability. Starting with Penrose’s argument, or your own from tilings: neither actually establishes the role of noncomputability. Basically, Penrose’s argument only establishes that either we’re not computers (speaking very loosely here to save time), or we are machines, but of a kind whose consistency we could never prove (which is probably the majority of machines), or we are machines whose consistency we can prove, but can never prove that we are such machines, or we are simply inconsistent. Or something else along these lines.

    The same goes for your tiling argument: that we can establish that certain aperiodic tilings can cover the plane, doesn’t mean that we in general are capable of doing so, and only this is established by reduction to the halting problem. In analogy, we can (and computers can, as well) solve the halting problem for many different programs, even for cases that don’t halt (consider simply an infinite loop—many compilers will in fact warn you if your program includes one, thus ‘solving the halting problem’ for this specific case). The undecidability of the halting problem merely implies that there is no general machine able to solve every instance; but not that every instance is unsolvable. So I don’t think that these kinds of arguments are sufficiently persuasive to convince the die-hard computationalist.
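    The compiler example can be made concrete: here is a checker (a sketch of mine, not any real compiler’s analysis) that decides halting for one narrow class of programs, namely Python `while True:` loops with no `break` or `return`, even though no checker can decide halting for every program.

```python
import ast

def obviously_loops_forever(source):
    """Decide halting for one narrow class of programs.

    Flags `while True:` loops whose body contains no break/return --
    a special case any static checker can handle, even though the
    halting problem is undecidable in general.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            test_is_true = (isinstance(node.test, ast.Constant)
                            and node.test.value is True)
            has_exit = any(isinstance(n, (ast.Break, ast.Return))
                           for child in node.body
                           for n in ast.walk(child))
            if test_is_true and not has_exit:
                return True
    return False

obviously_loops_forever("while True:\n    x = 1")            # clearly non-halting
obviously_loops_forever("for i in range(3):\n    print(i)")  # halts
```

    Undecidability only rules out a checker that settles every case; it says nothing against checkers, or humans, settling many particular ones.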

    And the arguments are, actually, not in any way about the limitations of certain kinds of artifacts, e.g. computers, but they are about (often self-referential) limits of consistent assertion. As such, one can apply them to humans, as well: take the sentence “Peter Hankins cannot consistently assert this sentence”, which is plainly true, and hence, a truth you cannot consistently assert, while I can easily do so. If such a failure thus constitutes evidence towards the fact that the party that can assert something has some inherent capabilities that the party that can’t do so lacks, as arguments of the Lucas/Penrose type insinuate, then I’d be in some sense computationally stronger than you. But of course, I’m not: you can easily create a similar sentence with my name substituted for yours. The same thus may be the case as regards machines unable to assert certain self-referential formal statements. (Indeed, any machine could easily assert the above sentence, thus proving it to be computationally superior to you.)

    In fact, I think that nothing that goes on in our thinking really is noncomputational—actually, for every train of thought I have ever entertained, I could write down the procedure by which I arrived at some conclusion (in fact, all of science depends on this, because only if you can write down a sufficiently specified procedure by which to reach a conclusion can you write it in a paper, or a book, and expect others to follow your reasoning—otherwise, you are resigned to simply writing ‘I know this is true, but I can’t produce an argument for it’, which is really more revelatory than scientific), which is equivalent to creating an algorithm accomplishing the same ‘thought process’.

    In your book, if I’m not misremembering, you discussed the example of the immune system producing essentially random responses to some outside threat, and keeping what works, intending this as a sort of example of the kind of inexhaustible processes you mean—but to me, this actually sounds paradigmatically computational, being basically an example of stochastic optimization. Think genetic algorithm: you randomly mutate candidate solutions in order to select out those that best fit the situation. This sounds, in fact, like just the sort of process capable of attacking the frame problem—given any sort of environment (or some internal representation thereof), generate candidate behaviours towards accomplishing a certain goal, then try them out (by simulation), discard those that perform badly, and mutate those that perform best, rinse, repeat. Sure, one tends to get stuck in local optima—but in fact, that seems to be just what occurs in humans, too. (See the above joke by Hunt regarding the mechanic.)
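    (For what it’s worth, the generate/test/mutate loop just described fits in a few lines of Python; everything here—the ‘OneMax’ bit-counting fitness, the parameter values—is just illustrative.)

```python
import random

def genetic_search(fitness, length=20, pop_size=30, generations=200,
                   mut_rate=0.05, seed=0):
    """Minimal stochastic optimization of the generate-and-test kind:
    keep what works, mutate it, rinse, repeat."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]    # discard what performs badly
        children = [[bit ^ (rng.random() < mut_rate) for bit in parent]
                    for parent in survivors]
        pop = survivors + children          # elitism: the best are kept as-is
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ('OneMax'); an immune-system analogue
# would score binding affinity instead.
best = genetic_search(fitness=sum)
print(sum(best))  # close to the optimum of 20
```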

    OK, so this is already getting too long, but I wanted to mention another point, regarding the similarity between Hume’s problem of induction and problems of noncomputability: there’s in fact a way to see that they’re much closer, and indeed, in a certain sense, equivalent. The key to this is Solomonoff’s formulation of induction: basically, the idea is that when faced with any sort of data, such as a string of bits, its most likely continuation is the simplest one possible (in a mathematical sense, essentially given by Kolmogorov complexity). That is, among the possible continuations of any given sequence, those that are more complex are judged to be less likely—essentially a formalization of Occam’s razor. So here you have a procedure that uniquely gives you a judgment about the likely continuation of a series of data.

    However, the problem is that in order to judge the likelihood of a continuation, you need a prior distribution—usually called the ‘universal prior’. And it’s easy to show that this prior is uncomputable, by reduction to the halting problem. Thus, the closeness between Humean induction and incomputability you noticed is in fact an equivalence (for a certain formalized account of induction, anyway).
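    (Incidentally, while the universal prior itself is uncomputable, one can play with computable stand-ins. Here’s a rough Python sketch using compressed length as a crude proxy for Kolmogorov complexity—a standard trick, though of course only a loose upper bound on the real thing.)

```python
import zlib

def complexity(bits: str) -> int:
    """Crude computable proxy for Kolmogorov complexity:
    the length of the string after zlib compression."""
    return len(zlib.compress(bits.encode()))

def rank_continuations(prefix: str, candidates):
    """Occam-style induction: continuations that make the whole
    sequence simpler (more compressible) are judged more likely."""
    return sorted(candidates, key=lambda c: complexity(prefix + c))

prefix = "01" * 50                  # a highly regular sequence
regular = "01" * 10                 # continues the pattern
irregular = "00111010011010111001"  # breaks it
print(rank_continuations(prefix, [irregular, regular])[0] == regular)  # True
```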

  32. Peter says:

    Thanks Jochen!

    I hope you’ll forgive me if I don’t engage too deeply now with the reasonable points you make about non-computability, partly because I don’t want the conversation to turn into another one about that (and partly because I need to spend more time thinking about it!). I think Penrose established persuasively at least that what’s going on in our minds is not in any recognisable formalism. I think it amounts to a kind of pointing, in fact: whether in the end that pointing yields to formalisation, or something we’d be tempted to call a formal system even if it isn’t like others we’ve entertained before… well, I think not, but I could be wrong and I wouldn’t lose too much sleep over it if I were.
    Good point about the immune system: yes, there are programming techniques, like a ‘zombie wall’, that rest on throwing random variations at a problem – but they work within defined domains where solutions can be recognised in advance (you could argue that that’s the case for the immune system – bonding with something is always a success – but then again we don’t know how bonding is going to happen).

    Very interesting point about Hume – thanks indeed. All new to me (and obviously pretty congenial!)

    Sorry about the scrappy and brief nature of this response; I didn’t want to leave it too long before coming back.

  33. Sci says:

    @Hunt – Will have to finish the book to see if I agree simulations via programs can capture all that goes on with human reasoning. That said, I find the idea that the “laws” of physics – which really are just extrapolations of observed regularities – can be simulated to be a far cry from actually representing an accurate version of reality. How could one simulate the quantum level on a computer without making assumptions about why we see what appears to be genuine randomness?

    Now whether this matters for explaining mental phenomena is debatable, though if Penrose/Kauffman/Heisenberg-the-Younger/etc are right that it does, I still wouldn’t be convinced this prevents eliminativism from being true. The brain being non-computable simply means this one tool in the entire history of our invention is inadequate to the task of properly modeling the mind; and even if consciousness is fundamental it would – AFAICTell – not necessarily mean our intentionality is non-reducible, even if – to go to extremes – Idealism were true.

    No big loss to reject computationalism IMO, as AFAICTell this doesn’t mean rejecting whatever we’ve currently decided “physicalism” means. (I sorta agree with Chomsky that the ‘body’ of the mind-body problem isn’t well defined, but this post is already too long!)

  34. Jochen says:

    No worries, Peter, I know I’m unfortunately prone to text-walling, so I don’t expect detailed point-by-point debates.

    Actually, I just remembered another data point regarding the relationship between AGI and noncomputable or inexhaustible phenomena: Marcus Hutter’s AIXI agent, which can be rigorously shown to be capable of solving any given problem asymptotically as fast as the best special-purpose agent for that task. That is, if there is some way to do what needs to be done in a given situation, then AIXI will figure it out in a time that’s not systematically longer than the time taken by a problem-solver specially designed for the task.

    However, since it works based on Solomonoff induction, the AIXI agent (in this form, at least) is itself noncomputable, thus hinting that the problem of AGI may indeed need noncomputational resources to be solved. It’s not quite so clearcut, though: there exist computable approximations that approach AIXI in the limit of infinite computing power, and that might differ from it only within human tolerance levels if given computing power equivalent to a human brain to work with. But still, I think it’s an interesting connection.
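    (For readers who want the flavour of it: schematically, and simplifying Hutter’s definition from memory, AIXI chooses actions by expectimax over all computable environments, weighted by a Solomonoff-style universal prior:)

```latex
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl[ r_t + \cdots + r_m \bigr]
  \sum_{q \,:\, U(q, a_{1..m}) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

    The inner sum ranges over all programs q (of length ℓ(q) on a universal machine U) consistent with the history of actions, observations and rewards; that sum is the uncomputable part.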

    Regarding whether or not there actually is some noncomputational component to our thinking, I won’t belabor the point; but if there is, it just seems to be a damn shame to me that we don’t get to do any of the cool things that powers beyond those of a Turing machine ought to net us. For instance, many outstanding problems in mathematics can be brought into a form where they’re essentially questions about whether a certain program (tasked with finding a counter-example, say) ever halts (the Riemann hypothesis, for instance). So one might expect a hypercomputational mind to be able to solve them without too much trouble; yet we don’t seem to be able to do that.
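    (Goldbach’s conjecture makes the cleanest example of this kind of reduction—the Riemann hypothesis takes more machinery to phrase this way. The following Python program halts if and only if Goldbach’s conjecture is false, so deciding whether it halts would settle the conjecture.)

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample_search() -> int:
    """Halts (returning the counterexample) iff Goldbach's conjecture
    is false; runs forever iff it is true."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return n
        n += 2

# Spot check: no counterexample below 200, so the search does not halt there.
print(all(any(is_prime(p) and is_prime(n - p) for p in range(2, n))
          for n in range(4, 200, 2)))  # True
```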

    Another thing is the appearance of randomness in the world: certain algorithmically random sequences (skipping loads of technical details) can essentially be viewed as encoding the answer to the halting problem. Thus, to a mind capable of solving it, they would not seem random, but would in fact be completely predictable. Yet they appear random to us. (Perhaps one could attack this empirically: publish some random sequence—perhaps generated physically, via quantum mechanical means—and poll some sufficiently large sample of the populace on what the next bit should be; a success rate significantly greater than chance would point towards noncomputational thought processes.)
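    (Such a poll could be scored with nothing fancier than a one-sided z-test against the 50% chance rate; a quick Python sketch, with made-up deterministic data standing in for the survey results:)

```python
def beats_chance(guesses, truth, z_threshold=3.0):
    """Does a predictor of 'random' bits succeed significantly above
    chance? Normal approximation: under the null, hits ~ N(n/2, n/4)."""
    n = len(guesses)
    hits = sum(g == t for g, t in zip(guesses, truth))
    z = (hits - n / 2) / (n / 4) ** 0.5
    return z > z_threshold  # z > 3 is roughly p < 0.0013, one-sided

truth = [i % 2 for i in range(10_000)]                      # the published bits
oracle = list(truth)                                        # a perfect predictor
at_chance = [1 - b for b in truth[:5_000]] + truth[5_000:]  # exactly 50% hits
print(beats_chance(oracle, truth))     # True
print(beats_chance(at_chance, truth))  # False
```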

    Anyway, my own intuition is that we’re in fact limited to the formalizable in our thought processes, and that that’s exactly why the hard problem is so hard—nature, not being so limited, simply is not fully comprehensible to the algorithmic mind, or at least cannot be brought within a single formal umbrella, but may need disparate, and to some degree incompatible, layers of description. But of course, that’s another matter entirely.

  35. David Duffy says:

    As to Penrose’s arguments about human mathematicians:
    1) Avigad and Harrison’s “Formally Verified Mathematics” is a review of the trend towards computer proof probably becoming the gold standard in the future (it is already impracticable for humans to check some proofs);
    2) Megill, Melvin, Beal (2014), “On Some Properties of Humanly Known and Humanly Knowable Mathematics”, flip the general Lucas-Penrose argument: the set of humanly known mathematical truths (at any given moment in human history) is finite, recursive, axiomatizable, Turing computable, and so either inconsistent or incomplete. They then argue this will be true at any future point in history;
    3) the latest generation of automatic theorem provers use higher-order logic (recently LEO-II validated Goedel’s ontological proof of g-d ;)), and others can derive proofs by analogical argument (Licato et al 2012’s META-R system derives Goedel’s 1st incompleteness theorem from the Liar Paradox by analogico-deductive arguments). If I understand the point correctly, using higher-order logics (which can do meta-logic, embed modal reasoning etc.) is one way of sidestepping the 1st incompleteness theorem, in that they can be inconsistent.

  36. Jochen says:

    Some cool stuff there, David; I’m especially looking forward to taking a deeper look at the Licato et al paper. However, I’m not sure the argument in your point 2 is as strong as it seems at first blush—it’s true that at every point in time human knowledge, and hence human mathematical knowledge, is finite, and since every finite set is recursively enumerable, there thus exists a Turing machine capable of outputting the totality of human mathematical knowledge.

    But this is not the same as saying that therefore, humans, considered as mathematical knowledge producing entities, are necessarily computational, themselves—the statement would likewise be true if we were noncomputational entities, working away for a finite time. It’s sort of a lookup-table argument: for any finite time, we can imagine a giant lookup table perfectly reproducing a human being’s responses in, say, a conversation, and thus being capable of passing the Turing test.

    But this is in fact a red herring: the thing is that for any given such implementation, there exists a time index at which it will fail, which is presumably not true for humans. I.e., if we were to extend the testing time for such a Turing test, then a machine that would have passed the original would fail, while the human would merrily continue chucking out intelligent (or at least intelligible) responses.

    So, too, would it be if we were genuinely noncomputational entities: yes, for any finite time, we could then be emulated by an algorithmic entity, just because any nonrecursive function can be approximated computationally across arbitrary finite stretches. But this does not mean that the function is computable—it’s the behaviour in the limit, which will diverge from any computable function, that is important.

  37. David Duffy says:

    Hi Jochen. I did feel slightly uneasy about the Megill et al argument, mainly because it reminded me of Doomsday-type ones:

    The cardinality of [the] set [of Known Mathematical Truths] will change – and specifically grow – as we come to know novel mathematical truths. But given some fairly reasonable assumptions, the cardinality of this set will be finite for all random minutes m, if m is in the past, the present, or even the distant future.

    So you are giving human mathematicians (in the Penrose version) a pretty abstruse power, in that there is always a Turing machine out there that can get to the same outcome (given that the mathematician has halted successfully). I feel the start of one of those incomprehensible-to-me oracle things coming on…

  38. Partisan review: the Shadow of Consciousness | Writing my own user manual says:

    […] it allows Hankins to isolate a class of problems, lumping them together under the “Inexhaustibility” banner. This sort of problem “[deals] with lists of alternatives which are […]

