Massimo Pigliucci issued a spirited counterblast to computationalism recently, which I picked up on MLU. He says that people too often read the Church-Turing thesis as if it said that a Universal Turing Machine could do anything that any machine could do. They then take that as a basis on which to help themselves to computationalism. He quotes Jack Copeland as saying that a myth has arisen on the matter, and citing examples where he feels that Dennett and the Churchlands have mis-stated the position. Actually, says Pigliucci, Turing merely tells us that a Universal Turing Machine can do anything a specific Turing machine can do, and that does not tell us what real-world machines can or can’t do.

It’s possible some nits are being picked here.  Copeland’s reported view seems a trifle too puritanical in its refusal to look at wider implications; I think Turing himself would have been surprised to hear that his work told us nothing about the potential capacities of real-world digital computers. But of course Pigliucci is quite right that it doesn’t establish that the brain is computational. Indeed, Turing’s main point was arguably about the limits of computation, showing that there are problems that cannot be handled computationally. It’s sort of part of our bedrock understanding of computation that there are many non-computable problems; apart from the original halting problem, the tiling problem may be the most familiar. Tiling problems are associated with the ingenious work of Roger Penrose, and he, of course, published many years ago now what he claims is a proof that when mathematicians are thinking original mathematical thoughts they are not computing.

So really Pigliucci’s moderate conclusion that computationalism remains an open issue ought to be uncontroversial. Surely no-one really supposes that the debate is over? Strangely enough there does seem to have been a bit of a revival in hard-line computationalism. Pigliucci goes on to look at pancomputationalism, the view that every natural process is instantiating a computation (or even all possible computations). This is rather like the view John Searle once proposed, that a window can be seen as a computer because it has two states, open and closed, which are enough to express a stream of binary digits. I don’t propose to go into that in any detail, except to say I think I broadly agree with Pigliucci that it requires an excessively liberal use of interpretation. In particular, I think that in order to interpret everything as a computation, we generally have to allow ourselves to interpret the same physical state of the object as different computational states at different times, and that’s not really legitimate. If I can do that, I can interpret myself into being a wizard, because I’m free to interpret my physical self as human at one time, a dragon at another, and a fluffy pink bunny at a third.

But without being pancomputationalists we might wonder why the limits of computation don’t hit us in the face more often. The world is full of non-computable problems, but they rarely seem to give us much difficulty. Why is that? One answer might be in the amusing argument put by Ray Kurzweil in his book How to Create a Mind. Kurzweil espouses a doctrine called the “Universality of Computation”, which he glosses as “the concept that a general-purpose computer can implement any algorithm”. I wonder whether that would attract a look of magisterial disapproval from Jack Copeland? Anyway, Kurzweil describes a non-computable problem known as the ‘busy beaver’ problem. The task here is to work out, for a given value of n, the maximum number of ones written by any halting Turing machine with n states. The problem is uncomputable in general because as the computer (a Universal Turing Machine) works through the simulation of all the machines with n states, it runs into some that get stuck in a loop and don’t halt.
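The busy beaver task is easy to make concrete for tiny n. Below is a minimal sketch (not Kurzweil’s code) of a Turing-machine simulator running the standard two-state, two-symbol busy beaver champion from the literature, which writes four ones before halting:

```python
# Minimal Turing-machine simulator.
# delta maps (state, symbol) -> (write, move, next_state); 'H' is the halt state.
# This transition table is the known 2-state, 2-symbol busy beaver champion.
delta = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),
}

def run(delta, start='A', max_steps=10_000):
    tape, head, state, steps = {}, 0, start, 0   # tape is all zeros by default
    while state != 'H' and steps < max_steps:
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return sum(tape.values()), steps             # (ones written, steps taken)

ones, steps = run(delta)
print(ones, steps)  # 4 6 -- the 2-state champion writes 4 ones in 6 steps
```

For larger n, enumerating all candidate machines runs into ones that never halt and cannot in general be detected, which is exactly the obstruction described above.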

So, says Kurzweil, is this an example of the terrible weakness of computers when set against the human mind? Yet for many values of n the problem turns out to be solvable, and as a matter of fact computers have solved many such particular cases – many more than have actually been solved by unaided human thought! I think Turing would have liked that; it resembles points he made in his famous 1950 essay ‘Computing Machinery and Intelligence’.

Standing aside from the fray a little, the thing that really strikes me is that the argument seems such a blast from the past. This kind of thing was chewed over with great energy twenty or even thirty years ago, and in some respects it doesn’t seem as important as it used to. I doubt whether consciousness is purely computational, but it may well be subserved, or be capable of being subserved, by computational processes in important ways. When we finally get an artificial consciousness, it wouldn’t surprise me if the heavy lifting is done by computational modules which either relate in a non-computational way or rely on non-computational processing, perhaps in pattern recognition, though Kurzweil would surely hate the idea that that key process might not be computed. I doubt whether the proud inventor on that happy day will be very concerned with the question of whether his machine is computational or not.

26 Comments

  1. Jorge says:

    Can someone please give me an example of “non-computational processing”? What does that even mean?

  2. haig says:

    >”In particular, I think in order to interpret everything as a computation, we generally have to allow ourselves to interpret the same physical state of the object as different computational states at different times, and that’s not really legitimate.”

    Well, it might be important to make the point that pancomputationalism does not imply everything is a *universal* computer at all times, but has the potential of universality if configured into the appropriate organization. When nature computes it is computing specific computations, not all possible computations. The act of interpreting the same physical state into different computational representations at different times is itself a computation performed by the observer, not something inherent in the physical state.

    @Jorge
    It’s probably better to say non-Turing-computable or, synonymously, super-Turing computation (or hypercomputation). Turing was aware of this possibility and wrote a good deal about oracle machines, which can be queried, but not implemented, by Turing machines.

  3. Jorge says:

    Haig, thanks. I had heard of hypercomputation, but it’s still largely considered hypothetical (to put it mildly).

    Also, I agree with your comment above. I can think of several examples where things not usually thought of as computers can be configured in a state that allows computation. Ever seen the Minecraft Turing machine? It’s hilarious.

  4. haig says:

    Forgot to add:

    >”But without being pancomputationalists we might wonder why the limits of computation don’t hit us in the face more often.”

    Actually, the limits of computation pretty much shape the world we see to a large extent. Limits to computability like the halting problem, and limits to computational complexity like NP-hardness, describe a world that computes, but within constraints, such that it evolves in hierarchical ‘stages’ (for lack of a better word) through time instead of everything happening all at once (or allowing ‘peeking’ into future states not entailed by the upper limit of the process of evaluating from the current state). I think it’s actually quite a profound set of ideas.

  5. scott bakker says:

    Aside from Copeland’s corrections, it all strikes me as another example of one philosopher excoriating another’s unexplained explainers. Nobody knows what they’re talking about at some level, and this is as true of fundamental physics as anything else. What’s a quantum ‘field’? What’s a quantum ‘particle’? Are we just talking about mathematical instruments, or is there some kind of coherent ontology we can grasp? Is there any way to answer this question?

    Unexplained explainers are simply an occupational hazard. *Of course* computationalism, causation, information, mechanism, and so on are plagued with questions. I’m not convinced those questions are as pointed as those plaguing ‘spooky emergence,’ however. Healthy skepticism of metaphysics of any kind is what the lesson should be, not that computationalism is a naked Emperor. We live in a nudist colony, after all!

  6. haig says:

    @Scott

    >”it all strikes me as another example of one philosopher excoriating another’s unexplained explainers.”

    I’ve pointed this out before, but I want to make clear again how pernicious I think this type of critiquing is. You can always find fault in anything if you hold to the view that the ‘explainers’ are unexplained themselves and hence suffer from fatal limitations that leave us intellectually stranded. I called it nihilistic previously, and that may have been overstating things; it is, however, a form of radical skepticism which leads to relativism and all the other problems of deconstructivist post-modernism. The more I think about it, the more similar it seems to solipsism: both are positions you end up in through radical skepticism. Nobody takes solipsism seriously; that should be a clue.

    >”Nobody knows what they’re talking about at some level, and this is as true of fundamental physics as anything else. What’s a quantum ‘field’? What’s a quantum ‘particle’? Are we just talking about mathematical instruments, or is there some kind of coherent ontology we can grasp? Is there any way to answer this question?”

    Physicists know exactly what they’re talking about. The ‘unreasonable effectiveness of mathematics’ in describing the natural world shows us that nature has organization to it: our brains evolved the capability to represent nature’s order through the language of mathematics, so math is both a formalization of our advanced ability for numeracy and an abstract representation of how the universe is organized *for real*. In a sense, both the mathematical formalists and the platonists are right in their own way.

    Particles and fields have precise mathematical definitions based on rigorous empirical data. Imagine a projectile being shot out of a cannon; watch the projectile float along in an arc and land some distance away. Now, we can quantitatively analyze this phenomenon with mathematics and create equations that perfectly describe it. We can use these equations to graph the future positions, or simulate it in a computer, and thus recreate the ‘real world’ via abstraction. The more detail we put into our equations and simulations, the closer we get to actually recreating the real-world phenomena. In the case of quantum physics or black holes etc., we don’t see the arc of a flying projectile with our eyes, we see side-effects like particle spectra or gravitational lensing, but that is enough for us to do the same thing we did with the cannon: we build up abstract models and converge on objective reality.

    >”I’m not convinced those questions are as pointed as those plaguing ‘spooky emergence,’ however.”

    I don’t want to misinterpret you, so can you explain your definition of ‘spooky emergence’?

  7. scott bakker says:

    “I’ve pointed this out before, but I want to make clear again how pernicious I think this type of critiquing is. You can always find fault in anything if you hold to the view that the ‘explainers’ are unexplained themselves and hence suffer from fatal limitations that leave us intellectually stranded. I called it nihilistic previously, and that may have been overstating things; it is, however, a form of radical skepticism which leads to relativism and all the other problems of deconstructivist post-modernism. The more I think about it, the more similar it seems to solipsism: both are positions you end up in through radical skepticism. Nobody takes solipsism seriously; that should be a clue.”

    Not sure how the inference to solipsism works. What you call ‘radical skepticism’ I simply call cognitive humility, which I think is amply justified by what cognitive psychology has shown. There’s nothing ‘post-modern’ about it. We are horrible at theoretical cognition, period, for a variety of reasons that are becoming clearer. We pretty clearly seem to be hardwired to overcommit to claims, for one! But the list of human cognitive shortcomings is getting longer every day. Thus the need for science and its institutional and methodological prostheses.

    Saying we are saddled with unexplained explainers (as we pretty clearly are) isn’t the same as saying we’re a bunch of ‘know-nothings,’ only that we’re NOT a bunch of ‘know-everythings,’ or that at every turn of human inquiry we are confronted with profound mysteries necessitating the employment of unexplained explainers. I don’t see this as all that controversial (and this is my complaint against Pigliucci: that his thesis is trivial). Pointing out all the things that physicists know, haig, doesn’t change the fact that they themselves can’t find any consensus on their ontology. Are they all Copenhagenesque instrumentalists (which could itself be read as a skepticism)? Not by a long shot.

    “I don’t want to misinterpret you, so can you explain your definition of ‘spooky emergence’?”

    Craver’s usage, basically. There’s ‘weak emergence,’ which simply refers to the way organization generates novel causal effects, how the ‘whole is more than the sum of the parts’ in biomechanism. There’s ‘epistemic emergence,’ which refers to the way complexity renders a system’s behaviour unpredictable. And then there’s ‘spooky emergence’ of the kind you find in philosophical attempts to naturalistically square the phenomenal and intentional circles: unprecedented ‘mental properties’ somehow just ‘emerge.’

  8. haig says:

    @Scott

    > “Not sure where how the inference to solipsism works. What you call ‘radical skepticism’ I simply call cognitive humility, which I think is amply justified by what cognitive psychology has shown. There’s nothing ‘post modern’ about it.”

    Cognitive humility is needed, yes, but I feel you extend it past the point of being useful or even justified. One can doubt everything except one’s own thoughts a la Descartes, but to extend that doubt to the point of solipsism, by denying everything except one’s own thoughts, goes too far. In the same way, we can learn from the cognitive deficits that science has shown we have, but to use those findings to claim that those deficits are responsible for the metaphysical mysteries we are confronted with, again, goes too far.

    Furthermore, doubting our ability to ever understand ontological truths has similar implications to relativistic doubts of our ability to form objectively justified true beliefs in general. Our understanding of what reality *is* plays a crucial role in how we live (yes, I’m objecting to Hume’s is/ought distinction and all the scientists/secular humanists who abide by it), which is why I brandished the postmodern label.

    > “Pointing out all the things that physicists know, haig, doesn’t change the fact that they themselves can’t find any consensus on their ontology.”

    Give them time. Again, jumping to the conclusion that we can’t find consensus because of our cognitive limitations, instead of acknowledging that it’s a work in progress, is premature (and I can argue that evidence shows it is wrong).

    > “*Of course* computationalism, causation, information, mechanism, and so on are plagued with questions. I’m not convinced those questions are as pointed as those plaguing ‘spooky emergence,’ however.”

    I agree. Computation et al. fit the picture of the world we’ve been discovering; ‘spooky’ emergence, however, sticks out like a sore thumb. It’s a skyhook; the others are cranes.

  9. scott bakker says:

    haig: I’m not convinced the disagreement between us is all that trenchant then – in general. I certainly never meant to imply that any single mystery is essential, only that answers inevitably raise questions, and, once again, the necessity of unexplained explainers. Where you and I part ways is on the issue of the heuristic nature of human cognition: I’m much more pessimistic than you are when it comes to our ability to make ontological sense of fundamental physics AS WELL AS conscious cognition/experience simply because heuristics entail environmental adaptation, or ‘problem ecologies,’ that are far different from those we’re tackling in either realm.

  10. haig says:

    scott:

    I agree with you on where the dispute lies. I can definitely see where you’re coming from, but I still reject that heuristic view, not because it is inconsistent or prima facie incorrect, but because I think I’ve reached a different and more correct view. Only cranks will defend themselves without clear and detailed papers which have been verified by the appropriate experts, so I guess I’ll have to agree to disagree for now.

  11. haig says:

    Scott:

    Just for clarification, I was calling myself, not you, a crank if I pressed the issue further without going through the proper academic channels first (which I’m slogging through currently).

  12. scott bakker says:

    I prefer ‘inspired crank’! I’m a novelist for a reason ;)

  13. VicP says:

    Scott: Philosophy, mathematics, chemistry, engineering and physics are precise systems of language. Languages shared amongst social groups are essentially agreed ways that we are all programming our brains when we interact within our groups. The upshot may be that the brain itself is only a highly programmable object that sits atop a universal “I” in the body. Novelists are freer thinkers who get to cross the social-group languages in the brain, creating all types of interesting stuff for wider audience consumption.

    Of course there’s the fellow on YouTube who won’t read your novels because you spend the opening chapter building a complex base of language and characters (language) required in order to read the rest of the novel.

  14. Charlie Chapple says:

    Another fascinating article. The debate, though old and chewed over, remains worth discussing. Computers are a good comparison to the mind, I’d say, but not anywhere near an exact one. We can compare the two, and computers can give us a bit of insight into how some properties of our brain work. But ultimately I would agree that pure computation, no matter how advanced, could not result in something that functions just like a human brain. Computation is just too one-dimensional.

    Also, between haig and Scott Bakker, would it be too much of a cop-out to say both views are actually beneficial to philosophy? The ‘pernicious’ critique can definitely be offensive in that it ridicules someone’s honest hard work, but simultaneously can help encourage perspective and creativity. Though sometimes too much of this ‘perspective’ can make it impossible to say anything at all. At the same time, attacking a problem like you can solve it, know it and understand it, no matter how convoluted or abstract, can lead to some pretty insightful and fascinating discoveries. However, there can also be an issue of developing too narrow a view.

  15. Callan S. says:

    Jorge, for some reason it existentially disturbs me to see sunsets and dawns upon that Turing machine’s circle – it’s busily working away on ‘what’s important’, utterly unaware of the massive cycle of day and night behind and all around it.

  16. Joe Duncan says:

    Possibly this is a bit of an aside, but in discussions about computational theories of mind, there’s always a giant pink elephant in the room that no one ever seems to address.

    Namely, the confusion between “computational” and “analytical”, and the equivocation on what it means to “solve” a problem.

    The most common argument against computationalism is essentially that humans can solve non-computable problems, therefore the brain can’t be computational. I am constantly boggled by the fact that no one ever seems to address this equivocation.

    All it means for a problem to be non-computable is that computational processes cannot provide an analytical answer. That is, they can’t arrive at a 100% guaranteed correct solution. Nowhere is it implied that computational processes can’t provide ANY answer at all to non-computable problems.

    Indeed there are a great many 100% computational algorithms which CAN provide “good enough” solutions to non-computable problems.

    The argument that the brain can’t be a computational device because human beings can come up with answers to non-computable problems rests on two very faulty premises:

    1) that the ONLY solutions which can be provided by computational processes are analytical ones

    AND

    2) that ALL the solutions provided by human beings are analytical

    I can’t see any way in which both of those premises aren’t trivially false.

    1) is demonstrably false, as we’ve already created computational algorithms which can provide answers to non-computable problems.

    2) is also demonstrably false, as it’s a known fact that people come up with sub-optimal or “good enough” solutions to problems all the time.
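    Joe’s point can be sketched concretely: a step-bounded checker is a purely computational procedure that answers the halting question correctly for many inputs, yet is guaranteed to be wrong on some (a toy illustration, not a real termination analyser; `guess_halts`, `collatz` and `countdown` are invented names):

```python
def guess_halts(step, x, budget=1000):
    """Guess whether iterating `step` from x ever reaches 1 (our 'halt' state).

    A 'halts' verdict reached within the budget is certainly correct;
    a 'never halts' verdict is only a guess -- usually right, sometimes wrong.
    """
    for _ in range(budget):
        if x == 1:
            return True
        x = step(x)
    return False

collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
countdown = lambda n: n - 1

print(guess_halts(collatz, 27))       # True: verified halting within the budget
print(guess_halts(countdown, 10**6))  # False: wrong -- it does halt, just slowly
```

    The second call is exactly the “good enough but sometimes wrong” behaviour described above: for most inputs the answer is deterministically correct, but no choice of budget makes it correct for all of them.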

  17. Peter says:

    Joe – good points; actually I think Turing himself made some similar ones in his 1950 paper.

  18. haig says:

    @Joe

    > “All it means for a problem to be non-computable is that computational processes cannot provide an analytical answer. That is, they can’t arrive at a 100% guaranteed correct solution.”

    Not exactly. The halting problem is the original exemplar of an uncomputable decision problem, and an answer is not possible, not just because of precision or uncertainty, but because it is provably unattainable. Unless, of course, you loosen your criteria for acceptable answers to a 50/50 chance, which is not an answer at all.

    > “Indeed there are a great many 100% computational algorithms which CAN provide “good enough” solutions to non-computable problems…1) is demonstrably false, as we’ve already created computational algorithms which can provide answers to non-computable problems.”

    Wrong, you have misunderstood what ‘non-computable’ means.

    > “The argument that the brain can’t be a computational device because human beings can come up with answers to non-computable problems rests on two very faulty premises…”

    No, those are not the premises that underlie the argument. To simplify: people like Penrose who claim certain processes in the mind are uncomputable are talking about what Hofstadter referred to as ‘jumping outside the system’. A formal system past a certain capability threshold has limitations on what it can prove internally, but humans seem to be able to see through these limitations. The arguments for and against this have merit and it remains controversial.
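    The ‘provably unattainable’ point above is Turing’s diagonal argument, which can be sketched directly in Python: given any candidate halting decider, one can construct a program the candidate must misclassify (`always_no` below is a hypothetical stand-in for a proposed decider):

```python
def contrary_program(halts):
    """Given any candidate decider `halts`, build a program it gets wrong."""
    def contrary():
        if halts(contrary):
            # The candidate said 'halts', so loop forever to refute it.
            while True:
                pass
        # The candidate said 'loops', so halt immediately -- also refuting it.
    return contrary

always_no = lambda program: False  # hypothetical decider: claims nothing halts
c = contrary_program(always_no)
c()  # returns at once, so always_no misclassified c
```

    Whatever `halts` does, `contrary` behaves oppositely to the prediction made about it, so no total, always-correct decider can exist.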

  19. haig says:

    @Peter

    Yes, Turing covered some of this in his 1950 paper, but the argument he made was to show that humans were not infallible, and computers need not be infallible to emulate human behavior. Penrose-style arguments claim that humans, though fallible, are still in fact more powerful than Turing-complete machines through the uniquely human intuitive sense that underlies certain creative endeavors, meta-mathematics being his main example.

  21. Joe Duncan says:

    >…an answer is not possible, not just because of precision or uncertainty, but because it is provably unattainable…

    I think you misunderstand the halting problem. You are equivocating on the meaning of “answer” here (precisely the equivocation I was talking about). The only answers to non-computable decision problems which are “provably unattainable” are ones which are *always correct*; “non-computable” applies only and specifically to decision answers qualified as such. This distinction is important. It is simply untrue that “an answer is not possible”. Like I mentioned earlier, it has already been done. People have created and executed computational processes which provide answers to non-computable decision problems (like the halting problem) which will give you a correct answer MOST of the time, but are sometimes wrong (I’m not talking probabilistically – for most of the inputs you give them, they will deterministically give you the correct answer, for some they will be wrong). This is established, it’s not something I need to demonstrate. Google “Termination Analysis”.

    >Wrong, you have misunderstood what ‘non-computable’ means.

    No, I understand it quite well. You don’t seem to understand my point. As an aside, I say “seem” because it is generally considered impolite to make bald assertions of someone’s level of knowledge, without evidence, because you disagree with what they say. Maybe you actually do understand my point – it doesn’t appear so – but I’m giving you the benefit of the doubt and saying “seem”.

    >No, those are not the premises that underlie the argument. To simplify: people like Penrose who claim certain processes in the mind are uncomputable are talking about what Hofstadter referred to as ‘jumping outside the system’. A formal system past a certain capability threshold has limitations on what it can prove internally, but humans seem to be able to see through these limitations.

    You realize this is a self-contradictory paragraph? You first claim the two premises I mention aren’t the ones that underlie the argument, and then you simply paraphrase and restate them.

    >…certain processes in the mind are uncomputable…
    >…humans seem to be able to see through these limitations…

    This is equivalent to saying that the mind can give answers to non-computable questions. On top of that, the only way this is a meaningful, or non-trivial, statement is if the strong sense of “answer” is being used here: i.e. that the processes in the mind can give answers to non-computable questions which are qualified with “always” and “correct”. As I mentioned earlier, this distinction is important because it’s the defining line between “computable” and “non-computable”. Saying that the mind can provide answers to non-computable questions that are usually correct does not exclude the mind being computational, computers can do so as well.

    >…formal system past a certain capability threshold has limitations on what it can prove internally…

    You are talking about Gödel’s first incompleteness theorem here. The Turing machine + halting problem essentially demonstrate Gödel’s theorem. A Turing machine is a formal system (in fact Gödel himself saw Turing machines as the most canonical definition of formal systems) and the halting problem can be represented as an expression to be proven in that system. The fact that no general computational algorithm exists which can always provide a correct answer to the halting problem is an example of an expression (the halting problem) which can’t be proven (given an always correct answer) within a formal system (the Turing machine). The fact that not *every* true statement within a formal system can be proven internally in no way means that *most* true statements within a formal system can’t be proven (where ‘proven’ means derived from the base axioms and inference rules).

    If you want to use the Church-Turing thesis or Gödel’s first incompleteness theorem to show that the mind cannot be computational (i.e. to falsify the computational theory of mind), then you need to demonstrate that the human mind can provide answers which a computational process (ontologically) cannot.

    In computational speak, you need to show that the mind can provide always-correct answers to non-computable problems, or failing that, that computers can’t provide the same kind of answers to non-computable problems that people can. Neither of these things has ever been demonstrated, and there’s a lot of evidence that both are false.

    In formal systems speak, to show that “humans seem to be able to see through these limitations” you need to show that the mind can prove (generate) *every* true statement, without generating false ones, in formal systems for which a computer cannot (because that’s what “limitation” means in this sense). If you can’t do that, then you haven’t shown that humans “see through these limitations”. Again, it is far from certain that human beings can do this, and there is good reason to believe we cannot.

    Talk of human beings being: “more powerful than turing-complete machines through the uniquely human intuitive sense” is question begging. If you want to prove that the mind is not computational, you can’t simply point at something the mind does (intuition) and claim it’s not computational – that’s what you’re trying to show! You need to demonstrate that “intuition” can solve non-computable problems in the strict sense which defines the difference between computable and non-computable.

  22. haig says:

    @Joe

    Apologies if I came off impolite, I was in a rush.

    > “People have created and executed computational processes which provide answers to non-computable decision problems (like the halting problem) which will give you a correct answer MOST of the time, but are sometimes wrong…”

    Yes, the halting problem is undecidable in general, given arbitrary inputs (though Chaitin has a nifty way to compute the first digits of the halting probability), and individual cases can be amenable to analytical solutions, which is what program-analysis techniques like the termination analysis you mentioned do. However, in order for those techniques to work, the computer scientist has to specifically tailor the analysis to the particular problem instance, using specific knowledge of the algorithm in question. The general thrust of the halting problem still stands: there is no general way to mechanically decide on all inputs, and in this general form it is an existence proof of a class of formally undecidable problems. I’m using this general form to show that uncomputable problems provably exist in theory; whether or not there are physical processes that belong in that class is an open question, and holding to the strong Church-Turing thesis would a priori deny their existence. Penrose has claimed that an interpretation of the measurement problem in quantum physics does belong in this class and, furthermore, that brains exploit this property. Saying that there are heuristics that solve certain instances of the halting problem just shifts the goalposts by reducing the original problem to solvable instances; it does not take away from the implications of the original proof, which is what I was getting at.

    > “You realize that this is a self-contradictory paragraph? You first claim the two premises I mention aren’t the ones that underly the argument, and then you simply paraphrase and restate them.”

    Maybe I was unclear; I apologize. Your two premises stated that 1) the only solutions which can be provided by computational processes are analytical ones, and 2) that ALL the solutions provided by human beings are analytical. Premise #1 is correct, assuming Turing-equivalent machines; again, by using heuristics like program analysis you reduce the general halting problem to specific solvable instances, you have NOT demonstrated that uncomputable problems are solvable. Premise #2 is not correct, but it is also not an assumption underlying the non-algorithmic mind argument; in fact, the argument assumes that there must be solutions provided by humans that are not directly analytical.

    > “Saying that the mind can provide answers to non-computable questions that are usually correct does not exclude the mind being computational, computers can do so as well.”

    The mind cannot provide answers to non-computable questions; humans are as limited as computers in this regard. This is where I part ways with Penrose and alter his ideas a bit. Penrose says there are mathematical problems that humans solve which computers cannot, because of computability limitations. What I’m saying is that computers can’t solve those problems because of *computational complexity* limitations; the only reason humans can solve them within polynomial time is that minds take advantage of non-computable physical processes which allow them to collapse the search space into a tractable path.

    > “If you want to use the Church-Turing thesis or Gödel’s first incompleteness theorem to show that the mind cannot be computational (i.e. to falsify the computational theory of mind), then you need to demonstrate that the human mind can provide answers which a computational process (ontologically) cannot.
    In computational speak, you need to show that the mind can provide always correct answers to non-computable problems, or failing that, that computers can’t provide the same kind of answers to non-computable problems that people can.”

    As I previously stated, my version of the argument differs from Penrose’s in that it does not assume humans are solving non-computable problems; it assumes that our brains are exploiting non-computable physical processes, allowing them to solve in polynomial time certain computable problems which computers would find intractable within the universe’s given space/time bounds.

    > “You need to demonstrate that “intuition” can solve non-computable problems in the strict sense which defines the difference between computable and non-computable.”

    So I’ve altered Penrose’s claim from minds solving non-computable problems to minds using non-computable physical processes in order to make certain computational problems tractable in polynomial time. ‘Intuition’ is, admittedly, an ambiguous term, but my argument is that the subjective experiences underlying the creative/aesthetic/insightful (call it what you will) processes that minds use to solve such problems are ‘intuitions’ that arise out of non-computable physical processes. Furthermore, ‘intuition’ is just one part of the full repertoire of conscious subjective experience of qualia (i.e. the hard problem) that arises out of these non-computable physical processes.

  23. Vicente says:

    Haig,

    > “minds take advantage of non-computable physical processes which allow them to collapse the search space into a tractable path”

    Could you please elaborate just a bit on this, and provide an example. Thank you.

  24. haig says:

    @Vicente

    The possibility space of nth-ordered axiomatic systems, which allow statements to be proven in their respective (n-1)-ordered systems, is not searchable in polynomial time; so although finite Turing-equivalent algorithms are powerful enough to conduct the search, they won’t find the proof within the space/time bounds of the universe. Humans, however, creatively construct these nth-ordered systems of axioms and find these proofs relatively quickly. What enables humans to do this are subjectively experienced intuitions (an aspect of consciousness) which are themselves non-algorithmic, and based on classically uncomputable physics.
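    The scale of the intractability claim can be illustrated with a back-of-envelope sketch. The branching factor and proof length here are assumed toy numbers, not a model of any actual proof system: if each proof step chooses among b applicable inference rules, blind search over derivations of length n must consider b**n candidates.

    ```python
    # Toy back-of-envelope (assumed numbers, not a real proof calculus):
    # blind proof search over derivations of length `depth`, with `branching`
    # rule choices per step, has an exponentially large candidate space.

    def candidate_derivations(branching: int, depth: int) -> int:
        return branching ** depth

    # Even a modest system, 10 rules and 100-step proofs, gives 10**100
    # candidates, dwarfing the roughly 10**80 atoms estimated in the
    # observable universe.
    print(candidate_derivations(10, 100) > 10 ** 80)  # True
    ```

    This shows only that naive exhaustive search hits physical bounds quickly; whether minds escape this via non-computable physics, as the comment argues, is of course the contested claim.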

  25. Tomas says:

    I see there is a lot of discussion on the internet about the application of Gödel’s first incompleteness theorem (non-computability) to human reasoning, but what about its application to qualia? Qualia and Gödel sentences seem to share a baffling property: their existence depends on a system, yet they cannot be derived from that system. Namely, Gödel sentences are defined as follows:

    Gödel sentences are true statements constructed in the language of theory T but they cannot be derived from theory T.

    Maybe we could generalize the concept of “theory” to include not just a mental/logical system but any system (including physical) and rephrase the definition:

    Gödel ENTITIES are REAL ENTITIES constructed in the language of SYSTEM S but they cannot be derived from SYSTEM S.

    If we regard qualia as Gödel entities, the system S could be the brain (a system of neuronal firings) and qualia would be constructed “in the language” of the brain (that is, out of neuronal firings) but not derivable from the brain (the system of neuronal firings). Qualia would then be new axioms that are both inside the brain and outside the brain. Could this be possible?
