A somewhat enigmatic report in the Daily Telegraph says that this problem has been devised by Roger Penrose, who says that chess programs can’t solve it but humans can get a draw or even a win.

I’m not a chess buff, but it looks trivial. Although Black has an immensely powerful collection of pieces, they are all completely bottled up and immobile, apart from three bishops. Since these are all on white squares, the White king is completely safe from them if he stays on black squares. Since the white pawns fencing in Black’s pieces are all on black squares, the bishops can’t do anything about them either. It looks like a drawn position already, in fact.
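
As a quick illustration of the parity argument (a sketch of my own, not from the Telegraph report or the puzzle itself): a square’s colour depends only on the parity of file + rank, and a diagonal move changes file and rank by equal amounts, so a bishop can never change square colour, and a bishop on a white (light) square can never give check to a king standing on a black (dark) square.

```python
# Sketch: a light-squared bishop never attacks a dark square, because square
# colour is just the parity of file + rank, and diagonal moves preserve it.

def is_dark(square: str) -> bool:
    """a1 is dark; colour alternates with the parity of file + rank."""
    file = ord(square[0]) - ord('a')   # a=0 ... h=7
    rank = int(square[1]) - 1          # 1=0 ... 8=7
    return (file + rank) % 2 == 0

def bishop_attacks(square: str) -> set[str]:
    """Squares a bishop on `square` attacks in one move on an empty board."""
    f0, r0 = ord(square[0]) - ord('a'), int(square[1]) - 1
    return {chr(ord('a') + f) + str(r + 1)
            for f in range(8) for r in range(8)
            if abs(f - f0) == abs(r - r0)}

if __name__ == "__main__":
    light_bishop = "f1"                      # a light square
    assert not is_dark(light_bishop)
    # Every square it attacks is also light, so a king on dark squares is safe.
    assert all(not is_dark(t) for t in bishop_attacks(light_bishop))
    print("A light-squared bishop only ever sees light squares.")
```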

I suppose Penrose believes that chess computers can’t deal with this because it’s a very weird situation which will not be in any of their reference material. If they resort to combinatorial analysis, the huge number of moves available to the bishops is supposed to render the problem intractable, while the computer cannot see the obvious consequences of the position the way a human can.

I don’t know whether it’s true that all chess programs are essentially that stupid, but it is meant to buttress Penrose’s case that computers lack some quality of insight or understanding that is an essential property of human consciousness.

This is all apparently connected with the launch of a new Penrose Institute, whose website is here, but appears to be incomplete. No doubt we’ll hear more soon.

33 Comments

  1. 1. Luís Ferreira says:

    I’m not fond of ambiguous sentences. “chess programs can’t solve it” (right now) shouldn’t be confused with “chess programs can’t solve it” (never, ever, due to their nature). While the former may be true, the latter is bound to become false, given enough time.

    If only people knew computers a bit better… most people are still back in the times of “computers can only do what you tell them to do”.

  2. 2. Jochen says:

    On the face of it, that’s a somewhat bizarre claim—at least, as usually understood: it’s clear that a sufficiently powerful computer will eventually find the solution, just by brute force; so there’s nothing in principle uncomputable about the puzzle (so it’s not some chess analogue of Penrose’s Gödelian argument).

    Instead, it’s argued that the brute-forcing strategy will quickly exhaust all available computational resources, which may well be a true claim—but then, this merely tells us that human brains don’t work by brute forcing through all options. Which of course nobody claims they do.

    In the end, if somebody finds a solution, then it’ll be something that can be written down, and proven, in a short series of steps. But then, at least in principle, a computer can likewise find that proof. Perhaps not a traditional chess program; but that then just tells us something about the way we write chess programs, not about the difference between human and artificial intelligence.

    (Of course, this ignores that no computer ever really ‘plays chess’; rather, there are some computational processes whose states we can interpret as chess moves. But that’s another matter.)

  3. 3. Hunt says:

    I think the computational remedy for conundrums like this is something like the distinction between higher-order logics and first-order predicate logic, that is, logics that account for statements about statements (about statements… etc.). You can’t simply reason (in the computational sense) about problems. A machine must have the capacity to reason about reasoning, or reason about reasoning about reasoning, etc. Only then can things like “hopeless outcome” or “it is not worth continuing” be formalized.

  4. 4. bob cousins says:

    > this merely tells us that human brains don’t work by brute forcing through all options.

    That is the only point of this puzzle.

    However, the real point of the puzzle is that it is a “publicity stunt” for the launch of the new Penrose Institute. In that respect it has had some degree of success: people are talking about it.

    In the context of Dawkins and memes, the puzzle appears in a different light. The real power of human insight is to apply new contexts.

  5. 5. Callan S. says:

    If it’s already in a draw state, then why would it be remarkable that humans can get a draw state?

    And how does he know humans can get a win? I presume a genuine win, not in a ‘this game sucks, I’m throwing the match’ way.

  6. 6. David Bailey says:

    Assuming that existing chess programs do not recognise this particular bizarre situation, I think what this illustrates is that given any particular failing of an AI program, it is always possible to graft on a little extra code to eliminate that one deficiency.

    The problem is that a collection of ways of solving particular problems is not the same as actual understanding – even if it can fool people into believing that the program understands what it is doing.

    Let’s imagine a car that can drive itself, and imagine for a moment that we mean the real thing – a car that would drive from a house in the suburbs, all the way to another location (i.e. not a car that would only drive on specially prepared highways). A moment’s thought will show that the task of writing such a program is totally open ended. For example, suppose that while driving, I see a lorry with a really badly secured load; I probably recognise that situation and pull back or try to overtake the vehicle. I once met a riderless horse loose on a motorway, etc., etc. The number of such situations is essentially limitless. Driving, like so many other tasks, requires an ability to apply our understanding of the real world – not just an ability to follow particular algorithms – even though these may seem to approach the performance desired.

    This is what Penrose’s example illustrates – a program that doesn’t really understand (which maybe includes all computer programs) can always be fooled with carefully crafted examples.

  7. 7. Michael Murden says:

    To Jochen – I’ll bite. What’s the difference between what computers do when they ‘play chess’ and what humans do when they play chess?

    To David Bailey – Yes, but in fairness humans can also be fooled by carefully crafted examples.

    This reminds me of something from a few posts ago about Doug Lenat and the CYC project. The idea of teaching computers common sense seems to be based on the idea that we know to some extent how humans acquire common sense and the ability to apply it to their interaction with the world. We don’t know even a little bit (from a neuromechanical perspective) about how humans do this, so I would agree that attempts to incorporate the kind of common sense that would allow humans to understand chess positions that never occur in actual chess games are premature, but to claim that doing so is perpetually impossible is to ascribe supernatural characteristics to human mentation. I think that’s premature as well.

  8. 8. David Bailey says:

    To Michael Murden

    Agreed – we can be fooled, but it is worth remembering that that was a fairly simple chess problem – you didn’t need to be anywhere near the Grandmaster level at which these programs play to understand it!

    This is an example of what is referred to as the brittleness of traditional AI (as opposed to artificial neural nets) and it is a common problem. Each time the AI program can be patched to fix a specific issue, but other problems turn up – rather reminiscent, I think, of the way the set of axioms in Gödel’s theorem can be expanded to deal with an undecidable statement, but that doesn’t stop other undecidable statements cropping up indefinitely.

    I think this does illustrate that just because a program can play chess, it does not mean that it understands chess – just as a program doing anything – solving quadratic equations, for example – does not understand what it is doing. For that reason, I don’t think it is reasonable to expect that a computer program playing brilliant chess tells us anything about human (or animal) consciousness.

  9. 9. Jochen says:

    What’s the difference between what computers do when they ‘play chess’ and what humans do when they play chess?

    Well, a computer program stands to a game of chess in the same relation as a text does to its content: it needs to be interpreted in the right way, while mental content doesn’t (on pain of circularity). Saying ‘this device computes x’, like saying ‘this text describes y’, always involves an act of interpretation that, in principle, could be different—we could imagine a language such that the text makes perfect sense in that language, but describes something else entirely; and likewise, we could imagine an interpretation such that the computer doesn’t play chess, but does something else entirely. The interpretation of the text or of the machine states isn’t intrinsic, but at least to some degree arbitrary, in contradistinction to mental states, which at least appear to have definite content.

  10. 10. VicP says:

    So what is the goal for a computer in solving the chess problem? Human brains evolved over thousands of years to solve environmental problems – find the food supply, avoid the predator, produce offspring… – or rather, that goal-directedness is a survival instinct shared with earlier organisms. It really is the underlying instinct behind thought experiments like the trolley problem. Perhaps the problem is that we haven’t properly modelled, in a computer program, the survival instinct and the complex human visual system, which can “see” the problem and get the answer more readily for an evolved human game like chess.

  11. 11. zarzuelazen says:

    Hi,

    The theories of Penrose are completely daft. The chess problem is trivial for humans *and* computers. You can see this easily simply by noting that any line of reasoning you can come up with can equally well be programmed into a computer.

    Penrose thinks that human intelligence can’t be computational because we seem to be able to bypass Godel’s theorem and make creative mathematical leaps of thought. Since computers are limited by Godel (says Penrose), we can do things computers can’t.

    But his arguments don’t hold any water. There’s a far simpler explanation as to how we can beat Godel that doesn’t involve any non-computability.

    The Godel theorems are only limitations on *formal* (deductive) systems. To beat these limitations, we simply use *non-deductive* methods, called induction and abduction. But these non-deductive methods are still computational, so there’s no reason why computers can’t deploy them as well.

    Induction is pattern recognition and probabilistic reasoning. We notice patterns and extrapolate them to conclusions that aren’t certain – instead we use probabilities.
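
    (A toy illustration of that point, my own sketch rather than anything in the comment: Laplace’s rule of succession is about the simplest mechanised form of “extrapolate the pattern, but only probabilistically”.)

    ```python
    # Toy sketch of induction as probabilistic extrapolation (Laplace's rule of
    # succession): a pattern observed n times yields a confident but never
    # certain prediction, computed mechanically.

    def rule_of_succession(successes: int, trials: int) -> float:
        """P(next observation is a success), given a uniform prior on the bias."""
        return (successes + 1) / (trials + 2)

    # The sun has risen on every one of the last 10,000 recorded days:
    print(rule_of_succession(10_000, 10_000))   # ~0.9999 -- high, but not 1.0
    ```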

    Abduction is about making inferences to the best explanation based on Occam’s razor and the notion of explanatory coherence – the idea here is that the answers most likely to be true are the simplest ones that integrate concepts into a coherent theory. Again, these conclusions aren’t certain though.

    So to sum up, the use of induction and abduction is what lets us beat Godel; the conclusions reached by these methods aren’t certain, but the methods are still entirely computational.

    So there is no reason whatsoever for thinking that humans are doing anything non-computational.

  12. 12. Hunt says:

    I have to admit, I’m closer to zarzuelazen than John Searle, but for those who haven’t seen it, here are Searle and Floridi making a long and particularly militant case:

    https://www.youtube.com/watch?v=KmEuKkV3Y8c

    Note: For anyone familiar with the argument, you’re not going to learn anything new, so save yourself the two hours.

  13. 13. vicp says:

    We know the answer is 6, or solve it as 1*6, 2*3, 3*2, 6*1; or we know that we want to retrieve an item from the other side of the room without predicting all of the intermediate calculations. Better put: nice deductive proofs and calculations are great for math books and chess programs, but that is not how we naturally think.

    Speaking of survival instinct, there is no thinker more infatuated with the survival instinct than Sam Harris, whether he admits it or not, in his critique of religion. Fascinating listening if you are looking for the survival instinct in his thinking:

    https://youtu.be/p8TDbXO6dkk

  14. 14. zarzuelazen says:

    Exactly right, vicp, we aren’t using deduction most of the time, so Godel limitations don’t apply.

    Even for mathematical insight, deduction isn’t what mathematicians are usually doing, except at the end.

    The great mathematician Gauss had notebooks filled with *numerical* calculations (non-deductive methods). A lot of math is experimental, based on spotting patterns in the results of numerical calculations – so math isn’t really that much different from other sciences: it uses a mixture of theoretical insights *and* experimental (numerical) data.

    Since the Godel limitation applies only to deduction, it’s not a real limitation on fully general reasoning methods.

  15. 15. Peter says:

    these non-deductive methods are still computational,

    I don’t think so. A formalisation of induction would surely refute Hume (and possibly Turing) and generally be a pretty big deal, wouldn’t it?

    Yes, you can get a human to spot the interesting patterns, or notice the unexpected problem, and then write in procedures to deal with them. But only after the fact.

  16. 16. Michael Murden says:

    Peter, maybe you and Zarzuelazen should get together and hash out what you mean by ‘computation.’ Just out of curiosity do you think what the neural nets you wrote about a few posts ago do is computation?

  17. 17. Jochen says:

    A formalisation of induction would surely refute Hume (and possibly Turing) and generally be a pretty big deal, wouldn’t it?

    In fact, induction can be formalized; and upon doing so, one sees that it’s not computable (so the formalization isn’t effective). This uses algorithmic information theory in order to arrive at a universal probability distribution, which associates a prior probability to any given object; however, that probability is uncomputable (even if one stipulates—as one must for the approach to work—that the world follows computable laws). Nevertheless, one can find approximations, and even prove that in the limit, agents based on this principle (such as Hutter’s AIXI) will perform asymptotically as well as special-purpose problem solvers on arbitrary problems.
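
    For concreteness (my gloss, not part of the original comment): the universal distribution referred to is Solomonoff’s, which weights each object x by the programs that produce it on a universal prefix machine U,

    $$ m(x) \;=\; \sum_{p\,:\,U(p)=x} 2^{-\ell(p)} $$

    where ℓ(p) is the length of program p in bits. The sum is uncomputable because deciding which programs halt with output x runs into the halting problem; it can only be approximated from below, which is why agents such as AIXI are idealisations rather than runnable algorithms.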

    However, that humans take recourse to non-deductive reasoning methods is not an argument against Gödelian strategies, since the way humans derive the truth of the Gödel sentence is manifestly deductive—for a system F with Gödel sentence G, we have (roughly):

    1. F does not prove G (by Gödel’s argument)
    2. F is consistent
    3. G is ‘F does not prove G’

    From these, the conclusion that G is true follows deductively. However, this deduction can’t be carried out within F, since (by the second incompleteness theorem) F does not prove that F is consistent (and the premise is necessary: if F were not consistent, it would prove everything, including G, and hence G – which says that F does not prove G – would be false).
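
    Spelled out a little more formally (my rendering of the three premises above, writing Con(F) for ‘F is consistent’ and Prov_F for F’s provability predicate):

    $$
    \begin{aligned}
    &(1)\ \ \mathrm{Con}(F) \rightarrow \neg\,\mathrm{Prov}_F(\ulcorner G\urcorner)
      &&\text{(formalised first incompleteness theorem; provable in } F\text{)}\\
    &(2)\ \ \mathrm{Con}(F)
      &&\text{(assumed; not provable in } F\text{, by the second theorem)}\\
    &(3)\ \ G \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G\urcorner)
      &&\text{(diagonal lemma; provable in } F\text{)}\\
    &\;\therefore\ \ G
      &&\text{(from (1)–(3), but only in a system that can assert (2))}
    \end{aligned}
    $$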

    Hence, deducing the truth of G requires the ability to reason outside of F; but this violates a key premise of the argument, since it was stipulated that F describes the reasoning capacities of human beings. But if it, in fact, does, then we can’t derive the truth of the second premise, and consequently, can’t establish the truth of G. Hence, the argument is circular: we can derive the truth of a Gödel sentence if we can carry out reasoning outside of the system it’s the Gödel sentence of; but then, the argument simply doesn’t tell us anything about whether human reasoning capacities can be described by means of a formal system.

  18. 18. TonyK says:

    Any decent chess computer would refuse to touch this problem, on the grounds that the board is illegally placed: the bottom left square has to be (logical) black.

    [Probably my fault, the result of the way I reduced the original to monochrome… Peter :(]

  19. 19. zarzuelazen says:

    Penrose is about 20 years behind in his knowledge of the AI field apparently.

    The game of ‘Go’ is massively more complex than chess, and in fact ‘Go’ is the strongest possible test of pattern recognition (induction).

    The leading machine learning methods are basically an approximation of induction (or probability theory). These entirely computational methods had no trouble at all beating the top Go player Lee Sedol in the famous match with DeepMind’s AlphaGo last year, and demonstrated super-human pattern recognition abilities.

    The field of mathematics doesn’t progress in the way Penrose thinks. As I mentioned, it’s actually a *combination* of theory AND numerical calculations. For instance, there is no deductive proof of the Goldbach conjecture – the evidence that it’s true comes not from deduction, but from simple empirical calculation to test it.

  20. 20. Hunt says:

    Jochen,
    I always meant to ask in similar contexts:

    The interpretation of the text or of the machine states isn’t intrinsic, but at least to some degree arbitrary, in contradistinction to mental states, which at least appear to have definite content.

    Apart from the fact that you don’t seem very sure about this, what makes them appear to have definite content?

  21. 21. Peter says:

    @Michael Murden: frankly not sure whether neural nets might involve computational processes subserving others that are not fully computational. What do you think?

  22. 22. Jochen says:

    Hunt:

    Apart from the fact that you don’t seem very sure about this, what makes them appear to have definite content?

    Well, to me, the simplest answer would be that they just *have* definite content—i.e. that my thinking about chess simply is about chess (although it doesn’t work quite that simply on my model). It certainly doesn’t seem to be the case that I could ‘interpret’ my thinking about chess differently; however, it’s the basic characteristic of anything information-based that it can be interpreted in arbitrary ways (a string of bits gives you no hint regarding what it’s about, whether it’s a game of chess or a letter to grandma).

    But then, there’s always someone who pipes up that this is just some sort of illusion, or metacognitive error, or what have you—hence the qualifier (which also serves as a reminder to the various stripes of eliminativists that this appearance is a bit of data their theories need to account for, which they typically don’t seem to do a very good job of).

  23. 23. Hunt says:

    Jochen, Thanks for the reply. I was afraid the question might be interpreted as dumb (well, maybe it is, but probably not much dumber than related questions). Since it seems the appearance of certain content is brushed over too quickly, I thought it might be useful to pause on it for a moment.

    It certainly doesn’t seem to be the case that I could ‘interpret’ my thinking about chess differently

    No, but you could be wrong about chess. For instance, you might not know that castling is a move. Would you then still be playing chess? You might be playing something you definitely intend, but what designates chess (the real one)? A set of rules.

  24. 24. Jochen says:

    Hunt:

    No, but you could be wrong about chess. For instance, you might not know that castling is a move.

    Sure. But that doesn’t make the content of my thoughts any less definite; it simply makes me wrong when I call it ‘chess’, when it’s actually ‘chess-without-castling’. In fact, the possibility of such error implies that it is definite: there’s no objective sense in which either of us would be wrong if I translate the string ‘01011001’ as ‘Y’, while you translate it as ’89’; but if I think chess doesn’t include the castling rule, then I’m objectively wrong about that.
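
    (To make the two readings concrete, a trivial sketch of my own:)

    ```python
    # One bit pattern, two equally legitimate interpretations; nothing in the
    # bits themselves selects one reading over the other.
    bits = "01011001"
    as_number = int(bits, 2)       # 89
    as_character = chr(as_number)  # 'Y'
    print(as_number, as_character)
    ```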

  25. 25. Hunt says:

    Jochen,
    At least mistakes like this tell us we’re not dealing with something like ideal Platonic forms. The only reason you might be wrong is due to the concrete definition, or by a social convention. A person can live an entire life utterly misinformed about certain things, yet society still functions. Probably this is true of all things. Perhaps “dog” means something slightly different to me than to you. As long as there is enough semantic overlap, the world keeps turning.

    I know the “semantic network” idea has been rehashed before, but it continues to be the case that what something means must (I would say MUST) stand in relation to other things. Indeed, it’s hard to see how it could be otherwise. If my notion of “dog” isn’t determined by memory of dogs, first hand experience with dogs, knowledge of mammals, etc., then what could it be based upon? If I’m wrong about semantic relationships that determine dog, then I’m wrong about dog.

    When we hold in mind “dog” the brain is exercising its amazing capacity to turn experience and knowledge into the integrated intention that points to dog. If any eliminativism should be invoked I think it should be here. We don’t hold everything in mind when we intend “dog”. That, I think, is an illusion. Therefore intention seems less mysterious to me than it does to others, but still pretty mysterious.

    However,

    In this light it doesn’t seem so hard to imagine a computer, following the same relationships, formulating a similarly accurate depiction of “dog”, and prone to the same mistakes.

  26. 26. Jochen says:

    Hunt:

    At least mistakes like this tell us we’re not dealing with something like ideal Platonic forms.

    Not that I think we do, in any way, but I’m not sure what your argument is, here—the error is in the fit of mental content and world, not in the content itself: I could view a cylinder side-on and conclude it’s a square, and hence, um, ‘partake’ mentally of platonically ideal squareness, instead of the accurate cylindricality; but that wouldn’t make said squareness any less platonically perfect. (But all in all, I think Platonism falls to the same pitfalls as Cartesian dualism does.)

    However,
    In this light it doesn’t seem so hard to imagine a computer, following the same relationships, formulating a similarly accurate depiction of “dog”, and prone to the same mistakes.

    Yes, this is the reply that always comes up on this issue. I genuinely fail to see why, though: the impossibility of imbuing symbols with inherent meaning doesn’t become any less impossible, just because you have more symbols with whatever ‘relationships’ between them. The problem is exactly the same as trying to decipher the meaning of a page of coded text by adding further pages of coded text—it buys you precisely nothing, simply because the meaning isn’t contained in the coded text, but rather, in the combination of the coded text and its intended interpretation. Without it, there simply is no meaning there; and it’s exactly the same with computers.

    You can establish the meaning of some symbols via their relationships exactly if you can design a ‘universal language’ which is decipherable for anybody without any prior knowledge. But that’s clearly impossible.

    And mathematically speaking, the only information contained within a set of tokens and their relationships is just the number of tokens: the relationships between them simply tell you nothing about the meaning of the tokens.

  27. 27. Hunt says:

    Jochen,
    The symbols themselves don’t do anything. A program manipulating symbols does “do” something. You are going to respond that what the program does is open to interpretation. That becomes less and less tenable with complexity, but you’re going to say it becomes more and more tenable. This is a point of disagreement. (I just spared us two days of back and forth).

    I would say that the ambiguity of what a society of machines is up to is roughly comparable with the ambiguity of whether we’re playing real chess or chess w/o castling. At least there is a practical/nontrivial argument here.

  28. 28. Jochen says:

    Hunt:

    A program manipulating symbols does “do” something. You are going to respond that what the program does is open to interpretation.

    Well, it’s not quite that it’s open to interpretation; rather, it’s that it needs interpretation to actually do anything (besides just following the laws of physics in its evolution). So even if the interpretation were unique, that doesn’t mean no interpretation is needed; hence, even if additional complexity helped to narrow down possible interpretations (which is just false, mathematically), this wouldn’t touch on the basic problem.

    Even if the only possible interpretation of the symbols (physical machine states) ‘2’, ‘3’, ‘*’ and ‘6’ were as the corresponding numbers and the operation of multiplication, it would still have to be so interpreted in order for a machine to actually perform that operation; because without the interpretation, it simply follows a succession of physical states, none of which are the numbers 2, 3, or 6. Without the interpretation, what a computer does is just the same as what a stone does rolling down a hill; just as, without interpretation, symbols simply don’t refer to anything.
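
    (A standard concrete illustration of this point, my example rather than anything in the comment: the very same physical truth table computes AND under one labelling of its two states and OR under the complementary labelling.)

    ```python
    # The same fixed device (a 2-input table over physical states 'lo'/'hi')
    # is an AND gate under one labelling of its states and an OR gate under
    # the other; the physics alone doesn't decide which function it computes.
    TABLE = {("lo", "lo"): "lo", ("lo", "hi"): "lo",
             ("hi", "lo"): "lo", ("hi", "hi"): "hi"}

    def run(a: bool, b: bool, labelling: dict[bool, str]) -> bool:
        """Feed booleans through the device under a chosen state labelling."""
        inv = {v: k for k, v in labelling.items()}
        return inv[TABLE[(labelling[a], labelling[b])]]

    AS_AND = {True: "hi", False: "lo"}   # read 'hi' as True: device behaves as AND
    AS_OR  = {True: "lo", False: "hi"}   # read 'lo' as True: same device behaves as OR

    for a in (False, True):
        for b in (False, True):
            print(a, b, "->", run(a, b, AS_AND), "(AND reading),",
                  run(a, b, AS_OR), "(OR reading)")
    ```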

  29. 29. Hunt says:

    Even if the only possible interpretation of the symbols (physical machine states) ‘2’, ‘3’, ‘*’ and ‘6’ were as the corresponding numbers and the operation of multiplication, it would still have to be so interpreted in order for a machine to actually perform that operation;

    If that calculation were done in service to some other process, say figuring out somebody’s taxes, would you say the operation is as meaningless as if it’s done alone?

  30. 30. Jochen says:

    Hunt:

    If that calculation were done in service to some other process, say figuring out somebody’s taxes, would you say the operation is as meaningless as if it’s done alone?

    Yes, of course. But I think I see what you’re driving at: the elements of the world are not open to interpretation—a tax form is a tax form, a dog is a dog, and so on; so the computation going on inside a machine filling out tax forms, or in a robot walking the dog, can be rooted in this real-world definiteness, sort of ‘inheriting’ it for themselves. It doesn’t seem to make sense to describe a dog-walking robot as carrying out a computation corresponding to a chess program.

    But the problem with such stories is that they fallaciously mix two levels of description: the computation allegedly going on within the robot mind, and the already-interpreted elements of the real world. Here, I say ‘already interpreted’ because dogs, tax forms, trees, clouds and so on are not simply elements out there in the world; they are aspects of the way we conceptually coordinatize that world. The story of the robot walking the dog is told from the perspective of somebody to whom things like ‘dog’, ‘robot’, and so on, have conceptual meaning—to whom these things are already interpreted. It’s this interpreter’s perspective that grounds what apparent meaning there is within the computation performed by the robot.

    Eliminating all the conceptual access to the elements of the world from the tale makes it obvious that this does not provide a way to ground the meaning of the computation, since it already depends on grounded meaning. All of the meaning in the tale simply comes from the perspective of its narrator, a perspective which we can’t help but assume, without noticing we do so—which is what makes the problem so pernicious.

    Alternatively, to make a consistent tale, you could try simply describing it at a fully physical level: certain patterns of sensory data excite certain circuits, which excite other circuits, which drive motors and actuators; but at this level, you have no need to talk about computation or mind at all, and have nevertheless completely explained the robot’s performance.

    So, you can tell the tale at either of two levels: in intentional terms, with explicit recourse to concepts, reference, and meaning—in which case, it’s no wonder that the robot’s computation appears to mean something, but only because there is meaning in the narrative from the beginning—or in basic physical terms (note that I don’t intend here to make it seem as if intentional terms were non-physical; I’m talking about different modes of description, not different modes of being), in which case nothing like meaning etc. will be at all necessary for the tale.

    But what you can’t do—and what is the basic switcheroo behind such attempts to ground meaning in the alleged ‘real world’—is mix these two levels of description in order to make it seem as if the conceptual elements just ‘bleed over’ into the robot’s mind, because then, you’re simply engaging in circular reasoning. This is surprisingly hard to avoid, and the error is difficult to spot, simply because conceiving of things in terms of reference and meaning is just so natural to the human mind; but I think overcoming this error is essential to start and formulate a true theory of the mind.

    One thing that helps is to try and imagine the tale from the point of view of another robot, to whom the dog, the tax form, clouds and whatever else are themselves just particular patterns of bits; since those need to be interpreted in order to refer to anything themselves, they don’t help with the interpretation of the computation that the tax form filling robot performs (on pain of incurring a vicious regress).

  31. 31. Hunt says:

    This is surprisingly hard to avoid, and the error is difficult to spot, simply because conceiving of things in terms of reference and meaning is just so natural to the human mind; but I think overcoming this error is essential to start and formulate a true theory of the mind.

    The question is what you have left after throwing out a generalized system of symbol processing. If we go digging into the brain we find neurons, synapses, neurotransmitters, etc. This is hand-waving, but in general terms these are 21st-century versions of Leibniz’s mill. There is no more semantic meaning in a neurotransmitter molecule than there is in a string of bits, or a gear on an axle; yet the brain does seem to accomplish the task.

    I’m not going to touch the Hard Problem, and I would be quite satisfied with a mere simulation of how the brain integrates information. In fact, I would be satisfied with Google results that appeared to grasp the meaning of my queries. If these would still be meaningless, perhaps I can tempt you to consider that they may still capture, roughly, brain activity.

    If I say to you “A Disney mouse with large ears” and the name “Mickey Mouse” comes to your mind, the process by which the name is accessed might ultimately be rote symbol processing. Speculatively, if that process were replaced with an interface to a computer, the result would have no meaning until you interpreted it. However, elucidation of the process would be no less pertinent to the operation of the mind. It’s for reasons like this that I object to Searle’s objection to AI.

    And of course this is but one example. Most of the operations of our brains are unconscious, observer/interpretation-independent. I hate to say it, but Searle seems to obsess over the Cartesian Theater. Not that I think this is wrong. I think “the observer” is a profound mystery, as is the whole of phenomenology. It’s just not all we should be paying attention to.

  32. 32. Hunt says:

    I’m not sure if you slogged through the talk I linked in 12 (you probably don’t want to), but Searle would no doubt speak of “the brain’s biochemical processes,” as if there’s a bit more magic in that than talking about gears and cogs. There’s no magic in chemistry (barring a fundamental discovery, Cf. Penrose).

  33. 33. Jochen says:

    Hunt:

    There is no more semantic meaning in a neurotransmitter molecule than there is in a string of bits, or a gear on an axle; yet the brain does seem to accomplish the task.

    Yes, because there is a crucial difference between neurotransmitters, gears and axles on the one hand, and bit strings on the other: the former don’t depend on interpretation; they simply are just what they are. Although it’s not really right to say that they are, e.g., gears and axles—gears and axles aren’t part of the world, but merely, part of how we coordinatize the world using concepts; different coordinatizations are possible where one wouldn’t find any talk of ‘gears’ and ‘axles’, but which nevertheless describe the same physical matters of fact.

    In contrast, a bit string is just a pattern of differences within some medium—say, a pattern of black and white marbles. The marbles, once again, just are what they are, but what the bit string they realize means is something dependent on the observer; consequently, bit strings can simply never suffice to underwrite an explanation of the observer themselves, since without an observer, there’s really just the pattern of black and white marbles. Information isn’t a fundamental quantity of the world, because it’s only something that arises due to the presence of minds; no mind, no information. Likewise, there is no computation out there in the world without a mind interpreting the behavior of a certain physical system as standing for something else.

    Basically, computers are simply something like universal modeling clay. Take, for instance, a small-scale model of the Venus de Milo: you can use this as a representation of the real thing, in order to, say, measure the ratio of the distance between the eyes to the width of the mouth—if it is a faithful model (i.e. if the modelling relationship holds for the domain you’re interested in studying), this gives you information about that relationship for the real thing. But in order to do so, you need to interpret that small clay figurine as standing for the real Venus: in the absence of that interpretation—say, if you don’t know the original, and only have access to the ‘model’—it doesn’t tell you anything about the original; it simply is what it is, just as the original is what it is.

    This relationship obtains between patterns of information (or more accurately, patterns of differences) and what we choose them to represent. A computer is special in the regard that it can be interpreted as basically anything (as long as it has enough states to bear the required structure). That makes them useful; it also makes immediately clear that something like explaining the mind on the basis of computation, or ‘simulating’ the world, are just misunderstandings: just as the small clay model of the Venus de Milo isn’t itself the Venus de Milo, but just a different small clay figurine that may be interpreted as standing for the Venus de Milo, so is the pattern of electronic excitations within a computer not a world, or a mind, but something that may be interpreted as standing for a world, or a mind; but in itself, it’s simply a pattern of electronic excitations.

    Hence, it’s from such things—patterns of excitations, gears and axles, neurotransmitters, and so on, i.e. from things as they really are in themselves—that one must build minds; not from any alleged computation they might perform, since as soon as we say ‘x computes’, we say ‘x is being interpreted as (something)’, which is inevitably circular. A way must be found to make these things meaningful to themselves, such as to eliminate the dependence on outside observers; that’s what I try to do with the von Neumann construction.

    In fact, I would be satisfied with Google results that appeared to grasp the meaning of my queries.

    The problem is that there is simply no such thing as a ‘Google result’ out there in the world, without you interpreting a particular pattern of lit pixels on a screen as ‘Google results’. Computational entities are not part of the world; they’re part of the way we conceptualize the world. Unless this distinction is properly made and scrupulously upheld, no theory of the mind can ever get off the ground.

    There’s no magic in chemistry (barring a fundamental discovery, Cf. Penrose).

    Agreed: there is no magic in chemistry (or gears and cogs, or neurotransmitters). But there is mind-independent reality, there is definiteness, which is exactly what computation lacks. And that’s all that’s needed.
