The recent victory scored by the AlphaGo computer system over a professional Go player might be more important than it seems.

At first sight it seems like another milestone on a pretty well-mapped road; significant but not unexpected. We’ve been watching games gradually yield to computers for many years; chess, notoriously, was one they once said was permanently out of the reach of the machines. All right, Go is a little bit special. It’s an extremely elegant game; from some of the simplest equipment and rules imaginable it produces a strategic challenge of mind-bending complexity, and one whose combinatorial vastness seems to laugh scornfully at Moore’s Law – maybe you should come back when you’ve got quantum computing, dude! But we always knew that that kind of confidence rested on shaky foundations; maybe Go is in some sense the final challenge, but sensible people were always betting on its being cracked one day.

The thing is, Go has not been beaten in quite the same way as chess. At one time it seemed an interesting question whether chess would be beaten by intelligence – a really good algorithm that sort of embodied some real understanding of chess – or by brute force; computers that were so fast and so powerful they could analyse chess positions exhaustively. That was a bit of an oversimplification, but I think it’s fair to say that in the end brute force was the major factor. Computers can play chess well, but they do it by exploiting their own strengths, not through human-style understanding. In a way that is disappointing because it means the successful systems don’t really tell us anything new.

Go, by contrast, has apparently been cracked by deep learning, the technique that seems to be entering a kind of high summer of success. Oversimplifying again, we could say that the history of AI has seen a contest between two tribes; those who simply want to write programs that do what’s needed, and those who want the computer to work it out for itself, maybe using networks and reinforcement methods that broadly resemble the things the human brain seems to do. Neither side, frankly, has altogether delivered on its promises, and what we might loosely call the machine learning people have faced accusations that even when their systems work, we don’t know how, and so can’t consider them reliable.

What seems to have happened recently is that we have got better at deploying several different approaches effectively in concert. In the past people have sometimes tried to play golf with only one club, essentially using a single kind of algorithm which was good at one kind of task. The new Go system, by contrast, uses five different components carefully chosen for the task they were to perform; and instead of having good habits derived from the practice and insights of human Go masters built in, it learns for itself, playing through thousands of games.
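
The “learning for itself” part can be illustrated in miniature. The sketch below is emphatically not AlphaGo’s method (which trains deep neural networks on millions of positions); it is a toy tabular learner for the game of Nim, invented here purely to show what learning through self-play, rather than built-in expert habits, means:

```python
import random
from collections import defaultdict

# Toy illustration of learning by self-play: a tabular learner for Nim
# (players alternately take 1 or 2 stones; whoever takes the last stone
# wins). Nothing like AlphaGo's neural networks -- just the same idea
# of improving by playing games against yourself.

def train(episodes=20_000, start=7, alpha=0.1, eps=0.2):
    Q = defaultdict(float)          # (stones, move) -> value for the mover
    rng = random.Random(0)
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            legal = [m for m in (1, 2) if m <= stones]
            if rng.random() < eps:  # explore occasionally
                move = rng.choice(legal)
            else:                   # otherwise play the best move so far
                move = max(legal, key=lambda m: Q[(stones, m)])
            history.append((stones, move))
            stones -= move
        # The player who made the last move won; credit moves alternately.
        reward = 1.0
        for stones, move in reversed(history):
            Q[(stones, move)] += alpha * (reward - Q[(stones, move)])
            reward = -reward
    return Q

def best_move(Q, stones):
    return max([m for m in (1, 2) if m <= stones],
               key=lambda m: Q[(stones, m)])
```

After training, the learned table, not any built-in master wisdom, carries the strategy; scale the table up to a deep network and the game up to Go, and you have the general shape of the approach.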

This approach takes things up to a new level of sophistication and clearly it is yielding remarkable success; but it’s also doing it in a way which I think is vastly more interesting and promising than anything done by Deep Thought or Watson. Let’s not exaggerate here, but this kind of machine learning looks just a bit more like actual thought. Claims are being made that it could one day yield consciousness; usually, if we’re honest, claims like that on behalf of some new system or approach can be dismissed because on examination the approach is just palpably not the kind of thing that could ever deliver human-style cognition; I don’t say deep learning is the answer, but for once, I don’t think it can be dismissed.

Demis Hassabis, who led the successful Google DeepMind project, is happy to take an optimistic view; in fact he suggests that the best way to solve the deep problems of physics and life may be to build a deep-thinking machine clever enough to solve them for us (where have I heard that idea before?). The snag with that is that old objection; the computer may be able to solve the problems, but we won’t know how and may not be able to validate its findings. In the modern world science is ultimately validated in the agora; rival ideas argue it out and the one with the best evidence wins the day. There are already emerging problems of this kind, with proofs achieved by an exhaustive computational consideration of cases that no human brain can ever properly validate.

More nightmarish still, the computer might go on to understand things we’re not capable of understanding. Or seem to: how could we be sure?

58 Comments

  1. antianticamper says:

    Very nice summary.

    To expand on one of your main points, perhaps the most interesting aspect of AlphaGo is the combination of “new AI”, i.e. machine learning in the form of deep reinforcement learning, with “old AI”, i.e. a logic-based approach in the form of Monte Carlo Tree Search. Although it should be mentioned that this representative of “old AI” is only about 10 years old.
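
For readers who haven’t met it, Monte Carlo Tree Search can be sketched in a few dozen lines. This is a generic UCT implementation for a toy game (Nim: take 1 or 2 stones, last stone wins), not AlphaGo’s version – AlphaGo’s contribution is to guide the selection step and replace the random rollout with its learned policy and value networks:

```python
import math
import random

# Minimal UCT Monte Carlo Tree Search for a toy game of Nim
# (take 1 or 2 stones; the player who takes the last stone wins).
# AlphaGo guides steps 1 and 3 below with learned networks
# instead of uniform randomness.

class Node:
    def __init__(self, stones, player, parent=None):
        self.stones = stones   # stones remaining
        self.player = player   # player to move (0 or 1)
        self.parent = parent
        self.children = {}     # move -> child Node
        self.visits = 0
        self.wins = 0.0        # wins counted from the parent's perspective

def moves(stones):
    return [m for m in (1, 2) if m <= stones]

def rollout(stones, player):
    # Play uniformly random moves to the end; return the winner.
    while stones > 0:
        stones -= random.choice(moves(stones))
        player = 1 - player
    return 1 - player          # whoever just moved took the last stone

def uct_search(stones, player, iters=4000):
    root = Node(stones, player)
    for _ in range(iters):
        node = root
        # 1. Selection: descend by UCB1 while nodes are fully expanded
        while node.stones > 0 and len(node.children) == len(moves(node.stones)):
            node = max(node.children.values(),
                       key=lambda c: c.wins / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
        # 2. Expansion: add one untried move
        if node.stones > 0:
            m = random.choice([m for m in moves(node.stones)
                               if m not in node.children])
            node.children[m] = Node(node.stones - m, 1 - node.player, node)
            node = node.children[m]
        # 3. Simulation: random playout from the new node
        winner = rollout(node.stones, node.player)
        # 4. Backpropagation: update statistics along the path
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    # The most-visited move at the root is the answer.
    return max(root.children, key=lambda m: root.children[m].visits)
```

The four steps (select, expand, simulate, backpropagate) are the whole of “old AI” MCTS; everything AlphaGo adds is learned.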

  2. Witness: 07 February 2016 – Sakeel says:

    […] In the world of games and AI, recent news about a new Go AI capable of beating humans. Conscious Entities offers a summary. […]

  3. Sci says:

    Interesting stuff, especially as there seems to be increased investment in management-level AI from what I recall.

    Of course AI doesn’t need to be a conscious entity to replace most humans at their jobs. I’ve never really bought the arguments that more jobs will be created, but if the current crop of children are lucky, I’m wrong.

  4. Hunt says:

    To answer the rhetorical question, from the “Singularitarians” or “Intelligence Explosion”-ists, some of whose reasoning I do respect. The ability to self-reflect, or introspect, is an interesting line of inquiry for future AI efforts. An intelligence that can’t explain itself is less useful to society than one that can, just as with a person. To be clear, it’s not a useless thing. If an oracle suddenly appears spouting the secrets of the universe, I’m pretty sure nobody is going to tell it to mind its own business.
    Notably, self-reflection is an aspect of human intelligence, but not the defining factor. Chess masters often describe “just knowing” the next move. In other words, the reasoning behind it (and at some level, it’s got to be there) is not accessible to them. However, less expert players can recite the litany of reasons for their move, correct or not.
    I doubt machine intelligence will ever become totally beyond human understanding, for the reason that every concept can be “dumbed down” to almost any level. “You don’t really understand anything unless you can explain it to your grandmother,” Einstein supposedly said. But sure, perhaps there are levels of abstraction “out there” that are just beyond us, as for instance, you are never going to explain relativity to a dog, no matter how adept at tricks he happens to be. Still, it seems our intelligence is general purpose enough that just about anything might be explained to us in a hand-wavy way. Perhaps once we achieve a certain level of meta-level thought, by induction every concept is available to us. These are things we just don’t know at this point.

  5. Sci says:

    @ Hunt: I kind of agree. I’ve never quite understood how the God-like AIs trick humanity into surrendering control.

    I know some people mention Roko’s Basilisk but this kind of thing seems to be a problem only if you’ve really drunk Less Wrong’s Kool-Aid.

  6. Hunt says:

    @Sci,
    I think the idea is that we willingly give them control for the fruits of knowledge and benefit they can bestow, thus the imperative to make them “friendly”.

    But that’s the first I’ve heard of Roko’s Basilisk. I hope you’re proud that I’m now doomed. 🙂

  7. Hunt says:

    I should clarify that I don’t hold that we’re the smartest things that ever will be, just that perhaps there will never be an intelligence so far beyond us that we can’t relate to it at some intellectual level, as dogs cannot really relate to humans. I had an online exchange with a person who did hold this belief, that we are the terminus of intellect, which seems obviously wrong to me. We’re able to hold from five to ten variables in mind, depending on aptitude, but imagine an intelligence that could work with a hundred or a thousand or a million. What would it notice about the world that is invisible to us? But would this only be a quantitative difference, like a dog with more smell receptors, or a bat with a larger echolocation center, and not a qualitative difference? To answer the question would be to transcend our limitation. It seems the nature of the question is unanswerable. Even if we met such intelligence we might not recognize it beyond some incomprehensible process of nature. Or, more likely perhaps, we would relate to it as dogs do to humans, acting at a human level but always knowing it thought things that were beyond us.

  8. Jochen says:

    Hunt, I think what you’re saying can be made somewhat more precise: we are universal intelligences in the sense that we can carry out any symbolic operation—i.e. explicitly simulate a universal Turing machine (if supplied with unlimited resources, say, pens and paper). Hence, if there are only computable processes in nature, we can recapitulate all natural processes—i.e. reason our way through them step by step.
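
The “explicit simulation” at issue here is a concrete, mechanical affair. As a purely illustrative aside (the machine and all names are invented for the example), a Turing machine stepper is only a few lines — each step is nothing more than a table lookup:

```python
# A minimal Turing machine simulator: "explicit simulation" means
# applying the transition table one step at a time, nothing more.
# The example machine flips every bit on its tape and halts at the blank.

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table: (state, read symbol) -> (write symbol, move, next state)
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flipper, "0110"))  # -> 1001
```

Swap in a different `rules` table and the same stepper runs a different machine; universality is the observation that some such tables simulate every other table.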

    Universality occurs when systems (in this case, for instance, our language capacities—‘language’ here understood in the sense of all of our symbolic manipulations, like mathematics, or formal languages, etc.) cross a certain threshold of complexity—all systems beyond this threshold are able to emulate one another. Dogs and other animals would then be below this threshold (at least concerning their language’s expressiveness), hence, we are at a qualitatively different level of reasoning—a dog could indeed never learn calculus, but there is nothing such that it is forever beyond our grasp (which, for instance, has implications on those kinds of eliminativism that suppose that our reasoning capacities are fundamentally limited in some way, unable to fully account for themselves, say, which accounts for why they appear mysterious to themselves).

    Of course, universal machines are really just an idealization—every human being, as a finite entity, will eventually run out of resources. But the process of finding answers isn’t limited to a single human being, ever since we’ve started making notches into bones and sticks to aid our reasoning capacity some 30,000 years ago. Also, while universal machines are all equivalent in the kinds of task they perform, they may differ vastly in their efficiency—your desktop PC won’t complete certain tasks in a useful amount of time, while a supercomputer may whizz through them in seconds. So, we’re not the end-all and be-all in terms of intelligence, but any greater intelligence will only outstrip ours quantitatively, not qualitatively.

    This has at least two interesting (well, to me) implications for the philosophy of mind. One is with respect to free will: a universal system’s actions can in general only be predicted by explicit simulation—there are no computational shortcuts. So, in this sense, every explanation for why you make a certain decision must include a full simulation of you making that decision—it’s not reducible to anything else; hence, you (or some simulated equivalent) making that decision is a necessary cause of you acting a certain way, thus opening the door to a kind of compatibilism.

    Furthermore, it may not be the case that all processes in nature actually are computable. A prime candidate would be the generation of (genuine) randomness by quantum-mechanical phenomena, since no computable process can produce genuine randomness. If that’s the case, then there are non-algorithmic resources out there in the world; and, if supplied with such a non-algorithmic resource, even an ordinary computer can compute functions beyond the capacities of any Turing machine. If this is the case, then it might be that our (algorithmic) capacities of explanation fail to account for the (non-algorithmic) performance of our brains, augmented by environmental randomness. Thus, we’d have a necessary ‘explanatory gap’ when trying to account for the whole of the phenomena our brains give rise to—such as, e.g., consciousness.

    This deals swiftly with a lot of the anti-physicalist arguments: Mary genuinely learns something new, because her (algorithmic) knowledge of the visual system fails to account for the (non-algorithmic) processes that occur when actually exposed to a given stimulus; zombies are conceivable only because we can’t derive the facts of conscious experience from the physical facts, since doing so would require us to perform deductions not available to any computational system, but they’re not metaphysically possible; and so on. It’s not that the facts about consciousness outstrip the physical facts, but merely that our ability to deduce them is limited by our computational reasoning capacities.

    Anyway, it’s at least an option I find interesting to pursue…

  9. john davey says:

    Physically it makes no difference what the algorithm is. A computer’s just a chip with a micro-instruction set. It executes those very primitive instructions (add 1 to 0, move the contents to 0x00ffa9cc in memory etc) in total oblivious ignorance to the start or end point of any higher level information process.

    The ‘complexity’ of the algorithm has no bearing whatsoever outside of the software author’s or users’ context : it is totally irrelevant to a microchip, for whom a process that counts to ten has exactly the same complexity as a process that beats Garry Kasparov at chess.

    In fact when referring to actual computer code it is utterly meaningless to talk of complex software. From examination of machine code it is completely impossible to determine the purpose or goal of software, hence impossible to determine if that purpose is intricate or otherwise. It is only meaningful to talk of complex outcomes or goals of software, solely from the user’s or author’s functional perspective.

    In this case, “AI” (otherwise known as “computer programming”) has got round the issue of combinatorial explosion with trimmed algorithms, in the same way as was done for chess. In chess, as computers got more powerful, the role played by brute-force mechanisms has simply increased as time has gone on. And they’ve got more successful. The same will happen in Go, as the algorithms get better and more tuned for ‘Go’.

    In any case, ‘strategic’ algorithms still really rely on predictions of developments from ‘brute-force’ calculations. To split the two so neatly is another fiction from the AI people, or “computer programmers” as I call them.

  10. Hunt says:

    Jochen,
    I knew things might be expressed in terms of computability, but you did a much better job than I could! Of course, there is a difference between stepwise reasoning and understanding. (You’re not going to get a dog to do stepwise reasoning either.) Here is one key aspect to the Chinese room paradox, if you consider it that. Or, as Peter mentions, there are the exhaustive computational proofs that nobody can hold in mind, i.e. understand. To actually do anything with a result, you have to understand it, that is, hold it in mind.

    There is the repeated mention in the linked video about AlphaGo and in the comments above of intuitive “just knowing”. To quote myself (how arrogant!):

    “Chess masters often describe ‘just knowing’ the next move. In other words, the reasoning behind it (and at some level, it’s got to be there) is not accessible to them.”

    On second thought, how true is this? To what extent can intuition be called reasoning? It certainly isn’t symbolic reasoning. A dog probably doesn’t do a lot of reasoning, or any symbolic reasoning. “Just knowing” probably dominates its cognitive world.

  11. Hunt says:

    john davey,
    You could make the same argument about the firing of neurons. Whether a neuron fires in service of abstract thought or for a reflex reaction has no significance in the context of individual neurons. In the context of individual neurons, or coding instructions, complexity has no meaning. But nobody ever talks of complexity in those terms. Complexity is always tied to the goals or results (or “process”?) you mention.

    According to the linked article and video, AlphaGo does rely somewhat on brute force lookahead, but also on neural network pattern recognition. I’m still a bit fuzzy on just what “deep learning” denotes, whether it has more to do with search or pattern matching, or a combination.

  12. Sci says:

    @ Hunt: “You could make the same argument about the firing of neurons.”

    Doesn’t that leave us with a dichotomy of either a flavor of immaterialism or eliminativism?

  13. Sci says:

    @ Jochen: How does adding randomness make compatibilism more or less viable? I don’t accept it as anything but a kind of “let’s all pretend the emperor has clothes” situation.

    OTOH as I’ve said in the past I’m not convinced we really have a good account of causality but that’s a whole discussion outside of consciousness. 🙂

  14. Hunt says:

    Sci,
    It’s definitely a materialist position. Materialism is always eliminativism in some ways, IMO. I think it’s the opposite to “flavor materialism” as I understand it. The analogy is between computer code and neurons; you don’t know by looking at a computer instruction whether it’s in the microcontroller of a toaster oven or in AlphaGo–and you don’t know from looking at a neuron whether it’s involved in a burn reflex or thinking about materialism. (With the caveat that maybe this is false. I’m not a neuroscientist.)

  15. john davey says:

    “You could make the same argument about the firing of neurons.”

    Actually I don’t think you can, for two separate reasons.

    Firstly, on a more abstract level, no-one knows how the brain works, so any analogy between computational states and (thus far discovered) modes of activity of neural cells is simply not justified by available science.
    And that’s pretty much where the discussion should stop.

    However ..

    There is some detailed electrochemical knowledge at a certain level of the brain’s operations – but that’s about it. Compare and contrast with computers, which are functional, logical machines about which there is no ignorance whatsoever. What computers do is fully known – by design.

    Each state in a state machine bears no necessary relationship with any other state. There is no necessary connection between one line of a computer program and the next : computer states are incoherent and systematically disconnected. It makes no sense to talk of “complexity” when the underlying states of the system are incoherent by design – other than in the strictly subjective, observer-relative sense of the objectives of software author or the user.

    Any neuron however is connected physically to any other neuron in the same brain via a contiguous physical medium, which makes it more likely than not (I think it not unreasonable to say) that neuron firings are coherent and more likely to be capable of creating inherent (as opposed to observer-relative) complexity. ‘Feedback loops’ in brains could add inherent complexity, unlike computer programs where they ‘add’ nothing.

    Separate neuron firings have an inherent physical connection : state machine states are incoherent and disconnected. Neurons are connected via natural media, uncontrolled or constrained by the requirements of systematic human thinking, and having the causal powers of nature : the sequences of state machine states are determined by mathematical models, with no causal powers at all.

    Brain activity can be inherently complex and capable of change : the inherent complexity of a computer never actually alters under any circumstances.

    In short I think it’s a weak analogy.

  16. howard berman says:

    I just follow your discussions out of curiosity. My knowledge is that of a layman.
    We don’t have a full grasp of how neurons work. We have complete knowledge of how computers work or ‘think’, which is mechanistically. We know the laws of conditioned and operant learning, maybe not down to the most granular level, but at a higher level manifesting the inner workings of neurons. Both of these learning, or ‘thinking’, functions of the organism’s nervous system are mechanistic, so when somebody has compared behaviorist mechanisms of learning with the functioning of a computer, they must have found something, however much I wonder whether it was a dead end.
    I mean, did Chomsky’s takedown of Skinner ruin such analogies forever?

  17. Sci says:

    @ Hunt: I’d agree all materialism is eliminativism. IMO if computationalist arguments are correct (I have doubts), they show that thought as an actuality is not required, rather than that computers are thinking.

    @ Howard: As a fellow layman I do think you’re possibly on to something? Bruce Goldberg has a paper, “Are Human Beings Mechanisms?”, that takes a similar line of criticism (you should be able to sign up for a free trial and read it):

    https://www.pdcnet.org/pdc/bvdb.nsf/purchase?openform&fp=idstudies&id=idstudies_1999_0029_0003_0139_0152

    He extends the criticisms of behaviorism to a general mechanistic conception.

  18. Hunt says:

    john davey,

    “And that’s pretty much where the discussion should stop.”

    “In short I think it’s a weak analogy.”

    I think the analogy is weak as well, but still useful. Yes, if you think about it the analogy is terrible. Computer instructions are discrete computing operations and neurons are interconnected biological cells. Really the only useful thing is the reductionist point that computer instructions and neurons are so elemental that it seems unbelievable that complexity arises from their operation. And how that happens (if it does, alone) in the case of neurons is not understood, as you say.

    Moving on to whether computers can actually be complex: The states of a state machine are related by the definition of the machine itself, as provided, in one type of abstraction, by a computer program. If states had no relationship, then any state could follow any other, which isn’t the case in general state machines. True, any computer instruction can follow any other in a program, provided you don’t mind the incoherence (bugs) of your program. Of course, what is usually meant by “a program” is actually a list of instructions in conjunction with memory and secondary storage, as well as input and output. Programs can also read their own code and rewrite themselves. When talking about limitations, it’s important not to give computation short shrift.
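
The point that states are related “by the definition of the machine itself” can be seen in the smallest possible example — a finite-state machine whose transition table fixes exactly which state may follow which (a made-up illustration, not anyone’s real code):

```python
# A tiny finite-state machine: the transition table *is* the definition
# of the machine, and it alone relates one state to the next. This one
# accepts binary strings containing an even number of 1s.

EVEN_ONES = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def accepts(table, string, start="even", accepting=("even",)):
    state = start
    for ch in string:
        state = table[(state, ch)]  # the next state is fixed by the table
    return state in accepting

print(accepts(EVEN_ONES, "1011"))  # -> False (three 1s)
print(accepts(EVEN_ONES, "1100"))  # -> True  (two 1s)
```

Whether that table-given relationship between states counts as “naturally necessary” or merely the designer’s whim is, of course, exactly what is in dispute here.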

    It makes no sense to talk of “complexity” when the underlying states of the system are incoherent by design – other than in the strictly subjective, observer-relative sense of the objectives of software author or the user.

    Now this seems to be veering into Jochen territory, and I’ll be curious to see what he might have to say, if he returns to the discussion. He may object on the basis of “intentional automata.” If I’m correct, what you’re speaking of here is “underdetermination,” which I admit, I have not thought through. Not saying the argument is specious, just that I haven’t formed an opinion yet.

  19. john davey says:

    “The states of a state machine are related by the definition of the machine itself, as provided by, in one type of abstraction, by a computer program.”

    They can be – they don’t have to be, which is what matters. The link between one state and another is still arbitrary, because a whim of an engine designer does not create what could reasonably be described as a ‘naturally necessary’ link.

    A digital computer program is a series of 0s and 1s that change with each step. I can’t think of a reason why any one sequence of 0s and 1s need be followed by any other, unless it be by the whim of a CPU designer who insists that it be so. In practice they usually don’t – block-structured programming restricts the range of possible sequences. But the important point is that constraint is not inevitable.

    “When talking about limitations, it’s important not to give computation short shrift.”

    You prove my point that we can forget the idea of complexity – or perhaps my point wasn’t made so well. Programs “rewriting themselves”, “input”, “output”, “secondary storage” – all these terms are, every single one of them, observer-relative terms. They are defined from the perspective of the user. A CPU adds numbers, and moves the contents of shift registers from one memory address to another. That’s it. It gets no more complex than that, ever.

    Functionally of course it gets more complex, from the user or author’s view. We can talk of “one program rewriting itself” as it’s simpler than referring to “numerous separate instantiated instances of a program X, written by human being Mr Y, that reads the file from location Z, modifies it according to rules determined by Mr Y, then writes the contents of that file (as determined by Mr Y) back to the original location Z”.

    It’s Mr Y who decides the complexity. Mr Y who authors the process. Look at the machine code and what is it doing? Moving the contents of one shift register to another. The same as if it had been printing “Hello World”.

    “If I’m correct, what you’re speaking of here is “underdetermination,””

    I think you’re probably not, as I don’t see what underdetermination has to do with it. I am talking about computers, remember – not brains. There are no ambiguities, scientifically, in any way whatsoever with computers. No theories, no ambiguities, no dual possibilities. We know what they do all the time – by the design of them.

  20. Jochen says:

    Hunt #10:

    Of course, there is a difference between stepwise reasoning and understanding. (You’re not going to get a dog to do stepwise reasoning either.) Here is one key aspect to the Chinese room paradox, if you consider it that. Or, as Peter mentions, there are the exhaustive computational proofs that nobody can hold in mind, i.e. understand.

    Hmm, that’s a good point. But is it necessary to hold all the steps in the mind in order to understand the whole? I know there are proofs that I have worked through step-by-step, and in the end, I certainly felt I understood the result—but I’m not sure that I ever held all of the steps in mind as a whole. It’s like going through a maze: at any given point, only some portion of the way is visible; but that doesn’t preclude you from getting to the end, and you don’t need to have the whole way within your purview to do so. This does entail some blind turns and backtracking, maybe, but once you reached the end, you reached the end. Even with a map, you typically only look at the area you’re at right now, with only enough notion of the complete journey as to help you decide which turn to take next.

    So take one of those exhaustive proofs—the most famous one being that of the four-color theorem. Neither I, nor any other human mathematician, could hold all of it within their mind simultaneously—but I understand it (well, with some study, perhaps) in the sense that I know exactly what one has to do to arrive at the result; there is no central mystery in there anymore. Likewise, actually, with complicated proofs that human mathematicians have come up with (I’m never going to be able to follow Andrew Wiles’ proof of Fermat’s theorem in all its gory details, for instance). But likewise, there is no remaining ‘explanatory gap’—no qualitative, just a quantitative difference. Something that no single human being could come up with (well, in the exhaustiveness case, anyway), but nevertheless something that humanity as a whole, suitably extended with additional resources, can find out.

    Does this make sense to you? (And incidentally, I don’t think I got your point re the Chinese Room; could you try and clarify?)

    Sci:

    How does adding randomness make compatibilism more or less viable? I don’t accept it as anything but a kind of “let’s all pretend the emperor has clothes” situation.

    No, randomness doesn’t do a thing here. It doesn’t make a decision any more free; it simply makes it random. What I have in mind is that a decision is genuinely yours, due to you, originates from you, whatever, if there is no set of causes the decision can be reduced to that does not include you, in some fashion. So, take the question of whether a given (universal) computer ever reaches one certain configuration or another, and based on the answer to that question, have a ham sandwich, as opposed to a tuna melt. There is no shortcut to making that decision but to explicitly simulate that computer’s evolution, up to where it reaches either configuration; thus, you can’t get rid of the computer in that case. All stories that explain how that decision was made have to include that computer’s evolution in some form. Moreover, there is ultimately no more reason to why that computer reached this configuration other than that it simply did (otherwise, we could use that reason in order to answer whether it will reach that configuration, which entails being able to solve the halting problem).

    Similarly, if we are universal systems, then all stories that explain why we made a certain choice will in the end contain us making that choice as a brute fact; there is nothing else it could be reduced to. So no story exists in which it’s not the making of that choice that ultimately determines what was chosen.

    This is not a libertarian notion of free will, not a ‘could have done otherwise’; but it ensures that our choosing is a genuinely effective notion in explaining our actions, which to me seems to be about as free as you can get.

    Hunt #18:

    Now this seems to be veering into Jochen territory, and I’ll be curious to see what he might have to say, if he returns to the discussion. He may object on the basis of “intentional automata.”

    Actually, I don’t think I want to object. I agree with a good portion of what john davey’s been saying, in particular, that all the notions invested into a given computation are ultimately observer-relative. Without some observer singling out an interpretation, what is being computed is simply arbitrary.

    It’s for this reason that I postulated my intentional automata as depending not on computational notions, but on physically real patterns, and actions taken by such patterns on other patterns (or themselves). It’s not that these patterns somehow instantiate some computation, but that they have a given behavior either in the cellular automaton stand-in world of my toy model, or in the real world of neuronal excitations where I hope the model, suitably generalized, will ultimately work.

  21. Hunt says:

    Jochen (and john davey),
    The point about stepwise reasoning and Chinese room is about local vs. global understanding, or as I think Searle meant, symbolic vs. semantic processing. The person in the room reading instructions and shifting symbols is like the processor manipulating symbols, or a person verifying a proof (which is akin to symbol manipulation). This extends to the actions of neurons, which can be thought of as mechanistic symbol processors, lacking understanding. Yet, like the Chinese room, in aggregate, they “understand” semantics.

  22. Jochen says:

    Hunt, I think Searle would disagree with you—he doesn’t believe the ‘systems reply’ to the Chinese Room works. I think it does—if an algorithm could understand semantics, then the room would, if considered as a whole (and if one imagines, as Searle does by way of reply, that the whole thing be ‘internalized’, so to speak, then one would simply have one brain instantiating two minds—if you ask it in Chinese what its favorite color is, its answer might well be different from when it is asked in English, for example).

    But I don’t believe algorithmic manipulations suffice for understanding, again since computational notions already are observer dependent, and thus, we can’t use them to ground the observer’s understanding. To me, the Chinese Room is then basically a red herring, when it comes to the question of whether ‘computers can think’.

    I’m also not sure how local/global understanding cuts along the same line as syntactic/semantic processing. Even if I understand just one step of a process, I still do so semantically (even though the step may be carried out by formal manipulation). And even though I understand the whole semantically, it is still just as much a collection of formal manipulations as each step is. Thus, to me, it seems that semantic understanding is really something additional to the syntactic steps carried out to arrive at a particular understanding (and of course, if my model is right, that something extra is provided by the von Neumann process).

  23. Hunt says:

    Jochen,
    Local vs. global understanding might be a bit of hand-waving deception on my part. It really depends on what role one happens to be playing. The brilliance of the Chinese room is that Searle put a human in it, not a machine. A machine being there would have underscored the rote formalism. Instead, putting a human there, having only local understanding of symbol manipulation, suggested something deeper. I agree it’s largely a red herring, and I agree with you (more or less) that the systems reply seems plausible; but in the systems reply, local understanding equates to mere symbol manipulation – the person is acting as a simple machine.

    I say “mere” symbol manipulation, but of course symbolic manipulation is what distinguishes us from dogs and is the reason why scientists thought AI was just around the corner for decades. Computers have leapfrogged most natural forms of cognition up to the highest level. Unfortunately, there’s nothing much backing it up; it’s just a facade of symbol manipulation.

    And even though I understand the whole semantically, it is still just as much a collection of formal manipulations as each step is.

    At a higher level, yes. It’s still something of a mystery just what people mean when they say they understand something, particularly when you realize that they’ve often forgotten the fundamental steps needed to derive it, or no longer have in mind many of the things it stands in relation to – yet how can a person understand something without the sustaining references to what it means? Very strange. It’s like pulling the table out from under a floating body.

  24. john davey says:

    “This extends to the actions of neurons, which can be thought of as mechanistic symbol processors”

    Do you know something that the rest of the world doesn’t ? Some science you’ve stumbled across that no-one else has ?

    Then explain why neurons should be thought of as “mechanistic symbol processors”. It’s not interesting what they could be thought of as – they could be thought of as pigeons (the traditional medieval imagery of communication systems) or telegraphic endpoints (a common Victorian analogy). They could be thought of as quarks. Or bits of cheese that mice haven’t eaten yet. Anything.

    There is a remorseless confusion in these debates between stuff and models of stuff. A model of a neuron is incapable of doing anything more than a symbol processor – but that doesn’t mean the neuron is a symbol processor. It means the model is a symbol processor. It’s not possible in the world of physics to use anything other than symbol processors as models. But let’s not confuse physics with reality.

    There are no causal constraints on the object of a model. So neurons can do things that models of neurons can’t – they can occupy space, possess mass, have causal powers. Neurons – or rather collections of them – can understand things, and cause real mental phenomena in a way that models of neurons can’t. A model of a river will never get wet. A model of a neuron will never cause mental phenomena.

    As Searle recognised later, the Chinese room wasn’t damning enough of computationalism – it never examined, for instance, the ludicrous suggestion that we all do things in steps, like a state machine, in the first place. There’s no example of it in nature, yet everybody talks about it.

  25. Hunt says:

    Or bits of cheese that mice haven’t eaten yet.

    lol. Well, no need for sarcasm. 🙂

    There are no causal constraints on the object of a model. So neurons can do things that models of neurons can’t – they can occupy space, possess mass, have causal powers. Neurons – or rather collections of them – can understand things, and cause real mental phenomena in a way that models of neurons can’t. A model of a river will never get wet. A model of a neuron will never cause mental phenomena.

    A mechanical hand controlled by a simulated reflex response can pull itself away from a stovetop. That seems pretty “causal” to me.

    The “simulation of a rain storm will never get you wet” argument falls flat with me. The passage in Peter’s book was good. (another of my attempts to implicate others in my argument:)


    “…the rain argument against simulation is not quite as strong as it seems at first sight. It’s true that a simulated rain storm does not deliver real water, and a simulated journey does not take us to Paris; but what about a simulated calculation? If we set up a simulation of someone adding two and two, and our simulation is any good, we’ll get the simulated answer four. But that is the answer; the actual answer; it’s no good claiming it’s somehow not the real four, or that the arithmetic is invalid because the simulation had no understanding and did not really calculate in the full human sense. Four just is the answer. So is it that simulating real world, material things does not produce them; but simulating informational processes can produce the real information? Yes, but there’s a deeper point about the nature of simulation. In essence, a simulation reproduces the thing simulated, but leaves out those of its properties which are not relevant to our current purposes; or if you like, it’s a reproduction of only the relevant properties. This is why a simulated sum works; the only relevant properties are the arithmetical ones that the simulation deals with. Presumably, at least: but we could have a ‘sum-doing’ simulation that, for example, merely chalked up numbers and symbols on a board at random. In a way, that would simulate the performance of arithmetic, it’s just that it simulates properties of the activity that are irrelevant for typical purposes (they might be relevant for, say, staging a play). Computers deal in information, so computer simulations tend to deal well with projects which are about information. However, if we give them a hosepipe peripheral, they can also simulate the wetness of rain perfectly well. 
The question then, might be whether human cognition is more of a material entity which needs special technology to simulate it or an informational one of the kind which a standard digital computer could handle well on its own; more like the hydration of a rainstorm or more like the calculation of a sum. Many of the key activities of consciousness – thinking, believing, reasoning – seem to have information processing as an essential feature, so it is a natural intuition, putting Penrose’s logical case aside, that they would be handled well by a computer.”

    The Shadow of Consciousness: A Little Less Wrong

  26. Hunt says:

    Largely, calling neurons “symbol processors” seems fair game to me due to the syntax/semantics dichotomy. Few people regard them as doing anything semantic, except perhaps for those who think a carrot can scream. Few think neurons deal with meaning. They have coded input at dendrites and emit coded output along axons, accepting the oversimplification this undoubtedly is. That is approximately symbol processing. At least, I don’t think it’s an outlandish working hypothesis.

  27. Jochen says:

    Hunt, I agree that the ‘you don’t get wet from a simulated rainstorm’-argument doesn’t really hold any water (heh). If one believes that the simulation is as good as the real thing (if one is, for instance, of the opinion that the universe might be a simulation, or some such hypothesis), then one could just hold that the user of the simulation doesn’t have the right causal connection to the simulated rain to get wet—iow, that if you were instead an element of the simulation yourself, you would just get as wet as in a ‘real’ rainstorm.

    The real problem, as I see it, is that a simulation of a rainstorm isn’t a simulation of a rainstorm out of itself—it needs an observer’s interpretation to become that. The same thing goes for a program that carries out arithmetic: the program does not carry out the operation ‘2 + 2 = 4’ intrinsically, but rather, its output must be interpreted as pertaining to that operation. At the bottom, it’s just voltage patterns being shuffled around, whose interpretation might just as well be ‘3 + 3 = 6’, or indeed, virtually anything else.

    So the difference between stuff and models of stuff is that stuff is what it is (Peter’s haecceity, or something near enough), while models have to be interpreted as bearing any relation to what is being modeled. So using the model to try and explain the real goings-on in brains invokes the question: who (or what) does the interpreting? And that way, of course, the homunculus looms.

  28. Alec says:

    A speculation: From what little I understand of “quantum computing” it seems that (at least in some applications) it could be indistinguishable from intuition. In effect, the computer would appear to be capable of “just knowing.” This also suggests that there could eventually be a form of AI that humans can only weakly emulate and never fully comprehend.

  29. Hunt says:

    Jochen,

    The real problem, as I see it, is that a simulation of a rainstorm isn’t a simulation of a rainstorm out of itself—it needs an observer’s interpretation to become that.

    How much of this is just an artifact of the word “simulation,” which implies an observer? If you just consider it a “creation” doesn’t the problem disappear?

  30. Hunt says:

    Alec,
    I’m not really qualified to comment, but you mean “just knowing” would be some kind of superposition state?

  31. john davey says:

    Hunt

    “A mechanical hand controlled by a simulated reflex response can pull the hand away from a stovetop. That seems pretty “causal” to me.”

    That is because the mechanical hand is a physical object and can move. Do you think that it would move in the absence of a mechanical hand ? That the model of the movement of the hand – that the mechanical hand is driven from – would “move” regardless of the presence of the mechanical hand ? That would be taking AI to supernatural heights..

    Although my point there was really about mental phenomena. Matter causes mental phenomena, but in a way that doesn’t allow physics models to predict it (at least in its current state). But the fact that physics can’t predict it is causing some to think that the mental phenomena don’t exist or – in the words of Peter’s quote about the rainfall simulation – are purely “informational”, allowing modelling and representation. I think this is hubris, and a triumph of still-prevalent 18th-century Enlightenment drivel. The very idea that human cognition might have scope and limits – that physics might not be able to predict everything – is not even seriously entertained. I even think there are various alleged atheists who believe, in a peculiar way, that physics is not man-made, but some kind of mathematical divinity.

    Peter’s quote seems to sum up the problem for me. It’s nothing really to do with the rainfall simulation, it’s more to do with the general computationalist issue. You either think a brain state is more than maths or you don’t. It’s only mathematicians and computer scientists that seem to really believe the former, probably because they’ve never studied physics or sat down to think about how exactly you would go about representing the feeling of wanting to cry, to defaecate or being overwhelmed by ennui, and how little those feelings have to do with each other.


    “Largely, calling neurons “symbol processors” seems fair game to me due to the syntax/semantics dichotomy. Few people regard them as doing anything semantic, except perhaps for those who think a carrot can scream.”

    Fair game ? What kind of phrase is that ? This is science, not politics. You either have to have the science to back it or you don’t, and you don’t. Don’t feel bad about it, as nobody else has.

    As for the latter point, it demonstrates that perhaps you have still failed to grasp the difference between semantics and syntax as far as physics is concerned. A neuron is semantic, made of stuff. A symbol-processor model of a neuron is syntax – it exists in your head (if you’ll forgive the phrase), not in the real world. Collections of neurons can scream. They can feel it too. Carrots (as far as I’m aware) can’t, because they don’t have the cells that can do it. If you think brains and carrots have the same powers, it’s a low point for the logic of AI if you ask me. Although maybe Dan Dennett’s head is full of low-level vegetables – you may be right.

  32. john davey says:

    “You either think a brain state is more than maths or you don’t. It’s only mathematicians and computer scientists that seem to really believe the former,”

    – sorry – should have said “latter” of course, silly me

    J

  33. Arnold Trehub says:

    John,

    If a model of anything exists in your head, it exists in the real world.

  34. john davey says:

    Arnold

    “If a model of anything exists in your head, it exists in the real world.”

    Depends whose head it is… yes, I suppose it does exist at one level – as an idea inside an idea-realising system such as a brain. The point is that a model has no existence outside of an observer-related context and has no inherent or natural/physical properties.

  35. Arnold Trehub says:

    john,

    If a model in your head exists in the real world it must have natural/physical properties. If your subjective model is overtly expressed it will have publicly observable properties.

  36. Jochen says:

    Alec #28:

    From what little I understand of “quantum computing” it seems that (at least in some applications) it could be indistinguishable from intuition. In effect, the computer would appear to be capable of “just knowing.”

    Hmm, I’m not really sure I get your meaning here. But for any given quantum algorithm, there is a good reason why it yields the result it does—for Shor’s algorithm (prime factoring), for example, the result is encoded in a certain periodicity of the final state, which can be found via measurement (with a certain probability). In a sense, if you get the correct answer, then yes, there is no reason for that except that the measurement had the right outcome, so to speak; but this is not different from classical probabilistic algorithms. That there is a chance of getting the right outcome, however, is wholly non-mysterious (if you go through the trouble of working through the algorithm in detail).

    Hunt #29:

    How much of this is just an artifact of the word “simulation,” which implies an observer? If you just consider it a “creation” doesn’t the problem disappear?

    Well, without an observer, all you have is really just the voltage pattern being shuffled around, which doesn’t connect to anything besides itself any more than the symbol ‘apple’ connects to an apple without being interpreted.

    An example of the problem (which I think I’ve given here somewhere before) is the following: suppose you have a device with two ‘inputs’ and an ‘output’. Note that a lot of interpretive work has already been done in assuming this: for instance, we have fixed a level of description—e.g. the device level, as opposed to the molecular level—and we have assigned the notions of ‘output’ and ‘input’. If you apply a low voltage to both the inputs, you will receive a low-voltage output; if you apply a low voltage to one input, and a high voltage to the other, you will also receive a low-voltage output; but if you apply a high voltage to both inputs, you receive a high voltage output.

    Now, what logical operation does this device perform? And note again that we have already expended more interpretative work: there are only two kinds of inputs and outputs we consider, we believe the device performs a logical operation, and so on. As before, we simply take that as given.

    The crucial realization then is that it’s still, even with all this prior interpretive work, completely undetermined which operation is being implemented. The answer depends on which interpretive mapping you use: if, for instance, you interpret ‘low voltage’ as a logical 0, and ‘high voltage’ as a logical 1, then the device performs the AND-operation; if you interpret ‘low voltage’ as a logical 1, and ‘high voltage’ as a logical 0, it performs the OR-operation. Nothing about the system as such implies any interpretation more than the other.

    Worse, in using one of the interpretations above, you have introduced further interpretative assumptions: that the mapping should be the same for outputs and inputs, for example, and that it should be the same for both inputs. Nobody forces you to do that: you could as well interpret ‘low voltage’ on inputs as 0, and on outputs as 1—then, the device implements the NAND-operation. In fact, the device can be considered to implement any Boolean function of two variables; hence, its computational interpretation is wholly arbitrary, and up to the observer.

    Now, any computer is, essentially, just a network of such devices. Since we can interpret any of these devices to implement any Boolean function, it follows that we can implement the whole network as implementing any arbitrary mapping between however many inputs and outputs we have; hence, what is being computed is solely due to choices in the interpretive mapping, and moreover, without such a mapping, the question ‘what is being computed?’ simply has no answer.

    Hence, computation is a completely observer-dependent notion, and thus, can’t serve to explain the observer themselves.
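
    The two-input device above can be checked mechanically. Here is a minimal Python sketch (my own illustration, not anything from the thread; the ‘L’/‘H’ voltage labels and the dictionary encoding are just conventions I chose) that runs the device through all four voltage-to-bit mappings:

```python
from itertools import product

# The device from the argument above, as a truth table over voltage
# levels: 'L' = low voltage, 'H' = high voltage. The output is high
# only when both inputs are high.
device = {('L', 'L'): 'L',
          ('L', 'H'): 'L',
          ('H', 'L'): 'L',
          ('H', 'H'): 'H'}

# The two possible voltage-to-bit mappings.
mappings = [{'L': 0, 'H': 1}, {'L': 1, 'H': 0}]

# Let inputs and outputs be mapped independently (nothing in the
# physics forbids it), giving four candidate interpretations.
interpretations = set()
for in_map, out_map in product(mappings, repeat=2):
    # The Boolean function induced by this interpretation, written as
    # the output bits over the inputs (0,0), (0,1), (1,0), (1,1).
    table = {(in_map[a], in_map[b]): out_map[v]
             for (a, b), v in device.items()}
    interpretations.add(tuple(table[k] for k in sorted(table)))

# One and the same device comes out as AND (0,0,0,1), NAND (1,1,1,0),
# NOR (1,0,0,0) and OR (0,1,1,1), depending only on the mapping chosen.
print(sorted(interpretations))
```

    The fixed physical behavior yields AND, NAND, NOR or OR purely according to the interpretive mapping – nothing in the device itself selects one of them.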

  37. john davey says:

    Arnold

    “If your subjective model is overtly expressed it will have publicly observable properties.”

    And if it’s not expressed ? Does that mean it doesn’t have properties ?

  38. Arnold Trehub says:

    john,

    If your subjective model is not overtly expressed, it will have biophysical properties that are not publicly observable.

  39. john davey says:

    Arnold

    “If your subjective model is not overtly expressed, it will have biophysical properties that are not publicly observable.”

    Well, in my understanding, if it’s not publicly observable it’s not measurable, which means it has no properties. The brain may have neural states that indicate it is realising the idea, but they would be measurable. But what you’re measuring in that case is the realisation of the idea in that brain at that instant. An idea like “1” can’t be tied to an instance of the idea in a single brain.

  40. Alec says:

    Thanks all for clarifying my thought. By “just knowing” I meant the outward appearance of intuition as in comment #10, which might well arise from a superposition state as opposed to a practically impossible conventional exhaustive search.

  41. Arnold Trehub says:

    john: “An idea like “1” can’t be tied to an instance of the idea in a single brain.”

    Why not?

  42. Hunt says:

    Jochen,
    It seems to me you’re describing “the implementation problem”:

    http://plato.stanford.edu/entries/computation-physicalsystems/

    The issue isn’t the capability of computation, you just don’t believe abstract computation can be instantiated physically, which is different than saying that computationalism is false. For instance, how does your argument not apply to uncomputable functions? A purportedly uncomputable mind, given an input, produces an output. The output from an uncomputable mind can be misinterpreted by an observer just as readily as the output of a logic gate. E.g. tomorrow I could interpret “good morning” as “go to hell”. I think Kripke’s reply to the ambiguity of language is reliance on social convention, but now I’m really overstepping the bounds of my expertise. But doesn’t this kind of make sense? The gap between syntax and semantics must be bridged by rules outside the system under consideration.

  43. Jochen says:

    Hunt:

    The issue isn’t the capability of computation, you just don’t believe abstract computation can be instantiated physically, which is different than saying that computationalism is false.

    But certainly, any computationalist account would have to first settle the question of implementation—after all, if you want to claim that *this* computation produces a mind, you have to somehow determine that a brain implements this computation. And this, it seems to me, is not computationally possible.

    For instance, how does your argument not apply to uncomputable functions?

    It does: this is why I need an account of intentionality first. With this, I can then identify meaningful symbols, that refer to things in the world; that have semantics, and aren’t mere syntax. Thus, the manipulations that are carried out on these symbols also acquire a semantic character—the symbol for ‘apple’ might combine with that for ‘tree’ to form ‘apple tree’, without (ideally) there being any freedom of interpreting it as something different. However, that account of intentionality can’t itself be computational, or else we’d merely kick the problem up the ladder.

    The output from an uncomputable mind can be misinterpreted by an observer just as readily as the output of a logic gate.

    There’s really no ‘misinterpretation’ in a computational scenario, because this implies that there’s a ‘right’ interpretation in the first place. But if my above argument is right, then there isn’t one; all interpretations are just as arbitrary, and moreover, dependent on the external interpreter.

    So the idea is to get rid of the interpreter by using self-interpreting symbols, that supply their own meaning, and use those in place of the formal, syntactic ‘symbols’ that are used normally in computation, which always require arbitrary choices to correspond to any particular computation, or simulation, or what have you.

  44. John Davey says:

    “john: “An idea like “1” can’t be tied to a instance of the idea in a single brain.””

    “Why not?”

    Because then we couldn’t communicate, as your idea of “1” would not be the same as mine. Can you conceive of different conceptions of “1” ? “1” is an idea in the public space. It’s not a private notion, any more than the English language is.

  45. Hunt says:

    Jochen,
    You’re correct, of course. Inability to implement a computational mind would be a rather great impediment to computationalism. Not sure what I was going after there, perhaps that they’re separate, but related, questions. I think I’ll ponder this all some more. Thanks for clarifying things for me.

  46. Arnold Trehub says:

    John: “Can you conceive of different conceptions of “1” ? “1” is an idea in the public space. It’s not a private notion, any more than the english language.”

    Yes, I can conceive of different conceptions of “1”. It is not an idea in the public space. It is a *symbol* in the public space that may have various publicly expressed conventions concerning its meaning. The meaning of words in the English language is constrained by our dictionaries, but may differ in some private way in each individual brain. The problem of interpersonal communication is not a settled matter. See The Pragmatics of Cognition in “Overview and Reflections” on my ResearchGate page.

  47. john davey says:

    Arnold

    You are going to have to give me a concrete example, I don’t understand.

    I understand that words may have nuances and slight differences of meaning in complex circumstances, but the possibility of communication is kind of predicated upon the idea that words have the same meaning for everybody. Otherwise this communication, for example, would be rather difficult.

    For instance – what differences could there be between my understanding of “1” and yours ? I’ll need some kind of shape to grasp what you’re trying to explain.

    J

  48. Arnold Trehub says:

    john,

    If we express “1” in math formalisms, I believe that there is considerable overlap in our understanding of the symbol. But I also conceive of “1” as a single instance of an exemplar, or as a symbol that represents the quality of something. The meaning of the symbol depends on the perceived context in which it is expressed and you and I may not be imagining the same context at the same time. This problem is compounded over individuals with different educational and life experiences.

  49. Sci says:

    @ John Davey: “state machine states are incoherent and disconnected.”

    Can you elaborate on this? Is the idea here that even a translation program – to go back to the infamous Chinese Room – doesn’t actually have memory of the language between discrete states?

    I do wonder about how a physical equivalent of human translation manages to “know” anything. Wouldn’t this suggest knowledge of Chinese has some kind of isomorphism to a physical structure, no matter how that structure is realized? (Taking us back to all the fantastical ways to implement a Turing Machine.)

    Seems the better path would be the eliminativist take: that even we don’t know anything, let alone Chinese? Admittedly a claim that requires extraordinary evidence from science, yet at least it seems in line with materialism’s starting assumptions?

  50. john davey says:

    Arnold

    ” The meaning of the symbol depends on the perceived context in which it is expressed and you and I may not be imagining the same context at the same time”

    OK .. but indulge me and give me an idea of how my “1” might be different to yours. Give me a picture.

    J

  51. john davey says:

    Sci

    “Can you elaborate on this? Is the idea here that even a translation program – to go back to the infamous Chinese Room – doesn’t actually have memory of the language between discrete states?”

    It’s more that there’s no connection between states other than in the eye of the observer. There is no reason, for example, why a state machine need have the same physical architecture from one state to the next, as long as the observer’s logical view of matters is consistent. IBM mainframe one state, desktop PC the next. They don’t have to be the same physical machine.

    There are usually harrumphs about “impracticality” from the computationalists when you say things like this (as if “practicality” played any role in thought experiments), but in this case they would of course be wrong, as cloud-based computing resources are provided in exactly this way. The machines are virtual and can’t be said to be running anywhere – the decoupling of the computer from the physical has never been more evident than today. Anybody using a cloud resource is appreciating at first hand that computation is a logical, not physical, resource.

    Computers that translate don’t “know” anything. The way I see it, the computer is not analogous to a sentient thinking object: it’s analogous to a value-added telephone and a value-added book. A computer allows information to flow, in a structured but undetermined way, from the software author and the authors of the resources his programs use, over the media that the computer supports, to users of that program. It’s still a person-2-person information flow, even if there’s nobody listening and there’s nobody sending. A book is still person-2-person communication even if nobody’s reading it.

    There isn’t even a need for computers to “understand”. They are playing the same role as gas molecules do when two people have a conversation in the same room. No-one would think that vibrating gas molecules “think” when they conduct sound. They are a medium of communication, and no more. And that’s what computers are, only with a few bells and whistles attached.

    As for the other point, I don’t think the Chinese room was as good as it could have been, as analysis of it focussed unnecessarily heavily on the meaning of the word “understand”. I think it’s a vague (or rather, widely scoped) word, but in this context at the very least it must include some ability to grasp semantics – to “know” what it is a word is referring to – which the Chinese room evidently does not and cannot.

  52. Arnold Trehub says:

    john,

    I picture the Denver Broncos who just won the Super Bowl as number 1 in the NFL, versus you picture “a partridge in a pear tree” as an exemplar of 1 Christmas gift.

  53. john davey says:

    Arnold

    No, I should have been more specific. What are the sorts of differences of the meaning of “1” that could arise between me and you ?

    JBD

  54. Arnold Trehub says:

    john,

    You see “1” and think it means something in a formal math proposition. I see “1” and think it means that a team labeled “1” is the best team in its league. This is the kind of cognitive difference that can easily arise.

  55. john davey says:

    Arnold


    john,
    You see “1” and think it means something in a formal math proposition. I see “1” and think it means that a team labeled “1” is the best team in its league. This is the kind of cognitive difference that can easily arise.

    Yes – there are multiple meanings connected to symbols ( 7 meanings to the word ‘set’ in English ? ) – such is obvious – but I’m interested in your suggestion that specific ideas can be tied to specific brains.

    For instance, let’s take the idea of “1” in its mathematical, numerate sense, as in the integer before 2.

    I would like to know how this particular notion could differ between one brain and another, such that an idea could be said to be tied to one brain and can’t be a public one.

    If you remember, originally we were talking about mathematical models, and your suggestion was (unless I misunderstood) that such models are not public, but private and different between one person and another.

  56. Arnold Trehub says:

    john: “For instance, let’s take the idea of “1” in its mathematical, numerate sense, as in the integer before 2.”

    I agreed that if we restrict our understanding of “1” just to the math concept that you propose, there would be considerable overlap of our private models. The problem arises because there is often no such inter-subjective agreement about the referents of our expressed words. This problem has surfaced in another forum about the referents of the word “perception”.

  57. john davey says:

    Arnold

    “I agreed that if we restrict our understanding of “1” just to the math concept that you propose, there would be considerable overlap of our private models.”

    How can there be an “overlap” without the idea of a common reference frame ? Your answer implies that “1” (in the mathematical sense) is most definitely a public idea, or “overlap” makes no sense.

    JBD

  58. Arnold Trehub says:

    John,

    My use of “overlap” was to suggest a common reference frame. I thought that would be understood. An example of the problem of inter-subjective agreement of the meaning of words and symbols?
