Kenneth Rogoff is putting his money on AI to be the new source of economic growth, and he seems to think the Turing Test is pretty much there for the taking.

His case is mainly based on an analogy with chess, where he observes that since the landmark victory of “Deep Blue” over Kasparov, things have continued to move on, so that computers now move in a sphere far above their human creators, making moves whose deep strategy is impenetrable to merely human brains. They can even imitate the typical play of particular Grandmasters in a way which reminds Rogoff of the Turing Test. If computers can play chess in a way indistinguishable from that of a human being, it seems they have already passed the ‘Chess Turing Test’. In fact he says that nowadays it takes a computer to spot another computer.

I wonder if that’s literally the case: I don’t know much about chess computing, but I’d be slightly surprised to hear that computer-detecting algorithms as such had been created. I think it’s more likely that when a chess player is accused of using illicit computer advice, his accusers point to a chess program which advises exactly the moves he made in the particular circumstances of the game. Aha, they presumably say, those moves of yours which turned out so well make no sense to us human beings, but look at what the well-known top-notch program Deep Gambule 5000 recommends…

There’s a kind of melancholy pleasure for old gits like me in the inversion which has occurred over chess; when we were young, chess used to be singled out as a prime example of what computers couldn’t do, and the reason usually given was the combinatorial explosion which arises when you try to trace out every possible future move in a game of chess. For a while people thought that more subtle programming would get round this, but the truth is that in the end the problem was mainly solved by sheer brute force; chess may be huge, but the computing power of contemporary computers has become even huger.

On the one hand, that suggests that Rogoff is wrong. We didn’t solve the chess problem by endowing computers with human-style chess reasoning; we did it by throwing ever bigger chunks of data around at ever greater speeds.  A computer playing grandmaster chess may be an awesome spectacle, but not even the most ardent computationalist thinks there’s someone in there. The Turing Test, on the other hand, is meant to test whether computers could think in broadly the human way; the task of holding a conversation is supposed to be something that couldn’t be done without human-style thought. So if it turns out we can crack the test by brute force (and mustn’t that be theoretically possible at some level?) it doesn’t mean we’ve achieved what passing the test was supposed to mean.
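The brute-force point can be made concrete with a toy sketch. This is ordinary exhaustive minimax over a stand-in game, not chess (real engines add alpha-beta pruning and elaborate evaluation functions), but it shows what the machine is actually doing: mechanical look-ahead, with no human-style reasoning anywhere.

```python
# Toy brute-force game search: exhaustive minimax over every line of play.
def minimax(state, depth, maximizing, moves, evaluate):
    """Search all continuations to `depth` plies and back up the values."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        return max(minimax(c, depth - 1, False, moves, evaluate) for c in children)
    return min(minimax(c, depth - 1, True, moves, evaluate) for c in children)

# A stand-in "game": states are integers, each move adds 1 or 2, and the
# leaf value is just the number's parity (+1 for even, -1 for odd).
moves = lambda s: [s + 1, s + 2] if s < 10 else []
evaluate = lambda s: 1 if s % 2 == 0 else -1

# The minimizer moves last and can always flip parity, so the result is -1.
best = minimax(0, 4, True, moves, evaluate)
```

The tree grows exponentially with depth, which is exactly why raw capacity, rather than subtle programming, ended up being the decisive factor.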

In another way, though, the success with chess suggests that Rogoff is right. Some of the major obstacles to human-style thought in computers belong to the family of issues related to the frame problem, in its broadest versions, and the handling of real-world relevance. These could plausibly be described as problems with combinatorial explosion, just like the original chess issue but on a grander scale. Perhaps, as with chess, it will finally turn out to be just a matter of capacity?

All of this is really a bit beside Rogoff’s main interest; he is primarily interested in new technology of a kind which might lead to an economic breakthrough; although he talks about Turing, the probable developments he has in mind don’t actually require us to solve the riddle of consciousness. His examples, from managing the electronics and lighting in our homes to populating “smart grids” for water and electricity that help monitor these and other systems to reduce waste, actually seem like fairly mild developments of existing techniques, hardly the sort of thing that requires deep AI innovation at all. The funny thing is, I’m not sure we really have all that many really big, really new ideas for what we might do with the awesome new computing power we are steadily acquiring. This must certainly be true of chess – where do we go from here? Keep building ever better programs to play games against each other, games of a depth and subtlety which we will never be able to appreciate?

There’s always the Blue Brain project, of course, and perhaps CYC and similar mega-projects; they can still absorb more capacity than we can yet provide. Perhaps in the end consciousness is the only worthy target for all that computing power after all.

41 Comments

  1. Doru says:

    I am doing an experiment trying to see if I can buy AI in the form of intelligent feedback comments.

  2. Kar Lee says:

    Peter,
    I believe some (perhaps many) people would actually fail the Turing test. So passing the Turing test may not really mean much. It is quite an interesting thought to actually try this on ourselves and see how we fare… hmmm…

  3. David says:

    This seems like a bit of a smokescreen to me. The biggest effect AIs will have will be in strategic planning: economic and political for sure, but military use will be the primary driving force behind any economic growth that results. Of course there will be more mundane lifestyle changes like the ones mentioned, but they will not occur until massive legislative changes have been made, before which time I think that an AI “brain race” conducted by military/private researchers will be by far the most important economic factor.

  4. Vicente says:

    I completely agree with Peter.

    For some years now, operations research and optimisation techniques have been mature enough to cope with most strategic planning problems, and computing power is sufficient to run the most complex and data-loaded algorithms. I think it is a matter of political will, and wars between economic lobbies, that is hampering the solution of these problems.

    Heavy scientific simulations, as in climatology, cosmology, atomic and molecular physics, etc., definitely require more powerful computing machinery to achieve results in a reasonable computing time.

    Just a remark: looking at the history of technology, we can see that many (most?) technological breakthroughs that have opened new markets and fostered the economy have usually come as a surprise, in many cases contradicting the “gurus” of their time. So who knows what comes next. Whatever it is, I hope it comes quickly, considering the economic crisis we are going through.

    And what really is AI in terms of computing? Where is the borderline between “ordinary computing” and AI? Adaptive algorithms that learn from history or experience? Fuzzy logic? At the end of the day you have a processor adding numbers according to an algorithm (programmed by a human being); whether it does so just to add numbers or to choose the best move in a chess game, does it matter?

    I will not comment on any connection between AI and consciousness, which I find absurd. If anybody compares two “Deep Blue” machines playing each other with two chess masters playing, it is because they are only taking into account a small part of what a chess game is about; the remaining part has to do with consciousness. The same would apply to two robots playing tennis, which in terms of computing is much more complex than chess, since it involves an infinite number of possible games, while chess only involves a finite number.

  5. Paul Bello says:

    I find it a bit humorous that you guys are talking about how AI *will* be commercialized. Fact is, it *has* been commercialized: in the form of GPS, umpteen cellphone apps, computer games, all sorts of clustering algorithms applied to internet search, Roombas, and on and on… It’s just that when these technologies go big time, nobody considers them AI anymore. So the problem seems to me to be that the moniker “AI” is a bit of a moving goalpost.

    If you want to consider bona fide “intelligence,” I think we need to take that in terms of the only systems we know to possess it — that is, human beings with all of their irrationality, etc. Now, instead of talking about AI, what we’re really talking about is computational cognitive modeling — reproducing human-level competency (and (mal)performance) in all of its robust, ugly glory. Indeed, this is a serious problem for those of us interested in the kinds of issues Turing introduced regarding how to evaluate these sorts of systems, since it’s hard to set an a priori standard for the minimal set of tasks that ought to be successfully performed by such a system, and the average rates at which performance comes to asymptote. Some folks have suggested psychological batteries (including IQ tests and other psychometric tools). I’m not sure I’m convinced. I’m not sure that evaluation is even possible.

    My sense is that the principal component by which we understand other things in the world to be of the natural kind “person” (of which there can potentially be serious uncertainties in the case of vegetative and comatose patients) is by way of simple perceptual cues — if it looks and acts like a duck, then… Of course we expect things that look and act like ducks to also quack like ducks, so the Turing-style evaluation of communicative capacity is important, but it may be a second-level sort of test, applied after we’ve established that what we’re seeing ought to be able to communicate like a person. We see exactly this kind of thing with human infants — they use skin texture and perception of biologically plausible ballistic motions to distinguish human reaches toward a goal object from similar robotic reaches.

    All of this aside, I don’t suspect it’s computational power that is at issue in why we haven’t built intelligent machines. After all, people still use the ancient “blocks world” task to evaluate certain planning systems, and they still can’t handle large numbers of blocks or any serious dynamism after 30-odd years of research. Sounds to me like an algorithms issue. One has to suspect that brains evolved in hostile environments, where much reasoning needed to be fast (and probably computationally cheap) — so more processing power may really be the wrong tree to go barking up.

  6. Lehooo says:

    I don’t know if it is correct to say that the Turing test is meant to test whether computers could think in a human way. As I understand it, it is meant to test whether another human could tell whether he or she was conversing with a human or a computer. In other words, it is a test of the behavior rather than the thought processes.

    In any case, a brute force solution to creating an AI capable of passing the Turing test seems unfeasible. The only reason a brute force approach to playing chess works is that it is trivial to work out the possible moves at any given time, and probably fairly easy to create an algorithm to determine whether one move is better than another. All you have to do is explore the possible moves, look at the state of the game after each move, and determine whether that state is closer to a winning situation for you. Human interaction, on the other hand, seems less amenable to a brute force approach. Just working out all possible “moves” at any given point in a conversation seems to require a ridiculous number of calculations, so even if this is technically solvable by brute force, the computing power necessary seems almost beyond imagination. As for determining the future outcome of a move: in an attempt at fooling a human into believing that you are also human, this would consist in evaluating whether the move makes the human more, less or just as likely to be fooled as before, and that seems to be a problem that simply will not be solved by brute force.
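Lehooo's contrast between the two search spaces can be made concrete with some rough arithmetic. Both branching figures below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope sizes for ten "moves" of look-ahead.
chess_lines = 35 ** 10        # ~35 legal moves per position is a commonly
                              # quoted average for chess; ten plies deep
utterances = 10_000 ** 10     # ten words drawn from a modest 10,000-word
                              # vocabulary, as a crude model of one reply

# The conversational space is larger by roughly 24 orders of magnitude.
ratio = utterances // chess_lines
```

Even under this crude model, the conversational "game tree" dwarfs the chess tree, which is the nub of the argument against brute-forcing the Turing Test.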

    Until we have a theory of how a human goes about interpreting the statements of the person he or she converses with, we will not be able to describe an algorithm for determining how good or bad a certain statement is from the computer’s perspective. And once we have such a theory, the brute force solution will not be necessary: we will have a model of at least the part of the human mind that handles conversations, and so we could create an AI that would be able to pass the Turing test in a much better way than with brute force. At least that is how it seems to me.

  7. Shenonymous says:

    Ever since AI exploded onto the social scene, I’ve been intrigued by its promises. When an AI can be constructed that can solve some of the really big social issues, like what to do about world conflicts in particular places, e.g. the Middle East, Sri Lanka, etc., and how to effectively neutralize the enmity between the Democrats and Republicans in America (that is, provide an algorithm for the steps that ought to be taken to actualize these solutions), then I will be utterly impressed. Won’t you?

  8. Lloyd Rice says:

    What do we do with massive computer power?

    Or is that really the question? Is there another way to “compute” consciousness?, to make computers “seem” more like living things?

    The latter question seems to come up more and more often. Two research camps appear to have in common the idea that massive power is not the answer. MIT’s Rodney Brooks and CalTech’s Carver Mead would both set aside the massive power in favor of a more nuanced approach.

    Paul’s last paragraph in comment 5 seems to refer to this dichotomy. Perhaps CYC is not the answer. Could it be that when it comes down to the final analysis, Blue Brain will discover a network of clever circuits connected together in clever ways that we might never have thought of? Peter, you refer again to the frame problem. I submit that if there is such a problem, it will succumb not to massive capacity, but rather to a more nuanced approach.

    My minor observation on the matter is this: For several years I have been studying Anderson’s ACT system at CMU. Anderson has devoted a great deal of his energy to making sure that the system is constrained in ways that cause its performance to parallel that of human subjects. Memories are hard to reach and die out if not used. Procedures get reinforced if they are successful and fade away if not. A recent thought of mine went more or less like: Why not dispense with all of these apparently artificial limitations? Why not just let the computer do all it can do? After all, I’m not constrained by the politics of a psychology department.

    But, if I read one of Anderson’s recent books correctly, it seems that the built-in limitations, automatic fade-outs, etc. are actually integral to the learning process. Sure, massive power can do whatever it does faster, but it may be that our intellectual capacities in fact arise from the very constructions which seem to be among their most severely limiting constraints.

  9. Vicente says:

    It might be useful for the discussion to make clear the difference between AI as the development of advanced algorithms (programmes, applications, …) that try to mimic the human way of tackling complex problems, and AI as a research field that tries to explain features of human cognition and brain function using computer models. Although they are related, they are quite different in their contents and aims. I think the text refers to the first case, which can progress independently of the second.

  10. Shenonymous says:

    Mr. Rice, you ask, “Why not dispense with all of these apparently artificial limitations? Why not just let the computer do all it can do?” What do you imagine a computer is capable of doing of its own volition? This question is related to Vicente’s comment at #9: the implication that an AI as expressed in a computer would have ‘aims’ is one I find odd. I have no experience with AI beyond that of the ordinary layperson who uses a computer and some software, but no direct computational software. I know virtually nothing about programming except a very superficial recognition of what it might look like, from having learned something even more rudimentary than BASIC an eon ago. So consider me very naive about this subject. I’ve never been interested in that side of computing; I just love what it does. I have a hard time perceiving that a computer would ever take over the function of intentionality and teleology without human instruction.

  11. Lloyd Rice says:

    OK, Shenonymous: I see I have to be more careful in this “AI sensitive” environment when talking about what a computer can do. Here, I was simply saying that if the program were written so as to maximize memory accesses and rule applications, and did not have to deal with decaying memory activity ratings and rule expiration coefficients, then the massive computational power could be put to better use. What I did not realize is that the decays and expirations appear to be an integral part of the way the memory and rules must be organized in order to provide the appropriate learning environment.

    However, as to your other point about what kinds of things the computer ultimately can or cannot do, I would only point out that the whole point of trying to understand how to write a computer program that can learn deals with understanding what learning really is and how that relates to teleology, self-organization and, ultimately, intentionality.

    I believe we are on the threshold of learning a great deal about what learning really involves. How can it be that a newborn chimp or human can “learn” so easily? What innate structures give rise to this capability? And how does it all happen? I just saw the “chimp” episode of Alan Alda’s “Human Spark”. The subtleties are fantastic.

  12. Vicente says:

    Lloyd – A newborn chimp or human can learn so easily? Compared to what? Not to mention that a chimp and a human differ by orders of magnitude where learning is concerned.

    I don’t see how anybody can deem intelligent the simple processes that occur in a computer processor according to a human-written algorithm. Whether you reinforce or weaken rules, or make memories decay or not, is irrelevant; it is all part of the algorithm, and ultimately it comes down to byte words moving from one register to another according to simple assembler code. Computers don’t support phenomenological experiences (well, no more than a stone in a field), and it is those experiences that make the idea of learning meaningful.

    Can anybody explain to me how a certain memory (of anything) is stored in the brain? Let’s keep it simple: can anybody explain to me how “2” is stored in the brain? I can perfectly explain how “2” is stored in a digital memory. This kind of problem should be solved before anybody can claim that AI is of any use in understanding consciousness and brain functions. Mind you, I am not saying that the brain does not store memories by some physiological mechanism; there is strong evidence of the effect that brain damage can have on memory. This has to do with the binding problem.

    By the way, “2” is a conscious concept; there is no “2” in a computer. In the computer there are only electronic components in different states, like a stone in a field. Just as on a computer screen there are no texts or images: the texts and images are in your mind, and on the screen there are only areas of different brightness, like on a stone in a field.

  13. Lloyd Rice says:

    Vicente: Certainly computers “move bytes from one register to another”, etc. etc. I do not question that. But we do know how to make computers that can recognize faces, words, etc. Suppose you have a simple visual processor that can focus on a simple scene showing a workspace. You place one block on the table and type in “1”. You place two blocks on the table and type in “2”, etc., etc. The program has been designed so as to associate patterns extracted from the video input with arbitrary keyboard input. Can you then not say that the typed input “3” has the “meaning” of the image of a tableau containing a triple of objects? If not, what else does “mean” mean?

  14. Lloyd Rice says:

    Perhaps I should not have used the characters “1”, “2”, “3”, etc. to make my point. Obviously, “A”, “B”, “C” or “?”, “%”, “@” would work just as well. What the program does is to learn to associate one input with another. What we need to do is to get down to the basics of just what it means to do that. How does that compare to a baby looking at a block and hearing the word “block”?
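Lloyd's blocks-and-keyboard scenario amounts to a bare associative memory. A minimal sketch, with the block count standing in for real visual feature extraction (which the sketch obviously omits):

```python
# Minimal associative memory: pair an extracted "percept" with whatever
# symbol the teacher types, then recall the symbol from the percept alone.
class AssociativeMemory:
    def __init__(self):
        self.table = {}

    def associate(self, percept, symbol):
        """Learn one percept-symbol pairing."""
        self.table[percept] = symbol

    def recall(self, percept):
        """Return the associated symbol, or None if never taught."""
        return self.table.get(percept)

mem = AssociativeMemory()
mem.associate(1, "1")   # one block on the table, teacher types "1"
mem.associate(2, "2")
mem.associate(3, "@")   # the symbol is arbitrary, as Lloyd notes
```

Whether this counts as the typed symbol "meaning" the tableau is exactly the question the thread goes on to argue about.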

  15. Doru says:

    Lloyd,
    I always say to myself, “Nothing in the world has any meaning other than the one you put on it”. Formal systems and processes (electronic circuits with associated software) cannot learn, because essentially they have to stay bounded within their formal rules (they are essentially closed to the environment; see Gödel).
    From what I recently saw of Ramachandran, human learning has to be of memetic nature. Through mirror neurons, selective imitation gets transferred horizontally, much faster than evolutionary learning.

  16. Lloyd Rice says:

    Doru: Don’t be so sure about meaning. First of all, Goedel does not apply because those conclusions apply to a fixed function, in software terms, a pre-programmed system with preset, fixed input. As soon as you allow open-ended, real-world inputs, the logic no longer applies.

    But the rest of the issue has to do with what it means to learn. I certainly agree that a human has a wide variety of “auxiliary” systems, such as the mirror system, which enhance learning in various ways. Another vital system would be the emotional responses mediated by the amygdala. Not to mention the complex ways that memories are maintained by the hippocampus. Some of these systems have been partially simulated in various contexts, but to my knowledge, no one has attempted to program a memory system that would include more than a few, limited examples of such enhanced capabilities. And no computerized version of a human mind without all of these “add-ons” could be said to be complete. That is a tall order. Still, I will argue that each of these auxiliary systems is, in principle, programmable in the same ways as “basic” memory itself.

    My claim is that the basic learning process is comparable between software and living things. What do I mean by “comparable”? For example, I noted above the issues of memory decay and fading relevance appear to be an integral part of the learning process. I claim that it is essentially irrelevant whether these details are implemented by neural firings or transistor storage. It is the system-level organization that must be compared. We need to fully understand the learning process, but I see no reason that cannot be done with computational architectures.

  17. Paul Bello says:

    Hi Lloyd,
    I’m quite familiar with ACT-R and its variants. Firstly, just to clarify what I meant in my initial post — it’s probably not so esoteric as what you’ve suggested. CYC is an effort to knowledge-engineer everything in existence. This doesn’t bear on computational power (i.e. RAM and more processor power), which I don’t think has really affected the bottom line in achieving human-level AI. Most of us who do both computational cognitive science (à la ACT and its competitors) and/or AI know what some of the deep problems are. Mostly I’ve found that our desire to build systems that we can prove things about (a rather dubious desire in this context, from my perspective at least) has gotten in our way of building systems with more human-like capability. The breath of fresh air from the ACT folks (and others) is that they don’t attempt to provide normative explanations of “intelligence” with requisite proofs. They adopt the eminently pragmatic strategy of studying and trying to replicate the goods and bads of the only system we can all agree is intelligent — the human.

    I’ve been trying to follow this discussion of memory, decay and the like. ACT, Soar and the rest don’t assume “deletions” of items from memory. The decay parameters in ACT-R exist to make sure that the memory items most relevant to the task at hand are “active” and available for use, relative to what we know about how many items humans typically are able to entertain. However, nothing ever gets “deleted” — it just goes below the activation threshold. Since activation in ACT-R only spreads one level deep from a particular item in semantic/declarative memory, this provides the bound on over-searching. It’s unclear whether we as humans have any real “deletions” from memory either, as far as I know. The activation/decay scheme is crucial to modeling semantic priming, fan effects in memory and other phenomena; they are modeled with simple tree-like structures that capture associations between memory items, and numerical values associated with how “active” each element is. Since we haven’t neuroimaged or single-cell recorded declarative memory, or activations, or threshold parameters, it’s tough to say that this kind of computer learning is comparable to biological learning in a direct sense. The only real evidence we have is how this particular machine learning scheme reproduces behavioral data (and errors) in simulation. So even at the “system level” we don’t yet have an analogue. It does capture lots and lots of data very well though.

  18. Lloyd Rice says:

    Paul: I agree entirely. My plan is to use what I can of the ACT system to implement a “cognitive” core for a conversational dialogue system. Thus, I am not concerned with the emulation of human-like timing and performance, for which so much effort has gone into the ACT system. But I do want to retain the learning capabilities and the generality for which ACT seems rather well suited. For example, I have given some thought to a different rule compiler mechanism which would have access to “external” system-level information not available to the ACT rule builder.

  19. Vicente says:

    Doru:

    “From what I recently saw of Ramachandran, human learning has to be of memetic nature. Through mirror neurons, selective imitation gets transferred horizontally, much faster than evolutionary learning.”

    So how is new knowledge produced? Mimetic strategies are only a part, the one to do with cultural heritage. The nicer part has to do with creative processes, which are the opposite of mimetic ones.

    I deduce from my readings (I might be completely wrong) that the mirror neuron system is related to social sympathy and to understanding other minds, rather than to learning, isn’t it?

    – You said: “I claim that it is essentially irrelevant whether these details are implemented by neural firings or transistor storage.”

    Storage in electronic memories is well understood; you must understand how storage is achieved through “neural firings” before making your claim. Can you explain how “2” is stored in terms of “neural firings”? Otherwise you are saying: I claim it is this way, I just need an explanation for it.

    Of course, I agree that computer models can be used to understand how neural networks work, or to simulate synapses and many other things, but they shed no light on the consciousness problem.

    Lloyd – from #13: the computer does not recognise anything; it produces a result that “you” interpret as a face recognition. The computer does not “know” whether it is matching a face, or matching a radar echo to detect a friend-or-foe fighter, or performing any of so many other pattern recognitions.

    When a baby looks at a block and hears the word block, there is a whole phenomenological experience based on qualia behind it. Second, one thing is the object block, and another the subjective concept of a block. One thing is the sound of the word block, and another the “word” block. I think it is better to look at how a baby learns to walk, to play tennis, or to play guitar; those are trial-and-error processes much closer to computer “training”.

    “Can you then not say that the typed input “3” has the “meaning” of the image of a tableau containing a triple of objects?”

    I can: the input “3” or “C” is meaningless for the computer, because everything is meaningless for the computer.

    Computers are just tools to help the human brain do things faster, or to automate processes, in the same way that bicycles are a tool to make the human locomotive system faster.

  20. Vicente says:

    Regarding speech simulators and Turing tests:

    Suppose you have a speech simulator with no random number generator involved in the algorithm. Would it always be possible to predict what the answer to a particular input is going to be, according to the algorithm?

    Would that be possible in the case of a real person’s speech?
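For the no-randomness case Vicente describes, the answer is yes by construction: a simulator with no random element is a pure function of its input, so every reply is predictable in principle. A minimal sketch (the reply rules are toy placeholders, assumed purely for illustration):

```python
# A deterministic speech "simulator": same input, same output, always.
def reply(utterance):
    if "hello" in utterance.lower():
        return "Hello there."
    if utterance.endswith("?"):
        return "I wonder about that too."
    return "Tell me more about " + utterance.split()[-1] + "."

inputs = ["Hello!", "Do computers think?", "I prefer chess"]
run1 = [reply(u) for u in inputs]
run2 = [reply(u) for u in inputs]
# run1 == run2 on every run: the algorithm fully determines the answer.
```

The open question is the second one: whether a real person's speech could even in principle be predicted this way.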

  21. Doru says:

    Lloyd,
    I have yet to find an unbounded system that works.
    It means that external inputs would somehow be able to change the original programming into new program loops that interpret the “meaning” of the new inputs. It means that the program is nondeterministic, which I don’t think will ever work.
    Vicente,
    It seems that the argument here is Creationism/Intelligent Design versus Mindless/Lamarckian/Heuristic Evolution. I am more curious about the second view.

  22. Vicente says:

    Doru: Not at all; I am not a Creationism/Intelligent Design follower, which I believe is a sort of superstition when applied to nature.

    But it is funny that you say that, because if Creationism/Intelligent Design can be applied to anything, that is definitely computers, which are “CREATED” by the “INTELLIGENT DESIGN” of human engineers.

    Can you explain to me how the brain stores “2”?

  23. Paul Bello says:

    Hi Lloyd,
    V. interesting! I have a friend who works here with me in DC who did a little bit of work on speech acts using ACT-R embedded on a robotic platform. See http://csjarchive.cogsci.rpi.edu/proceedings/2009/papers/674/paper674.pdf for details. Please keep me abreast of your progress. I’d be very interested to hear. I myself am somewhat skeptical about ACT-R (as it stands) for capturing pragmatic phenomena for a whole bunch of reasons you can feel free to ask me about offline. While I don’t develop computational linguistic models myself (at least not yet), I do work in a related area — modeling speaker beliefs, intentions, etc — all of which you need to disambiguate ambiguous utterances.

    Doru, Vicente: There is no gold standard for meaning. Some philosophers are convinced that meaning corresponds to truth conditions — others are total relativists about truth. Some feel that meaning can only “exist” in the mind of someone interpreting a proposition, and some think that all meaning is “out there in the world.” What I’m getting at is that the discussion you’re trying to have is only interesting on some mutually agreeable definition of what it means to mean something…which it’s not clear to me that you two share. What’s unhelpful though is making claims to the effect that representations (in computers) are “meaningless” without qualification.

  24. Vicente says:

    Paul – what does “without qualification” mean?

  25. Lloyd Rice says:

    Vicente: I believe some of your comments directed to Doru in fact made reference to things that I had said. Specifically, regarding how “2” is stored in neurons and that it is irrelevant whether it is neurons or transistors that do the work. There is a lot in the last half-dozen comments that I want to get back to. But for now, just let me say that whether evolution sees it that way or not, it still makes sense to separate the levels of a system in order to understand that system. This is relevant to my comments about how memory fades out in the ACT system. It’s not that the bits stored in transistors (or FET gates) fade out, but rather that some coefficient gets multiplied by 0.95 over and over again and when it gets small enough, the memory value will be ignored. Whether human memories are represented as patterns of synapses on an individual neuron or as patterns of firing in some loop of synaptic connections, or by some other architecture, can be and must be separated from questions about how those memories are used and what other brain regions they connect to. It’s not only possible, but mandatory, to be able to separate the levels in order to understand such a system, human or computer.
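The fade-out Lloyd describes is easy to sketch in code; the 0.95 is his figure, while the cutoff value below is an assumed placeholder, not a number from any real system:

```python
# Memory "fade-out" at the system level: nothing is deleted; an activation
# coefficient is multiplied by 0.95 again and again, and once it falls
# below a threshold the stored value is simply ignored.
IGNORE_BELOW = 0.01  # assumed illustrative cutoff

def decay(activation, steps, rate=0.95):
    """Apply the multiplicative decay `steps` times."""
    for _ in range(steps):
        activation *= rate
    return activation

a = decay(1.0, 100)              # 0.95 ** 100, roughly 0.006
retrievable = a >= IGNORE_BELOW  # False: ignored, though still stored
```

This is exactly Lloyd's point about separating levels: whether the coefficient lives in a FET gate or a synaptic weight is invisible at this level of description.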

  26. Lloyd Rice says:

    Doru: A computer operating system is unbounded in, I believe, the sense that you are talking about. Whether an operating system does anything useful might be questioned by some (snide remark), but it is open-ended in that the inputs are never known ahead of time. And yet it goes through some sequence of steps based on whatever comes in through the keyboard, mouse and network I/O ports, and makes some decision about what to do next. Actually, very few computer programs are truly “fixed functions” in the sense required by Goedel’s logic demonstrations.

  27. Paul Bello says:

    Vicente:
    When I say “without qualification,” I just mean that if we’re going to talk about meaning, we should talk about various theoretical positions and what they entail. Instead, it seems like what I got out of the thread of discussion above is that “meaning” to a computer could never be “meaning” in the way that humans are able to discern/possess it. Proposals about meaning cut across many areas of philosophy and psychology — we should bear in mind that many people have built whole careers in academia studying meaning, and we’d do ourselves a favor by being aware of the issues before making too many pronouncements about what meaning is and isn’t for a computer (or a human for that matter).

  28. Vicente says:

    Lloyd, thank you for your clarification; I understood it in the first place. It is like in expert systems based on rules and genetic algorithms: when the use of a particular rule produces a positive effect, you reinforce that rule (increasing the corresponding coefficient or weight), and the other way round. When the coefficient of a certain rule gets below a fixed threshold, you remove that rule from the rules database and add a new one, usually a combination of existing successful rules and some unsuccessful ones (for some reason I don’t get, adding unsuccessful rules to the generation of new rules improves the evolution of the solution), with average weight.
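    As an aside, the reinforcement scheme just described can be sketched in a few lines. The reward/penalty factors, the threshold, and the rule names below are all invented for illustration:

```python
import random

# Toy version of the rule-weight scheme described above: reinforce a rule
# whose firing produced a positive effect, penalise it otherwise, and once a
# weight drops below a threshold, replace that rule with a recombination of
# surviving rules started at the average weight.
REWARD, PENALTY, THRESHOLD = 1.1, 0.9, 0.2

def update(weights, rule, success):
    # Reward or penalise a rule according to the effect of its last firing.
    weights[rule] *= REWARD if success else PENALTY

def evolve(weights):
    # Replace each below-threshold rule with a "child" of two surviving rules.
    avg = sum(weights.values()) / len(weights)
    for rule in [r for r, w in weights.items() if w < THRESHOLD]:
        del weights[rule]
        a, b = random.sample(sorted(weights), 2)
        weights[a + "+" + b] = avg

weights = {"r1": 1.0, "r2": 1.0, "r3": 0.15}
update(weights, "r1", True)   # r1 fired successfully: weight becomes 1.1
evolve(weights)               # r3 is below 0.2, so it is replaced
print("r3" in weights)        # prints False
```

    This sketch omits the detail Vicente mentions of mixing unsuccessful rules into new ones; a real genetic-algorithm setup would keep some low-weight material around precisely to preserve diversity in the search.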

    My point is not technical, it’s philosophical.

    I would just like to make clear that I believe research in AI and the development of systems that try to explain and resemble human behaviour is great from any point of view, of extreme value for the knowledge of humankind, and will probably bring enormous benefits in new applications. If I have offended anybody I sincerely apologise, but I stick to my points, which in the particular field of consciousness are as valid and fair as anyone else’s (anyone who understands the problem and has an appropriate background in physics, computer science, physiology, philosophy, etc., which I believe I have, at least enough to exchange ideas and opinions). My background is physics (electrodynamics and biophysics mainly); I have also studied computing and control, and spent some time in industry developing optimisation systems for energy facilities. I have been working in government technology R+D programme assessment and funding for a few years now. The field of consciousness (the hard problem, qualia) is the only one in which, in real terms, we all know the same for the time being, which is nothing. The mind-body and consciousness problem has been my hobby for many years.

    Paul: AI is a very interesting field to be proud of, even if you don’t play at being God.

  29. Lloyd Rice says:

    Vicente: A few comments back, I listed a couple of the “add-on” functions without which the human mind would be a hopeless mess: the mirror system, emotions, memory control, etc.; you mentioned a couple of others. Another essential addition to the list would be the attention system. I believe this system plays a major role in constructing and maintaining a world model, without which there can be no consciousness.

    Peter: I believe the attention system is (at least part of) the answer to your frequently cited concerns about how to deal with the “frame problem”. I just looked up some of Minsky’s early references in which he describes the “frame”, not as a problem, but as an answer to a problem. Others later thought there might be a problem.

    Another point made several times in this post is the relationship between mind and the motor system. In addition to its clear role in perception, I suspect that the motor system also plays a major role in building up the world model. Still, I stand by my claim that each of these systems can and will eventually be programmed and implemented in an artificially “intelligent” system.

  30. Lloyd Rice says:

    Vicente: I wanted to say a bit more in reply to your point about how the brain stores memories. Even though there are many unknowns, there is also quite a lot that can be said. Referring to comment 25, I will discuss the architecture on different levels.

    At the lowest level, it is fairly clear that information is stored in the brain by place rather than by content as in a computer memory. Although a particular neuron may participate in the storage of multiple items, a given item of information will usually (allowing for long-term changes over time) involve a particular neuron or set of neurons each time that item gets activated. In a computer, a given item of information may get moved arbitrarily many times from place to place.

    At a higher level, it is equally clear that a given “concept” would usually involve several separate regions of the brain to store various aspects of the concept. For example, to store a “2”, the numerical concept itself may be in one or a few places, perhaps in the frontal lobes, the sound of the word “two” would be somewhere else, probably in the temporal lobe, the spelling in yet another place, the visual image of the written numeral somewhere between the occipital and the temporal, the visual image of two objects, somewhere else. All of these various regions would be linked by long-range axonal fibers such that they can mutually activate each other.
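    Lloyd’s picture of a distributed concept can be caricatured in a few lines of code. The region names and the link structure below are purely illustrative; the point is only that one concept lives as several traces in fixed places, wired so that activating any one pulls in the others:

```python
from collections import defaultdict

# Toy illustration of a concept ("2") stored as traces in several fixed
# "places" (brain regions), mutually linked -- a stand-in for the long-range
# axonal fibers described above. Region names are made up for the example.
links = defaultdict(set)

def link_all(traces):
    # Wire every trace of a concept to every other trace of that concept.
    for t in traces:
        links[t] = set(traces) - {t}

def activate(trace):
    # Activating one aspect of the concept spreads to the linked aspects.
    return {trace} | links[trace]

two = ["frontal:number-2", "temporal:sound-'two'", "occipital:numeral-2"]
link_all(two)
print(sorted(activate("temporal:sound-'two'")))  # all three traces light up
```

    Note the contrast with a computer memory: here each trace stays in its place and is reached through its links, rather than being copied from address to address.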

    This leaves the very interesting issue that we have discussed in an earlier post on this site, the question of single vs. multiple neuron coding. Again, much can be said, but that issue remains unresolved.

  31. Antonio Orbe says:

    Just as Deep Blue defeated Kasparov and passed the ‘Chess Turing Test’, it will be much more interesting when the Blue Gene-based Watson competes in Jeopardy. That will be a much broader Turing Test.

  32. Vicente says:

    Lloyd: Yes, I can get along with your points. I would like to react to some of them, trying to keep my comments short and bounded to the relation of AI, consciousness and memory.

    – Considering memory as data storage (simplified), it seems that humans do not store data directly; there is a whole process of creating the memory, in which the objective data input by the senses are just a part, combined with self-created data and modulated by emotions. Forensic psychologists have found evidence that witnesses in trials can report observations that are completely false while being sure they are telling the truth. It also seems that memories are modified (not just fading) over time, so the same experience can be recalled quite differently depending on the occasion. Finally, humans, unlike computers, haven’t got a systematic mechanism to retrieve memories: ask a (well prepared) student sitting an exam. So the I/O mechanisms alone (never mind storage) are very different in humans and computers. I have deliberately made no reference to the conscious/subjective experience related to memory.

    – Biological substrate of memory. Yes, since the first experiments carried out with worms, very little has been achieved (despite the great effort and fantastic job done by scientists). I am trying to do a literature review on this topic, but a preliminary study shows that most of the proposed hypotheses are still quite unsatisfactory (at least for the level of explanation I want).

    Regarding your comment that “memories are stored by place instead of content”: is there really evidence of that in the case of semantic memory disorders? If so, it is really interesting.

    By contrast, the electronic memory substrate is perfectly known. This was the basis for my comment to Doru: unless you treat the human brain and computers in a “black-box” fashion, you cannot compare them. You can do benchmarking, but that’s all. The same applies to other cognitive functions.

    – Attention: Yes, I think it is the keystone of the whole thing. I fully agree that coordination and synchronicity are crucial for conscious awareness. I am working (with great limitations) on a model that could somehow be used to understand coordination and synchronicity mechanisms and time management in the brain. I am very curious about gamma wave patterns, which seem to have a close relation with highly focused attention states.

    Finally I would like to make a general remark about the AI approach. The human brain is inclined to see conscious traits in anything that resembles human behaviour, in the same fashion that we tend to see a face in a stain on the wall the moment the stain slightly looks like a human face, or that babies have to learn the difference between puppets and living beings. This is very dangerous when we sit in front of advanced anthropomorphic robots equipped with sophisticated speech systems.

  33. Kar Lee says:

    “…The human brain is inclined to see conscious traits in anything that resembles human behaviour…”

    How true! Even with Win95 machines…. Bill Gates wrote in his 1996 book “The Road Ahead”, “We recently worked on a project in which we asked users to rate their experiences with a computer. When we had the computer the users had worked with ask for an evaluation of its performance, the people’s responses tended to be positive. But when we had a second computer ask the same people to rate their encounters with the first machine, the people were significantly more critical.”

    Many (though not all) human brains are hardwired to avoid hurting another conscious being’s feelings, and that seems to apply even to a Win95 machine (I assume Win95 because the book was published in 1996).

  34. Lloyd Rice says:

    As for the way computer memory works, I must point out that the stated descriptions are true, but need be true only at the lowest level of organization. The whole point of the way memory is organized in the ACT system is that it is flexible, it can be redesigned as needed. You can have fading, updates, context-sensitive effects, etc., etc.

    And I very much agree with the humanistic reactions that happen in various contexts. Most of that seems to be related to the mirror system, with significant inputs from the amygdala and various other subsystems in the brain. I can only repeat that I believe these systems are (in principle) programmable and that we will eventually do so, although I fear it will not happen on Kurzweil’s timetable.

    As for human reactions to computers, these are all very valid points. I do fear that our human responses to machines vs. living things vs. life-like computers will not keep pace with technology. We have only seen the beginning of that mismatch.

  35. Adrian says:

    I don’t know whether to be amazed or frightened by the fact that computers can out-think people. I wonder what effect true AI would have on the world, but I think it is still a long way off, if it is even possible!

  36. Lloyd Rice says:

    In discussing the similarities and differences between computers and living things, I have often cited the car’s engine control computer as a step between dead machines and interactive software. Now we have this recall by Toyota in which the accelerator seems to stick. Lives have been lost. When I first heard of it, it seemed clear to me that the real problem was a bug in the ECU software. It will be most interesting to see if it turns out that way in the end.

    It would not be the first time software has killed people. And it sure won’t be the last. Yes, of course, this point seems to be a long way from the discussion about emotional responses.

    Peter, I think I like the new look. I cannot yet be sure.

  37. Peter says:

    Thanks, Lloyd: I haven’t quite finished working on the refurbishment yet, and I hope there’ll be some further nice additions in due course. Comments very welcome.

  38. Lloyd Rice says:

    The one issue I’ve seen so far is that I print out a lot of this stuff (Why, you say, would I ever do that??) and I have not yet figured out how to reduce the font size on the printed copy. The on-screen view is definitely better than before.

  39. Vicente says:

    “…It would not be the first time software has killed people. And it sure won’t be the last…”

    Lloyd: Yes, I am familiar with critical systems and Safety-of-Life requirements in software, from nuclear plants to aeronautics, even in terms of liability… this is a field where technology, legal issues and ethics will have to work hard together.

    But think: for each time software has killed people, how many times have people killed people… too bad, huh?
