Picture: AI resurgent. Where has AI (or perhaps we should talk about AGI) got to now? h+ magazine reports remarkably buoyant optimism in the AI community about the achievement of Artificial General Intelligence (AGI) at a human level, and even beyond. A survey of opinion at a recent conference apparently showed that most believed that AGI would reach and surpass human levels during the current century, with the largest group picking out the 2020s as the most likely decade.  If that doesn’t seem optimistic enough, they thought this would occur without any additional funding for the field, and some even suggested that additional money would be a negative, distracting factor.

Of course those who have an interest in AI would tend to paint a rosy picture of its future, but the survey just might be a genuine sign of resurgent enthusiasm, a second wind for the field (‘second’ is perhaps understating matters, but still).  At the end of last year, MIT announced a large-scale new project to ‘re-think AI’. This Mind Machine Project involves some eminent names, including none other than Marvin Minsky himself. Unfortunately (following the viewpoint mentioned above) it has $5 million of funding.

The Project is said to involve going back and fixing some things that got stalled during the earlier history of AI, which seems a bit of an odd way of describing it, as though research programmes that didn’t succeed had to go back and relive their earlier phases. I hope it doesn’t mean that old hobby-horses are to be brought out and dusted off for one more ride.

The actual details don’t suggest anything like that. There are really four separate projects:

  • Mind: Develop a software model capable of understanding human social contexts – the signposts that establish these contexts, and the behaviors and conventions associated with them.
    Research areas: hierarchical and reflective common sense
    Lead researchers: Marvin Minsky, Patrick Winston
  • Body: Explore candidate physical systems as substrate for embodied intelligence
    Research areas: reconfigurable asynchronous logic automata, propagators
    Lead researchers: Neil Gershenfeld, Ben Recht, Gerry Sussman
  • Memory: Further the study of data storage and knowledge representation in the brain; generalize the concept of memory for applicability outside embodied local actor context
    Research areas: common sense
    Lead researcher: Henry Lieberman
  • Brain and Intent: Study the embodiment of intent in neural systems. It incorporates wet laboratory and clinical components, as well as a mathematical modeling and representation component. Develop functional brain and neuron interfacing abilities. Use intent-based models to facilitate representation and exchange of information.
    Research areas: wet computer, brain language, brain interfaces
    Lead researchers: Newton Howard, Sebastian Seung, Ed Boyden

This all looks very interesting.  The theory of reconfigurable asynchronous logic automata (RALA) represents a new approach to computation which instead of concealing the underlying physical operations behind high-level abstraction, makes the physical causality apparent: instead of physical units being represented in computer programs only as abstract symbols, RALA is based on a lattice of cells that asynchronously pass state tokens corresponding to physical resources. I’m not sure I really understand the implications of this – I’m accustomed to thinking that computation is computation whether done by electrons or fingers; but on the face of it there’s an interesting comparison with what some have said about consciousness requiring embodiment.
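As a rough illustration of the token-passing idea, here is a toy sketch of my own devising (not Gershenfeld’s actual RALA cell set or notation): a one-dimensional lattice of cells in which a single state token hops between neighbours through purely local, asynchronous updates, so every computational step is also an explicit “physical” event on the lattice.

```python
import random

random.seed(0)  # arbitrary seed, just to make the run repeatable

# A toy, hypothetical token-passing lattice (NOT Gershenfeld's actual RALA
# cell set): each cell either holds a state token or is empty, and updates
# are local and asynchronous -- one randomly chosen cell fires at a time.

class Cell:
    def __init__(self):
        self.token = None  # the token stands in for a physical resource

def fire(lattice):
    """Fire one randomly chosen cell: if it holds a token and its right-hand
    neighbour is empty, the token hops one site. There is no global clock and
    no global state -- the 'computation' is just local, physical-style events."""
    i = random.randrange(len(lattice) - 1)
    cell, right = lattice[i], lattice[i + 1]
    if cell.token is not None and right.token is None:
        right.token, cell.token = cell.token, None

lattice = [Cell() for _ in range(8)]
lattice[0].token = "T"              # inject a single token at the left edge

for _ in range(500):                # enough asynchronous firings for the
    fire(lattice)                   # token to drift across the lattice

print("".join("." if c.token is None else c.token for c in lattice))
```

Note that tokens are conserved: a cell can only pass on what it actually holds, so the number of tokens on the lattice never changes – a loose analogue of the point about tokens corresponding to physical resources rather than freely copyable symbols.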

I imagine the work on Brain and Intent is to draw on earlier research into intention awareness. This seems to have been studied most extensively in a military context, but it bears on philosophical intentionality and theory of mind; in principle it seems to relate to some genuinely central and difficult issues.  Reading brief details I get the sense of something which might be another blind alley, but is at least another alley.

Both of these projects seem rather new to me, not at all a matter of revisiting old problems from the history of AI, except in the loosest of senses.

In recent times within AI I think there has been a tendency to back off a bit from the issue of consciousness, and spend time instead on lesser but more achievable targets. Although the Mind Machine Project could be seen as superficially conforming to this trend, it seems evident to me that the researchers see their projects as heading towards full human cognition, with all that that implies (perhaps robots that run off with your wife?).

Meanwhile in another part of the forest Paul Almond is setting out a pattern-based approach to AI.  He’s only one man, compared with the might of MIT – but he does have the advantage of not having $5 million to delay his research…

66 Comments

  1. Lloyd Rice says:

    I see three clear levels of… should I say “enthusiasms”… when I look at publications in three different places. The most excited I find here and in most of the places linked to from here, JCS, etc. A lot of ideas, some wild, some reasonable-sounding and, toward the “serious” side, journals such as Cognition, etc., doing serious science. But still rather far from the second tier, which would be the publications of IEEE members in the AI groups, mach. learning, mach. intell., etc. Still a lot of wild ideas and, most curiously, a kind of carefree attitude at times as compared to the third tier, the IEEE publications in other fields, particularly signal processing and medical computation. Absolutely down to business, hard-core math, etc.

  2. Doru says:

    With lots of Field Programmable Gate Array (FPGA) design experience and as IEEE member, I was naturally intrigued by the RALA architecture.
    Passing state tokens corresponding to physical resources asynchronously through a lattice of cells DOESN’T WORK.
    Sounds good in theory, but I know a few start-ups that failed to produce any working silicon and just wasted funds on untested mumbo-jumbo.

  3. Lloyd Rice says:

    I was interested in the comments of Prof. Sharkey in the short video at the “got to now” link (in Peter’s first line above). I appreciated his careful skepticism and generally insightful remarks. However, one item particularly caught my attention. He cited one primary reason that no AI system will ever experience emotion, that being that emotions are based on chemistry. That is like saying that no video chip will ever work because the retina involves chemical reactions. Just because neurons are regulated by chemistry does not mean that the same effects cannot be achieved in other ways, using other technology.

  4. Peter says:

    Doru – I know virtually nothing about RALA, but my impression is that it’s sort of Neil Gershenfeld’s baby. Certainly a quick google suggests that many of the papers on it were written by him. Nothing wrong with that per se, of course.

    Lloyd – I interpret Sharkey as more or less sharing Searle’s view – i.e. that there is some unknown property of biological stuff which is essential for consciousness. It’s a perfectly natural, scientific property, one which can be identified by further neurological research. We don’t have much idea of what it is yet, but we have enough of a feel to know that beer cans and string, or whatever, could never have it.

  5. Vicente says:

    Lloyd: I was also interested in: “… one primary reason that no AI system will ever experience emotion, that being that emotions are based on chemistry…”

    Of course the argument is not solid, but it hides the fact that things are actually the other way round.

    The only reason for which we think that brains produce consciousness and phenomenological experiences is because THEY DO. No other reason.

    The reason for which I think that the sun rises in the East is because it does.

    So, why on Earth should anybody think that electronic circuitry (irrespective of the architecture) could produce consciousness? If we were talking about Babbage machines, with gears and mechanical wheels, nobody would suggest such a thing, but for transistors and logic gates, ah, that is different. The only difference is that they are more complex and faster.

    I think in AI there is a lot of marketing, and a need to raise funds in competition with other projects. So if you just say you are doing computing, it is not as appealing as if you say you are trying to produce human-like conscious machines… you are covering or disguising an ordinary product under some pompous title.

    I have a lot of experience in project proposal assessment, and I know how prone we are to polish things to make them shine and claim they are gold when they are tin.

  6. Shankar says:

    I agree with some of the above comments. The claims seem more like vaporware to me.

  7. Doru says:

    Peter: There is a lot of hype about software-less systems and massively parallel asynchronous computing, and the speculation is done around the fact that biological organisms seem to be functioning that way. Humans do have two major synchronous clocks: the nervous system (alpha waves at 40-60 Hz) and the circulation system (heartbeat, 1-3 Hz).
    Lloyd: You brought up a very interesting remark. I would say that emotion is not only based on chemistry, but also on motion. In other words I would associate the nature of qualia with chemical flow rather than computation, which is something AI hasn’t addressed very clearly.
    Vicente: The AI community does have a legitimate concern: the fact that information technology may explode out of control (the singularity event).

  8. Vicente says:

    Doru: yes, the technological singularity could be worrying, as so many other threats lurking round the corner. And AI has many legitimate claims, actually I really admire what these smart guys can do, really cool.

    But I don’t see any reason at all to believe those systems are conscious. I am very open to any sort of ideas and beliefs, and can see interesting points in almost any well-presented approach to the mystery of our existence. But this particular issue of conscious AI I cannot take. And I believe there are some “ulterior motives” behind it.

  9. Paul Bello says:

    I’m almost scared to start typing for fear that I might not stop…

    Being heavily involved (and invested) in the “AGI” community and in the human-level AI enterprise more generally, I’d like to state for the record that none of you have anything to fear from the singularity. Most AI systems today can barely learn how to navigate a normal room populated with dynamic objects. For some pessimistic reason, I’m not under the impression that they’ll be learning how to perform surgery on themselves in case of a war with their human creators, nor do I suspect it matters that robots might “know” more than we do in the next few years. Dictionaries contain more facts than a typical human (which is why we use them), and are fairly innocuous artifacts. Sure, we could be worried about the fact that robots will be able to act in the world, but again, the only way they could do so is by having some sort of theory of action, which re-introduces the frame problem in all of its variegated forms.

    Few AI researchers have the philosophical training to care much about whether they are building conscious machines… most probably think Qualia is an internet startup somewhere in Silicon Valley. The few that do have the background are generally of the overly optimistic variety, having made claims about the nature of intelligence since the ’50s, none of which have really stood up to scrutiny.

    But even if all of these non-conceptual difficulties could be overcome, we’re still ultimately left with the problem of how to understand each other as social creatures, and how to impart such an ability to a machine. Theory of mind is an absolute quagmire of different ideas, ill-specified proposals, conflicting empirical data and wild speculation. When one of the above folks can tell me what an intention or belief is, and where he can find it on an fMRI scan, I’ll be somewhat more confident, but still massively skeptical. The recent discovery of mirror neurons and their (possible) relation to simulation-theoretic accounts of theory-of-mind have opened many new avenues of discussion, as has the recent discovery that infants seem to be sensitive to false beliefs much earlier than had been thought. The whole discussion is being constantly redefined as we have more sensitive measures, and better understanding of relevant clinical conditions. In any case, there’s no theory for these fellas to implement, nor is there much prospect for a grand unification on the near-term horizon. It’s not that I’m cynical about the possibility, but we all have to be realistic at some level. AI done right, with the “hard” problem of consciousness left aside, will take many more years of effort.

  10. Vicente says:

    Peter: I didn’t know whether to put this stuff here or in your interesting stuff section; I believe it could somehow fit in both:

    http://noosphere.princeton.edu/

    The thing is I am extremely reluctant to accept any possibility of electronic devices producing consciousness. On the site I refer you to (I am sure you already knew it), they do it the other way round: they consider consciousness interacting (half-duplex, not full-duplex) with electronic devices (causing a bias in random event generators), which for some reason I find more acceptable, surprising myself.

  11. Vicente says:

    Peter: sorry, regarding #10, interaction is not half duplex either, it is just “one way” as they present it.

  12. Peter says:

    Thanks, Vicente – interesting stuff anyway, wherever we put it. This sort of thing makes sense in the context of one of the theories that regard consciousness as an electromagnetic buzz – like those of JohnJoe McFadden or Susan Pockett; but personally I’m sceptical.

  13. Philip says:

    Vicente: Re: comment 5
    In an integrated circuit, the transistor acts as a switch. Certainly a switch is not conscious nor could ever be, but is it conceivable that some arrangement of switches, out of all the infinite possible arrangements of switches, could be conscious?

    Replace the word “switch” with “cell”, and yes, as sure as the sun rises in the east, we know that some arrangements of unconscious cells are conscious. Now replace “cell” with “some type of cell model simulated on a machine”, a machine which is nothing but an arrangement of switches, like a computer. Can an assembly of these models ever be conscious? Isn’t this the question at the heart of AI?

  14. Lloyd Rice says:

    Well said, Philip. I agree.

  15. Vicente says:

    Well Philip “… we know that some arrangements of unconscious cells are conscious…”

    Now add: some arrangements of unconscious cells, that we haven’t got a clue how they work, are conscious… and I agree.

    It happens that, on the contrary, we know perfectly well how the “arrangements” of transistors, logic gates, etc., work. And there is not one single reason that makes me think that they are conscious.

    We are so far from simulating a cell, any cell… programmes like “neuron” are high level models of the cell behaviour, not a physical simulation of the cell. Just to simulate a simple passive ion channel is hell.

    It seems to me more sensible to understand how the brain works, and then try to create “artificial brains”.

    Lloyd: I don’t want to get back to the: “can you explain how 2 is stored in the brain?” episode, please.

  16. Vicente says:

    Philip, for me the heart question of AI is whether a machine could ever fully replace a human. Whether a machine can fool you. If one day somebody could introduce you to a nice girl, you fall in love with her (she is so clever and tender and sensitive and emotional and knowledgeable…) and then three years later you find out she is a robot. That is AI: to create conscious-like beings. Whether they are really conscious or not is something absolutely irrelevant.

  17. Lloyd Rice says:

    Vicente: You set a very high bar for AI. Much higher than any Turing test I have heard of. That’s OK. You may certainly hold your view. May I describe mine without getting into the storage of “2”?

    I would argue that a system is legitimately AI if its intellectual and emotional makeup is more or less comparable to those of an average human. It will be very difficult to determine the real levels represented by that average, and my version of a successful AI system might well not pass a Turing test. But it will need at least intellect, emotions and foresight at more or less human levels and, yes, will claim to experience consciousness.

  18. Vicente says:

    Lloyd, could be. The thing with the Turing test is that it does not consider a lot of very important elements of human language, i.e. metalanguage.

    Consider this blog: we can exchange ideas, but there is something lacking. In human communication there is a lot of information in facial expression, voice intonation, body attitude. Believe me, I have had trouble in mail exchanges and in messenger chats because of this; the same sentence can be taken very differently depending on the metalanguage. For me this should be part of a real Turing test.

    Regarding emotions, it is quite different from purely intellectual transaction management, because emotions can be faked. Consider something like:

    IF INFORMED OF “acquaintance deceased” THEN “cry” (as a very simple case)

    I think it will only be when we interact with future generations of advance androids that we will realise all the implications.

  19. Vicente says:

    Philip: To simulate a system does not produce the real system’s output; it is just a simulation. To simulate a Nuclear Power Plant does not produce MegaWatts of energy; for the same reason, to simulate the brain does not necessarily produce consciousness.

  20. Vicente says:

    Sorry I meant MegaWatts of “power”.

  21. Lloyd Rice says:

    Vicente: In the realm of computer software, the word “emulation” is typically used when one machine runs software to “emulate” a different type or class of machine. In that sense, the output of an emulation is closer to reality than the output of a simulation. Obviously, hardware cannot be “emulated” in a comparable way. But this does not mean it is not possible to build a machine which produces the desired “real” outputs. Of course, that machine may include software elements, as would, for example, the controller for a nuclear power plant.

  22. Vicente says:

    Lloyd: I agree, I still remember running Sinclair Spectrum games on a 486 PC, and Unix consoles running on Windows… But what you refer to is just a translator: you create a Virtual Machine that has the appropriate interface with the real machine. I don’t think this is what Philip was referring to.

    So, if the brain is hardware (SW side of the brain is not that clear, we could discuss long about it, although I believe we have resident programmes working), and one of its outputs is consciousness, the fact that you emulate or simulate the brain does not guarantee that consciousness would emerge from the simulation…

  23. Lloyd Rice says:

    Doru: re your cmmt #7: I would disagree that AI has not addressed chemical effects. A number of AI projects have included various processes to evaluate global effects, such as emotions. It’s seldom clear just what all the variables are, what processes should be affected, and how that effect should be realized. But explorations are afoot. I would cite a number of projects at MIT and perhaps UMich. There’s no reason the effects cannot be coded as modifications to other processes, once the models have been proposed.

  24. Lloyd Rice says:

    Vicente: In a way, the issue over the definition of simulation vs. emulation is not the real question. I must assume from your cmmt #22 that you are considering any computational realization that would be functionally complete (philosophical issues acknowledged). What seems to me to be at issue, and a question you have raised on several occasions would be whether a “complete” computational realization could actually experience consciousness.

    I am currently reading Metzinger’s “Ego Tunnel”. His view of the wide compass of the world model, including hopes, fears, dreams, etc., as well as the shape of the room and the sounds I hear, gave me a new point of view to consider. Of course, I have already previously stated that I think consciousness would simply “appear” once the appropriate machinery was in place. The Metzinger view only strengthened that belief, because it would account for more of the characteristics that would be encompassed in the phenomenal view.

    So, in my mind, the only question is exactly what sort of computational model is required; workspace theory, etc.

  25. Vicente says:

    Lloyd: OK, then just one very simple question. Once all necessary elements are provided… WHERE exactly would consciousness simply “appear”?

  26. Lloyd Rice says:

    Re #3 and #4: Again, I watched the Sharkey video. I agree, Peter, that he pretty much discounts artificial thought in many forms. He seems to distinguish “intelligence”, “mind” and “autonomy”, and never specifically mentions “consciousness”. Talking about autonomy, he agrees that it is possible for a car to steer down a roadway, but does not see how it could possibly drive from one location to another. In general, it will pretty much all be a matter of imitation and outward appearance. Not too different from (early) Searle.

  27. Lloyd Rice says:

    Vicente: This is in reply to #25. I was inspired by Metzinger’s view (Ego Tunnel, this blog) that the “world model” must include not only the room I am in, the objects around me, and maybe my hands holding a book, my feet taking steps, perhaps a bit of pain from a recent cut, etc., but must include a representation of everything about me, my hopes, plans, fears, inhibitions, urges, everything. There must be models of all of these in addition to the actual emotional (physiological, neurological) responses that are the original sources of all of these issues. Once you have code to do all of that and turn it on, how could the program not be aware?

  28. Vicente says:

    Peter: regarding cmmt #12, I think the noosphere stuff really finds its place in the dualistic theories section, like Popper-Eccles theories. The point is that one of the weak aspects, among others, of dualistic theories, with an unknown “agent” playing the role of mind, and acting on the brain, is that they entail a violation of the conservation of energy principle, i.e. nothing accounts for the work exerted on the brain, therefore you have no closure for the Universe, too bad. A way to get away with this inconsistency is to claim that the action just causes a bias in probability distribution. eg: in a long series of coin tossing you get 75% heads, so you have caused an effect but preserving physics laws, each time you toss the coin the conservation of energy law is satisfied, irrespective of the result.

    So if you can prove that the mind is acting on these random event generators, causing a bias, you pave the way to say that “something” can act on the brain through the bias of random brain processes. This is what is hidden behind this experiment; it is more interesting than I thought.

  29. Vicente says:

    Lloyd: My question in #25 is WHERE is consciousness. Where is the place in space and time for consciousness? What are the coordinates for qualia, if the question makes sense? For example, if you are imagining a movie, can you decompose each of the images in your brain into “pixels” and assign to each pixel a coordinate in space and time?

    Once your system becomes conscious, WHERE in space will qualia appear?

    Suppose your “system” has a distributed architecture, with HW elements located miles away one from others, or they are close together, it doesn’t matter, WHERE does the “emerging” conscious experience take place?

  30. Lloyd Rice says:

    Vicente: One of the latest references for the idea that consciousness is an illusion is Susan Blackmore (her own blog, a TED talk and several notes on this blog). But if you listen closely, she does not really say the whole thing is illusory. What she really says is that just certain aspects of consciousness are not quite what we normally perceive them to be. The primary one of these is the sense that consciousness flows continuously from moment to moment. This, she says, is an illusion. In fact, consc. is generated in “frames”, not unlike a movie. This fits nicely, of course, with the global workspace theories, saying that a new “frame” would be created at each workspace cycle, perhaps at a “frame rate” of 20 to 40 Hz.

    “Where” does this happen? Well, there are a couple of issues there. First is: where in the brain is the computation done. The workspace people have some fairly specific hypotheses. I am not completely up on the latest theories, but I would put some faith in some of the areas such as the cingulate, the insula, for example.

    But the other part of that is more interesting. If you wire up a switch to a light and throw the switch, where does the light get “turned on”? In the bulb? Yes, that is where the light comes from. The switch? Yes, that’s where the current flow started. The whole circuit, globally? I think that’s a better answer. The “turning on” is an emergent property of the circuit in operation. Of course, the example is far too simple. But it gives a clue as to how I think of the second part of the answer. When a program starts running, you can pin down a location for each of the physical aspects of what happens, but it’s harder to say where the emergent properties are.

  31. Peter says:

    Vicente – yes, on reflection that makes sense.

  32. Lloyd Rice says:

    Vicente: I think one of the difficult issues here is to bridge the gap from “emergent properties” to the first person view. How could an emergent property have such a weird property that the mechanism supporting that emergence would be aware of itself. Well, of course, the first and simpler part of the answer is that you need to think through the implications of having computational mechanisms for all of those special results that I mentioned in #24 and #27 (and a bunch of other stuff like that). Those are exactly the things that you need to have computed if you’re going to be able to sense yourself, to have a “self”.

    So what happens if you compute all those things? The computation becomes aware because it has built all of these “facts” about itself and the world and where it is and what it’s doing in the world.

    I know that answer will not be acceptable to a lot of philosophers. And some days I struggle with it. But for me, it’s getting closer every day to making sense.

  33. Vicente says:

    Lloyd: I take the time part, with the frame rate figure, fine. For the spatial part, the answer is absolutely unsatisfactory to me. Well, I haven’t got any answer myself. I think it would be better to try to answer the question in the brain case, which we know is related to a conscious experience for sure. Regarding S. Blackmore’s illusion idea, I will refer to the statement written at the top of this web page.

    To be honest Lloyd, I think I am becoming more and more dualistic. I am beginning to think that the brain is just an interface between the physical world and “the other side”. I don’t see how phenomenological experience could fit at all in the material world, unless it is a completely different new kind of matter/energy. Of course, all the questions remain open, how does the interface work? what is the acting and observing mind? etc etc…

  34. Vicente says:

    Lloyd: “The “turning on” is an emergent property of the circuit in operation”

    “Turning on” is not a property. You could say that the circuit has two “states”, ON and OFF. Nothing is emerging.

  35. Lloyd Rice says:

    Yes, the circuit is now in the ON state. Why is that not a property of the circuit? If somebody asks you what the circuit is doing, the answer must be “It’s making the light work.” Maybe you want to call that a “function” or something else, rather than a property. That’s OK. I have no objection. I’m not really that much into pinning down the nomenclature, anyway.

    I have to admit that, during the last few months, I’ve gone through a “dualist” period, maybe not unlike where you are now. But then something inside screamed at me. It said “No! No! No! IMPOSSIBLE!” It’s got to be explainable materialistically.

    So back to the drawing board.

    Referring to “the top of the page”, I assume you’re talking about “who’s being fooled?” I do not claim that there is no “self”. There is a model of me in my head right now constructing a composite of letters appearing as I type, thinking what to say, etc. I repeat that this model is in addition to the circuits actually interpreting the video, sending motor commands to the keys, etc. The model also includes me, my image of me, my opinions of that image, etc., etc. Is this an infinite loop? No. In spite of Doug Hofstadter’s amazement at what happens when such a loop runs, it need not be physically infinite.

    And need I say TO ALL, thanks so much for all criticisms of my thoughts. I vow to use all replies as best I can to improve my ways of thinking about this.

  36. Lloyd Rice says:

    And, Peter, I SO MUCH appreciate your work in making this conversation possible.

  37. Peter says:

    It’s a pleasure.

  38. Lloyd Rice says:

    Imagine that you are a zombie. Put yourself in the place of a computer program performing some simple task. What you “know” at any moment is just what inputs are available to the program as it runs. I perhaps find this easier to do because I have been writing code for much of my life and when I am designing code, I tend to think in terms something like that.

    Now suppose you are a program that has routines to sense the environment you are in. “You” can also sense whatever you need to know about your own physical implementation to be able to take actions in the environment. Perhaps “you” are an automobile, a robot body, something like that. Your perception routines allow you to integrate successive views of the environment as needed to track events as they unfold around you.

    Consider, for example, Prof. Sharkey’s comments (ref “got to now” in Peter’s Lead in, also #3 and #26) about a car not being able to drive to a destination. He agrees that a computer could steer along a road. But then why not be able to do the rest, recognize signs, other cars in traffic, road conditions, etc. and respond accordingly? None of that is easy, but all these tasks are “just” applications of pattern recognition, goal management, etc.

    So far, none of this would seem to require awareness as we tend to think of it, as philosophers have portrayed it. As a chunk of software — computer code — you sense all these things and have the ability to use that information to act.

    But now, extend the scenario to ever more elaborate implementations. Keep in mind all of the “special” capabilities that have been added to your repertoire over the years. I refer to the lists of sensing capabilities outlined in #24 and #27. You have hopes, plans, fears, etc. The modeling system organizes all of this information such that it becomes available to your goal management functions. Memories of past actions are similarly available.

    Now I ask just what it means to sense all these things? I claim that the sensing of all these things is just awareness. I claim that you are not a zombie after all. That once you had the perceptual capabilities as described, by sensing these things, you were, by that very action, “aware” of them. And I do mean “aware” in exactly the same sense that the philosophers talk about consciousness.

    You may say that I’m taking a huge leap in that last paragraph. I say, No. If there was a leap, it was earlier. It was that you didn’t really follow me through the steps in the paragraph before that.

    There was really just one claim in the Metzinger book, stated in his introduction, that changed my thinking about these things. That claim was the idea of the elaborateness, the completeness, of the internal model. It is well known that models of your physical surroundings exist in the hippocampus. Models of where moving objects are in the space around you exist in the parietal areas, near the motor areas. But Metzinger claims that these are just the beginning. That, in fact, everything around you and about you is modeled. And that processing these models is exactly what it means to be aware of what is modeled.

    I “see” objects because in my brain those objects are modeled. I can distinguish different colors because years ago, when I first saw those colors, the visual model processors constructed and memorized distinctive patterns (of neural firings) that I could recall later to recognize those colors. The constructed models are what I am aware of; what the philosophers call qualia.

    Pain is an interesting case. In fact, it is probably pain more than any other percept that led me to a new line of thought, going beyond what is said, either in the global workspace theory, or in “Ego Tunnel”. My new idea is that each perceptual area can add new elements to the perceptual input in order to construct the most useful model. I have no problem with the idea of software synthesis. We have devices that synthesize all kinds of things, music, speech, video images, on and on. In fact, the new elements which need to be synthesized in order that we become aware of the modeled percepts are generally pretty simple, elementary “signals” that help fill out the form of the consciously perceived inputs.

    To get into the discussion of each perceptual area, and how each type of percept is augmented by synthesized components, would take much space, probably more than most readers here would want to follow.

    I could probably put together a book of the thoughts I have had on this topic just during the last 24 hours. In fact, much of it will probably appear here if the conversations here lead in those directions.

  39. Vicente says:

    Lloyd:

    ” I “see” objects because in my brain those objects are modeled. I can distinguish different colors because years ago, when I first saw those colors, the visual model processors constructed and memorized distinctive patterns (of neural firings) that I could recall later to recognize those colors. The constructed models are what I am aware of; what the philosophers call qualia”

    Not at all, in my view. You are making this come out of the blue. If anything, the constructed models would be related to the interpretation or understanding of qualia, which at the same time help to construct the model.

    That’s the thing: although the neural firing patterns are related to the visual experience, they have nothing to do with it in nature.

    All the sensing you assign to the program does not exist, unless you want to call “detectors” “senses”; they are not the same, given all the experience entailed behind them. And the models are only in the programmer’s mind.

    But here is a funny thought I am having. Consider for a moment that dualistic models could be right (just for a moment). Then the brain is a system that acts as an interface (I/F), with some additional autonomous modules (the ones that handle free will, instincts, etc.), between the “mind” and the material world.

    Now consider the Upanishads’ view, in which each brain/person is a window through which the light of the unique universal consciousness passes, both ways. The two models are compatible.

    So I was thinking that what you could construct is an AI system that is tuned to get connected to a mind. It does not produce consciousness, but it allows consciousness to connect to it. Like a brain. Unless, as was previously discussed, it all requires some special properties of biological nervous tissue.

    You would artificially open one of the Upanishads’ windows…

    Do you think that what I am saying is absolute nonsense?

  40. Lloyd Rice says:

    Vicente: “The models are only in the programmer’s mind.”

    No. They are data structures in the running software system. The job of the running program is to implement the model, which means to compute relationships among the parts of the model and construct as outputs whatever of those relationships the programmer intended.

    In the case of the GWT, each module would construct as output the model of the portion of the sensed world it is programmed to deal with. So a module designed to recognize a particular object would output a description of the object that is currently present in its inputs. It is the job of the workspace to combine all of the current module outputs into a unified world view of the moment. When that happens, the entity is aware of that world view.
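
    As a rough sketch of that module/workspace arrangement (the modules and their trigger strings are invented here, not taken from any real GWT implementation):

```python
# Toy global workspace: each specialist module reports what it currently
# detects in its own input; the workspace merges all current reports
# into one unified world view for the moment.
def vision_module(frame):
    # Outputs a description of the object currently present in its input.
    return {"object": "pedestrian"} if "person" in frame else {}

def sound_module(audio):
    return {"sound": "horn"} if "honk" in audio else {}

def global_workspace(module_outputs):
    world_view = {}
    for output in module_outputs:
        world_view.update(output)  # unify all current module reports
    return world_view

view = global_workspace([vision_module("person crossing"),
                         sound_module("loud honk")])
print(view)  # {'object': 'pedestrian', 'sound': 'horn'}
```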

    I’m sorry, but I just cannot get into the dualistic view of it. For me, I see no indications whatsoever that there are any external mechanisms capable of supporting any such computations. Or, in less mechanical terms, I see no evidence of any “powers” in the universe outside of brains capable of such things.

    Since I wrote #38, I have realized that I have many, many problems with understanding how the various software pieces of what I described might actually work. But, so far, I am not backing out of the idea.

    The core of what I am saying is that because the software is designed precisely to construct a view of the world, when it runs, the entity “sees” that world. I still think that can work.

  41. Lloyd Rice says:

    In the 3rd paragraph of #40, instead of “outside of brains”, I should have said “other than computational mechanisms, such as brains”. Freudian slip? Maybe. I will never in my lifetime master everything that goes on in this computer on my shoulders.

  42. Vicente says:

    Lloyd: I see your point, but for me the main problem in your reasoning is that you have endowed software with high attributes that I don’t think it has. When you say “SW is designed to construct a view of the world”, you are transferring to the machine a conceptual instance that only happens in the programmer’s mind. The letters you see on your screen are only in your mind. On the screen there are only pixels of different colours and brightness; colours are only in your mind too.

    The other point is that you conceive of brains as working like computers, and this is still to be proven.

    Of course, I am aware of all the pitfalls of dualistic models, it is simply that at this point I wouldn’t discard dualistic approaches as happily as I used to.

  43. Vicente says:

    Lloyd, as I see it: pain, hunger, thirst, the crying of a baby… all aspects related to survival are “designed” to force you to react, by making any alternative less important than solving the current situation. Since most physiological processes are not conscious, what mechanism does the body have to tell “you” that something is wrong, other than pain and upset? And the intensity of the feeling usually depends on the severity of the condition.

    Do you really see a computer feeling nausea because there is a peak in the power supply?

  44. Lloyd Rice says:

    Vicente: “Construct a view of the world”??

    I completely agree that letters and colors are in my mind and are not on the video display. But I would argue that they are just as much in the software data structures as they are in my mind. That is precisely the task of the video analysis routines. Pixels from the video image, as collected by a camera or whatever, are subjected to edge analysis, figure recognition, etc. The result is a data representation of letters, colors, etc., very much as is done in my occipital lobe. These results are passed to the object representation layers, which presumably are a part of the GWT module repertoire.
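
    The first stage of that pipeline, edge analysis over raw pixels, can be sketched in a few lines (a 1-D toy version, just to show the principle):

```python
# Edge detection on one row of pixel brightness values: an "edge" is a
# position where brightness jumps by more than a threshold -- the first
# step toward recovering letters and figures from raw pixels.
def find_edges(pixels, threshold=50):
    edges = []
    for i in range(1, len(pixels)):
        if abs(pixels[i] - pixels[i - 1]) > threshold:
            edges.append(i)
    return edges

# A dark stroke (a letter fragment, say) on a light background:
row = [200, 200, 30, 30, 30, 200, 200]
print(find_edges(row))  # -> [2, 5]
```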

    When I talk about brains as computers, I obviously am not saying they are “programmable” like this one I am now typing on. But I believe you said you have worked with FPGAs, so you know well that hardware can be programmed.

    All of these “emotional” conditions you list could be among the possible dangers that a computational entity would need to be warned about. So it seems reasonable that some equivalent of our amygdala might be included in the software design. In any case, whatever the list of alarm conditions, there will certainly need to be some echo of our pain mechanism in order to raise alarm levels and thus modify the behavior in necessary ways.

    If something like nausea is the best way to signal that a power supply problem is occurring, then so be it.
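
    A pain-like alarm channel of that sort might be sketched as a simple priority override (the condition names and severity scale here are invented):

```python
# A pain-like alarm mechanism: warnings carry a severity, and a severe
# enough alarm preempts whatever goal the entity was pursuing -- much as
# pain makes every alternative less important than the current problem.
def choose_behavior(current_goal, alarms):
    # alarms: list of (condition, severity) pairs, severity 0..10
    if alarms:
        condition, severity = max(alarms, key=lambda a: a[1])
        if severity >= 7:
            return "handle:" + condition  # override everything else
    return current_goal

print(choose_behavior("deliver_package", [("battery_overheat", 9)]))
# -> handle:battery_overheat
print(choose_behavior("deliver_package", [("low_washer_fluid", 2)]))
# -> deliver_package
```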

  45. Vicente says:

    Lloyd, it was Doru who had a lot of experience with FPGAs.

    You are right; actually, SW is a fast way of reconfiguring HW on the fly. It is only HW that exists in the end. A multi-purpose processor could somehow be seen as a huge FPGA that is continuously changing and readapting its configuration (programming) according to the programme instructions and data.

    “…I would argue that they are just as much in the software data structures as they are in my mind…”

    I think this is not true, for two main reasons: first, the software data structures are only in your mind too; in the computer there are only matter and energy distributions that YOU identify and can call data structures if you want to. Second, and most important, your mind includes qualia; the data structures don’t, as far as we know.

    If there is a problem with the power source, you can signal it by a red alarm light, or the word “nausea” blinking on the screen, but that has nothing to do with the inner, subjective, phenomenological feeling that humans experience when they “suffer” nausea. And the signalling would be there to inform YOU, the one that “cares” about the system. Or would the system be equipped with a self-care survival instinct? If the system detected that someone was going to pull the plug, would it “panic”? You could programme some protection or self-recovery mechanisms/routines.

    Imagine that you had to describe nausea to someone who has never felt it, almost impossible, as it happens with all qualia.

    So, I think you could probably design and build a zombie, but not more than that. Unless your zombie is equipped with a “mind transponder”…

  46. Lloyd Rice says:

    Vicente: Just where are data structures, really? If they were only in the programmer’s mind, a computer could never do anything. What would it mean if a face recognition program identified an escaped criminal at an airport, resulting in a gate automatically locking, which resulted in the person being sent to prison? The only thing humans did was the final transport from the airport retaining cell to prison (plus, of course, the judge’s decision, etc.). If you stick with your point of view on this, I will go back to the auto engine control unit. Computers fly airplanes, run medical equipment, on and on. They could do none of that if the data structures in their memory banks had no meaning.

    As for qualia, I do have more to say, and I will get back to you on that.

  47. Lloyd Rice says:

    More on the point of #46: In fact, this is the same answer as I gave above about the “data structures” on the video display. I agree that the screen has no letters or colors, only pixels sending out volleys of photons. But those photons can be captured by a camera, which can perform any of the recognition tasks I mentioned: feature detectors, edge detectors, object recognizers, etc. The proper software can then use that information to perform data-dependent actions. Do such data patterns seem empty to you? Do they not result in real-world actions?

  48. Vicente says:

    Lloyd: What I am trying to convey is that you are superimposing an intrinsic conceptual or abstract layer on an object that does not have it “per se”. It is only in the observer that concepts such as data structures exist; in this case the observer is also the creator. That is the meaning of a world construct: to superimpose concepts, ideas, models on nature. I am not saying that data patterns are empty to me; I am saying that data patterns only exist in me, and make sense to me. I admit that those data patterns have a material substrate (it can be silicon and charge in a RAM chip, or burned aluminium on a DVD, or symbols written with pencil on a sheet of paper), but the concept of it is only in the human mind.

    The computer does not perform a recognition; it performs an “action” (I would better say it supports a process) that YOU INTERPRET as a recognition. For convenience and language economy we say that the computer has performed a recognition.

  49. Lloyd Rice says:

    Vicente: I want to think this through and give you a more careful reply. But here is an immediate response. I am using a neural net package to locate patents of interest to the Acoustical Society. It is exactly what anybody would call a classifier. After the NN run, I need to read through the abstracts of about 800 patents each month to make the final decisions. Without the NN, I would need to read about 13000 abstracts each month. For me, this is a real difference that I use every month, not something I interpret in the abstract.
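
    For readers who have not used such a pre-filter, here is a crude stand-in: it scores abstracts by keyword overlap rather than with a trained neural net, purely to illustrate the workflow (the keywords and abstracts are made up):

```python
# A toy relevance filter in the spirit of the patent classifier described
# above: score each abstract against acoustics-related keywords and keep
# only those above a cutoff for human review.
KEYWORDS = {"acoustic", "ultrasound", "loudspeaker", "sonar"}

def score(abstract):
    # Count how many keywords appear among the abstract's words.
    return len(set(abstract.lower().split()) & KEYWORDS)

def prefilter(abstracts, cutoff=1):
    return [a for a in abstracts if score(a) >= cutoff]

abstracts = [
    "An improved loudspeaker diaphragm using acoustic damping",
    "A method for fastening bicycle spokes",
]
print(prefilter(abstracts))  # keeps only the loudspeaker patent
```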

  50. Lloyd Rice says:

    Vicente: I must admit that I really do not understand your comments in the first paragraph of #48. I have been responding all along to the idea, as I understood you, that data structures cannot have consequences. But I cannot believe that is what you are saying.

    I do agree that a pattern written on paper cannot, of itself, cause actions. But if you have any sort of reader, photocell, etc. (not just my eyeballs) that detects that written pattern, then it can be the cause of any other action.

  51. Vicente says:

    Lloyd: There is definitely a miscommunication between us. I really don’t find the wording to get my ideas across.

    Of course data structures have consequences, or contribute to effects/results, but that has nothing to do with the discussion. The Haiti earthquake has definitely had consequences, but you don’t assign to earthquakes the properties you claim for data structures and SW. Everything is a continuous flow of causes and effects (consequences).

    The NN is just a tool to help you do a task faster. But the NN is not aware of whether it is filtering patents or newspaper headlines. It is you who can check the results.

    If you use a sieve to sift cereal you are doing the same thing, but using a mechanical method. I am sure you don’t think the sieve is conscious or develops world constructs. But if you were using an artificial vision system connected to a robot to sift and select the grain, then you would completely change your view of the process.

  52. Vicente says:

    Lloyd, another idea you could consider. Once you raise the issue of consequences, you have to consider the related issue of responsibility.

    If the results or consequences of the SW + data structures are bad, who is to blame? Who is responsible? Or if they are good, who would you congratulate? The system? No, the creator of the system. If a disaster is caused, who is liable? The manufacturer, the retailer… anybody but the system.

    When talking about responsibility and blame, I am on purpose ignoring or disregarding the free will issue, which is very important for ethics, but is a different topic in this case.

  53. Lloyd Rice says:

    OK, Vicente: Perhaps we got off on a side track. In #42, you said (paraphrasing slightly) “the letters are not on the screen”. I took this, not as a discussion of the level at which the data patterns are to be interpreted, but whether any interpretation at all could be done by any mechanism other than the brain.

    I still believe we have some terminology issues to clear up there before we can get back to the real question; that is — what are qualia.

    We need to be very clear about what the “model” is, what “data” it includes, and how that data is to be understood with respect to other parts of the conscious mechanism. Only then can we begin to calmly discuss what it means to be conscious.

    I do have some specific ideas about how consciousness arises in such a mechanism, but they are somewhat controversial and will require a clear mutual understanding of the contributing pieces.

  54. Lloyd Rice says:

    I do appreciate that you raise the issues of responsibility and free will and, in the same breath, agree to not get into those questions here.

  55. Lloyd Rice says:

    It is my view that one could begin with a mechanism which has no consciousness and incrementally add functionality. If one were to add certain mechanisms, at some point the entity would acquire consciousness. I would be very interested in pursuing, with Vicente or anyone else, a discussion of the details of such mechanisms. I admit that I do not know all the details involved in the crucial parts, but I do have some ideas. As has been noted here, my discussions do tend to be a bit technical. Any feedback on this would be welcome at any point.

  56. Vicente says:

    Lloyd: it looks very interesting. My view is that at this moment the only piece of matter that we know is related to a conscious experience is a brain. Don’t you think it is sensible to first find out what it is in the brain that is responsible for consciousness, and then see if it can be replicated elsewhere?

  57. Vicente says:

    Thinking of brains: take the case of life. It is a more prosaic subject than consciousness; it is easier to define, describe, measure, analyse, etc. Still, nobody has been able to create life in a lab (that I know of), not even a simple virus or a prion; I mean from scratch, not mutations of already existing ones. And the origin of life remains a nice conundrum…

    I admire the goal, but I suggest that a bit of humility, and setting intermediate milestones, could help.

  58. Lloyd Rice says:

    Vicente: No, I disagree about starting with the brain. I believe that the main problem with understanding the brain is the immense difficulty of observing the pieces in action. But the problem with understanding consciousness is in knowing how the pieces work together. When we understand the pieces, I think it is quite plausible that we can understand how large collections of those pieces work together.

    It is much the same problem with life. We believe we understand the processes involved, but we do not know how all of the natural pieces work. Why not start with pieces that we do understand?

    As a starting point, I would propose to discuss the navigating car of Dr. Sharkey’s talk (and #3, #26, #38). We will all agree it is a zombie (not conscious), but it has many interesting pieces to be discussed and, I will eventually argue, it is not all that far from having many of the pieces necessary for consciousness. I would begin by assuming some familiarity with the DARPA Urban Grand Challenge (2007).

    My proposal would be to discuss the various pieces until we understand (at least in theory) what those pieces can and cannot do and what additional pieces are needed to do more.

    I think this blog page is so interesting exactly because it deals with AI rather than brains.

  59. Shandra says:

    Thank you for the great story.

  60. Vicente says:

    OK Lloyd: Another point is that this approach is also a means to understand the brain, a cross-fertilizing scheme.

    One thing I don’t have clear: can the autonomous car rely on external navigation systems like GPS or roadside radio signals/beacons, or are only self-contained navigation/positioning systems allowed, i.e. inertial navigation?

    Car manufacturers’ experience with artificial vision is very discouraging: meteorological conditions, or a simple tree branch in front of a sign or a traffic light, spoil the whole thing.

  61. Lloyd Rice says:

    Vicente: I absolutely agree that if we do figure out some interesting stuff, it could very well also apply to the brain.

    As for the details of the car’s capabilities, it is not really an issue for me exactly what powers are available. Each such sensor function will simply have its own pathway to the global integration. Bats have sonar, snakes have IR, etc. It does not matter.

    And I agree that the real-life problems are huge. I believe we could keep the conversation on a theoretical level and ignore most of the real problems.

    Are you familiar with the video coverage of the Urban Challenge, maybe it was on Discovery Channel?

  62. Lloyd Rice says:

    I just checked through my DVDs and the only thing I saved from the Science Channel coverage of the 2007 DARPA competition was a 44 minute segment of the final results. It had a lot of cheers and groans and frantic cell calls, but not much of the tech details I remembered from other segments. So I’ll wing it.

    What I do recall is that all of the vehicles had a fairly elaborate system for building an internal map of the surroundings. For one car, it was stated that the map got updated 10 times per sec. The video concentrated on the immense difficulty of getting such a system to work right. But they did not have 300 million years of evolution working for them. The task is to find a representation that can be built up by contributions from all kinds of sensors and is also useful to the next step, which is a strategy planner. Getting that software to work as intended was at least as big a job as getting the sensors to work. But, as I said, those are “just” tech details. We can move on.
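
    The map-building step can be caricatured as an occupancy grid rebuilt from all sensor reports each tick (a toy version; the real systems were vastly more elaborate, and these sensor names are just placeholders):

```python
# A toy internal map in the spirit of the Urban Challenge vehicles: a
# small occupancy grid rebuilt each update tick from the reports of
# every sensor, giving one shared representation for the planner.
def update_map(grid_size, sensor_reports):
    # sensor_reports: for each sensor, a list of (x, y) cells it
    # currently believes are occupied.
    grid = [[0] * grid_size for _ in range(grid_size)]
    for report in sensor_reports:
        for x, y in report:
            grid[y][x] = 1  # any sensor seeing an obstacle marks the cell
    return grid

lidar_hits = [(1, 0), (1, 1)]
camera_hits = [(1, 1), (2, 2)]
grid = update_map(3, [lidar_hits, camera_hits])
print(grid)  # [[0, 1, 0], [0, 1, 0], [0, 0, 1]]
```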

    They used a variety of strategy planning systems. I am most familiar with the Carnegie Mellon ACT package. But whatever the system, it seems well within reason that such a system would eventually be capable of driving the car, following the course, making decisions about when to go and when to stop, and so on.

    For me, an interesting aspect of the road map was the extent that a vehicle also had to include some knowledge of itself, where the fenders are, how much it could accelerate, etc. There were many problems of integrating that information with the road map, but again, those are details.

  63. Lloyd Rice says:

    Dana Ballard has argued (“Intro to Natural Computation”, 1997) that most visual object coordinate computations are simplified if all the coordinates are converted to a space with the origin at the visual focal point. For stereo vision, this would presumably be a point on the mid-sagittal plane halfway between the lenses. For a car with cameras on the roof, on the bumpers, lidar somewhere, various IR sensors, etc., it is not so clear what this means, but presumably it would still be of some advantage to convert everything to a coordinate system centered somewhere within the vehicle.
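
    The conversion itself is simple in principle: translate by the vehicle’s position, then rotate by minus its heading. A minimal 2-D sketch (not taken from any actual entry):

```python
import math

# Convert a world-frame point into a vehicle-centered frame: subtract
# the vehicle's position, then rotate by the negative of its heading,
# so every sensor reading shares one egocentric coordinate system.
def to_vehicle_frame(point, vehicle_pos, heading_rad):
    dx = point[0] - vehicle_pos[0]
    dy = point[1] - vehicle_pos[1]
    c, s = math.cos(-heading_rad), math.sin(-heading_rad)
    return (dx * c - dy * s, dx * s + dy * c)

# Vehicle at (10, 5) facing +x (heading 0): a point 3 m directly ahead
# maps to x = 3, y = 0 in the vehicle's own frame.
print(to_vehicle_frame((13, 5), (10, 5), 0.0))
```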

    In the case of the Urban Challenge entries, I have seen no evidence of this. Most seem to have used a map view, i.e. an eagle’s-eye view. Perhaps they are missing out on some simplifications. In any case, it seems obvious that a common coordinate system would be a great advantage.

    As for the fate of the Urban Challenge cars, I just heard on this morning’s news that Stanford is planning to have their entry drive to the top of Pikes Peak and back.

  64. Lloyd Rice says:

    With regard to how objects are represented, the fact that you can imagine rotating an object in space seems to make it clear that some sort of abstract coordinate system is available.

    I’m sure that Marr, Koch, and many others have made this point.

  65. Dylan Ford says:

    Could quantum mechanics/physics (whichever) be used to create a mind in a computer, since it’s the system of infinite possibilities?
