Picture: plane. I see via MLU that Robert Sloan at the University of Illinois at Chicago has been given half a million dollars for a three-year project on common sense. Alas, the press release gives few details, but Sloan describes the goal of common sense as “the Holy Grail of artificial intelligence research”.

I think he’s right. There is a fundamental problem here that shows itself in several different forms. One is understanding: computers don’t really understand anything, and since translation, for example, requires understanding, they’ve never been very good at it. They can swap a French word for an English word, but without some understanding of what the original sentence was conveying, this mechanical substitution doesn’t work very well. Another outcrop of the same issue is the frame problem: computer programs need explicit data about their surroundings, but updating this data proves to be an unmanageably large problem, because the implications of every new piece of data are potentially infinite. Every time something changes, the program has to check the implications for every other piece of data it is holding; it needs to check the ones that are the same just as much as those that have changed, and the task rapidly mushrooms out of control. Somehow, humans get round this: they seem to be able to pick out the relevant items from a huge background of facts immediately, without having to run through everything.
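A toy sketch makes the blow-up concrete (the facts and counts below are invented for illustration): if every new fact forces a consistency check against everything already stored, the work grows with the square of the knowledge base.

```python
# Toy model of the naive frame-problem cost: every new fact forces a
# consistency check against every fact already stored. The "facts"
# are invented placeholders.

def add_fact(kb, fact, counter):
    for existing in kb:   # re-check every stored fact for conflicts
        counter[0] += 1
    kb.add(fact)

kb, counter = set(), [0]
for i in range(100):
    add_fact(kb, f"fact-{i}", counter)

print(counter[0])  # → 4950, i.e. quadratic growth in the size of the base
```

A hundred facts already cost nearly five thousand checks; a lifetime's worth of facts makes the naive strategy hopeless, which is exactly the problem humans somehow avoid.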

In formulating the frame problem originally back in the 1960s, John McCarthy speculated that the solution might lie in non-monotonic logics; that is, systems that don’t require everything to be simply true or false, as old-fashioned logical calculus does. Systems based on rigid propositional/predicate calculus needed to check everything in their database every time something changed in order to ensure there were no contradictions, since a contradiction is fatal in these formalisations. On the whole, McCarthy’s prediction has been borne out in that research since then has tended towards the use of Bayesian methods, which can tolerate contradictions and which can give propositions degrees of belief rather than simply holding them true or false. As well as providing practical solutions to frame-problem issues, this seems intuitively much more like the way a human mind works.
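As a minimal sketch of what a degree of belief amounts to in practice (the probabilities below are invented for illustration), Bayes' rule revises a prior in the light of evidence instead of flipping a proposition between true and false:

```python
def bayes_update(prior, likelihood, likelihood_if_false):
    """Return P(H | E) from P(H), P(E | H) and P(E | not-H) via Bayes' rule."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Start 50/50 on a hypothesis, then observe evidence four times
# likelier if the hypothesis is true than if it is false.
belief = bayes_update(prior=0.5, likelihood=0.8, likelihood_if_false=0.2)
print(belief)  # → 0.8
```

New, even conflicting, evidence simply produces another update rather than a fatal contradiction, which is the tolerance the paragraph describes.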

Sloan, as I understand it, is very much in this tradition; his earlier published work deals with sophisticated techniques for the manipulation of Horn knowledge bases. I’m afraid I frankly have only a vague idea of what that means, but I imagine it is a pretty good clue to the direction of the new project. Interestingly, the press release suggests the team will be looking at CYC and other long-established projects. These older projects tended to focus on the accumulation of a gigantic database of background knowledge about the world, in the possibly naive belief that once you had enough background information, the thing would start to work. I suppose the combination of unbelievably large databases of common sense knowledge with sophisticated techniques for manipulating and updating knowledge might just be exciting. If you were a cyberpunk fan and unreasonably optimistic, you might think that something like the meeting of Neuromancer and Wintermute was quietly happening.
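For the curious: a Horn clause has at most one positive literal ("if A and B then C"), which is what makes inference over Horn knowledge bases tractable. The classic forward-chaining procedure can be sketched in a few lines (the rules below are invented examples, not from Sloan's work):

```python
def forward_chain(facts, rules):
    """Derive every consequence of propositional Horn rules (body -> head)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in facts and body <= facts:
                facts.add(head)
                changed = True
    return facts

# Invented example rules: "if it rains, things get wet",
# "if things are wet and cold, they get icy".
rules = [({"rains"}, "wet"),
         ({"wet", "cold"}, "icy")]
print(forward_chain({"rains", "cold"}, rules))  # derives "wet", then "icy"
```

Restricting rules to Horn form is exactly the kind of trade-off at issue here: the loop above always terminates quickly, at the cost of expressive power.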

Let’s not get over-excited, though, because of course the whole thing is completely wrong. We may be getting really good at manipulating knowledge bases, but that isn’t what the human brain does at all. Or does it? Well, on the one hand, manipulating knowledge bases is all we’ve got: it may not work all that well, but for the time being it’s pretty much the only game in town – and it’s getting better. On the other hand, intuitively it just doesn’t seem likely that that’s what brains do: it’s more as if they used some entirely unknown technique of inference which we just haven’t grasped yet. Horn knowledge bases may be good, but are they really any more like natural brain functions than Aristotelian syllogisms?

Maybe, maybe not: perhaps it doesn’t matter. I mentioned the comparable issue of translation. Nobody supposes we are anywhere near doing translation by computation in the way the human brain does it, yet the available programs are getting noticeably better. There will always be some level of error in computer translation, but there is no theoretical limit to how far it can be reduced, and at some point it ceases to matter: after all, even human translators get things wrong.

What if the same were true for knowledge management? We could have AI that worked to all intents and purposes as well as the human brain, yet worked in a completely different way. There has long been a school of thought that says this doesn’t matter: we never learnt to fly the way birds do, but we learnt how to fly. Maybe the only way to artificial consciousness in the end will be the cognitive equivalent of a plane. Is that so bad?

If the half-million dollars is well spent, we could be a little closer to finding out…


  1. Paul Bello says:

    oh bother.

    Please don’t drink the Bayesian kool-aid. I wouldn’t go so far as to say that Bayesian networks handle contradictions. All they do is combine evidence for an observed event into a posterior probability. Probabilities and truth conditions (i.e. contradiction) have nothing to do with each other. Also, Bayesian networks don’t provide any computationally feasible solution to the frame problem. Frame axioms deal with change, and change implies dealing with time. There are so-called Dynamic Bayesian Networks, which require you to take a static network structure and replicate it via an “unfolding” process for every timestep involved/observed, but clearly this is massive on even a liberal estimate of the number of fluents (e.g. relations/predicates that can take on different values at different times) involved in the human conceptual repertoire. Since each of these fluents is most probably involved in hundreds, if not thousands of bits of commonsense knowledge, you can imagine my lack of optimism. Worse than this, Bayesian networks are purely propositional, and incapable of considering new/inferred propositions mid-computation. Far from cognitively plausible.
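    The scale of that unfolding is easy to put in rough numbers (all the figures below are illustrative assumptions, not measurements of any real system):

```python
# Rough size of an unrolled Dynamic Bayesian Network: the static
# structure is replicated once per timestep. All figures here are
# illustrative assumptions, not estimates from a real system.

fluents = 10_000          # predicates that can change over time
rules_per_fluent = 100    # bits of commonsense knowledge each touches
timesteps = 1_000         # length of the episode being modelled

nodes = fluents * timesteps
edges = fluents * rules_per_fluent * timesteps
print(nodes, edges)  # → 10000000 1000000000
```

Even these modest guesses give a billion edges for a thousand timesteps, which is the lack of optimism being expressed.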

  2. Lloyd Rice says:

    “Entirely unknown technique” ?? I don’t think the situation is quite so severe. I believe the reason that translation is getting better is that we are learning more and more about what really needs to be done — and learning better ways to do it. Of course, the hardware is not ideal for the task. The brain is superb at making connections on many hierarchical levels to a wide variety of possibly related objects, events, and conditions. Computers are not good at that, but I think we have a fair idea of what the task is. To deal with the frame problem, you don’t need to recall everything, just most of what is most closely relevant.

  3. Peter says:

    Fair points.

    I wouldn’t go so far as to say that Bayesian networks handle contradictions.

    No, but surely at least they cope better than old-fashioned systems based on pure propositional/predicate calculus? I try to keep an open mind – but the kool-aid is still pretty far from my lips, honest!

    the reason that translation is getting better is that we are learning more and more about what really needs to be done

    Well, maybe: but it doesn’t look like that to me – it looks as if brute force solutions are getting better (and admittedly a degree more sophisticated, but still fairly brutish). If I understand it correctly (far from certain), Google’s fairly successful translation facilities, for example, are based simply on matching massive quantities of parallel text.
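    As a caricature of what such matching looks like (the phrase table below is invented; real systems learn such tables statistically from huge aligned corpora, and the details of Google's system are not public):

```python
# Caricature of phrase-based statistical translation: look up the
# longest matching phrase in a table learned from parallel text.
# This tiny table is invented for illustration.

phrase_table = {
    ("le", "chat"): "the cat",
    ("est",): "is",
    ("sur", "la", "table"): "on the table",
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        # try the longest phrase first, then shorter ones
        for n in range(len(words) - i, 0, -1):
            phrase = tuple(words[i:i + n])
            if phrase in phrase_table:
                out.append(phrase_table[phrase])
                i += n
                break
        else:
            out.append(words[i])  # unknown word: pass through unchanged
            i += 1
    return " ".join(out)

print(translate("le chat est sur la table".split()))
# → the cat is on the table
```

No understanding anywhere in that loop, which is the "brutish" point: quality improves simply as the table grows.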

  4. Paul Bello says:

    The issue is that Bayes Nets aren’t as representationally expressive as old-fashioned predicate calculus systems (not that these are some sort of panacea), and being able to represent serious knowledge (e.g. as found in natural language utterances) requires formalisms even more expressive than FOPC. Recent developments in AI seek to combine some of the expressivity of the latter with the former’s ability to deal with uncertainty. Interesting new stuff to keep your eyes open for.

    Re: google translation– spot on, Peter. Just massive statistics.

  5. Lloyd Rice says:

    But, Peter, what does the brain do other than “matching massive quantities of” the relevant information? It’s good at that; our computers are not. But the process is similar. I’m not saying that we know all about all of the needed algorithms, but, as I see it, that lack is not the major stumbling block. Whether the details of Bayes techniques or FOPC or other math are exactly right or not is not a major issue. I don’t believe that evolution has paid much heed to such details. I do not denigrate the efforts to understand the details of such systems, but claims of their appropriateness to solve or not solve the problems of perception and understanding miss the mark.

  6. Paul Bello says:

    From a cognitive modeling/AI perspective, perception and understanding are not so much problems to be solved as they are data to be fit. I don’t know what it means to “solve” perception. Since it *is* computers that we are talking about here, and not evolved organisms, all we really can do is talk about the breadth of data accounted for by particular representational and algorithmic choices. Of course, there are plenty of other considerations, such as parsimony, but all else equal, we’re having a discussion on how the mind represents and reasons about both things out in the world, and things inside its “head” (e.g. mental states). All that matters are the details. I’m not so optimistic as you about the math just “taking care of itself” in time. I think by studying behavior, we get a sense for what the math is able and not able to capture under an appropriate set of constraints (i.e. generating appropriate cognitions and behaviors on roughly human-scale time, etc.)

    With regards to matching massive data — you have an interesting adjective in there: “relevant.” Computers are awfully good at matching, and awfully bad at computing relevance. In natural language understanding, normally what’s relevant has something to do with the beliefs, desires and intentions of the speaker/author of the language. Developing intelligent systems capable of deeply considering these remains a difficult challenge.

  7. Lloyd Rice says:

    Paul: I agree.

    To me, it seems that relevance is mostly a question of finding common links across a wide variety of realms. You have made a good start at listing the important ones. In the end, it does seem to me like a pattern matching process, only applied to a wider variety of data than our artificial systems typically handle. That is why I say the process seems similar.

    Perhaps I don’t make quite the same distinctions you do between living and nonliving computational things.

  8. Doru says:

    As a Penrose/Hameroff believer, I do think that speculating on deriving AI from computational performance (quantum or not) is irrelevant.
    Discrete coherent states encoded in the dendritic microtubules do not give the brain computing power, but rather the ability to model a complex space-time reality that matches pretty well the complexity of the space-time reality outside itself.
    Computers will always be sequential in nature and never be able to run without a clock (without timing). They definitely enhance our intelligence and computational power, but they will never have a meaning and a purpose outside of their programmer.
    One of my reasons for thinking that computationalism is fairly useless for AI is that computers use read/write (RAM) memory to simulate any predictions, which means that storage is used temporarily for algorithmic sequences and calculations. I just couldn’t find anything in brain studies that suggests such an operation. Brains cannot change the content of their memories. It’s all read-only, written only once, like in a time-lined sequence (a movie). It’s all just learning and pre-wired decisions encoded in massively parallel logic: 100,000,000,000,000 (10,000 per neuron) combinatorial encodings encapsulated in a few ounces of grey matter.

  9. Paul Bello says:

    It’s a presumption to think that computational approaches to *modeling* mental function need anything at all to do with neurons, brains, or neural structure. Clearly, nothing in a computer is reminiscent of a biological neuron; on the other hand, we’ve already built computer programs that simulate human performance (including when and how they make errors) on a wide variety of tasks.

    As a half-baked armchair philosopher, I’m interested in how the mind actually works, neurons and all — as a computer/cognitive scientist, I’m interested in whether function can be reproduced, regardless of whether it’s done with a neurally-inspired architecture or not.

  10. John says:

    “Somehow, humans get round this: they seem to be able to pick out the relevant items from a huge background of facts immediately, without having to run through everything.”

    Hmm, looks like a Cartesian Theatre is being used by human beings. Have you looked at anything lately? Our experience is a whole geometrical form available simultaneously. (See Simultaneity – the key to understanding mind?).

    However, even though digital computers cannot contain a Cartesian Theatre I can see no reason why computers should not be able to simulate our mental modelling of events to give them some degree of common sense. After all, it is only necessary to model the size and associations of a letter to predict that it goes through a letter box and not through the keyhole.

    The Wikibook on consciousness has an interesting section on this problem.

  11. Christophe Menant says:

    True, we have learned to fly without having to do it the way birds do, but to do so we first got clear about what flying is (lifting force from air pressure on a wing).
    Things are quite different for common sense or understanding. They are about information-processing performances in animals and humans, serving their survival and well-being. The problem here is that we are not at all clear about what life or consciousness are. We do not understand them. They are the result of billions of years of evolution. This puts us in a delicate position when looking at what common sense and understanding are for them.
    There is a logical problem in looking at the performance of understanding in organisms that we do not understand. Perhaps we should spend more time (and money) on the nature of life and consciousness when looking at transferring their performances to computers.
    This should not stop us, of course, from investigating ALife or trying to integrate the notion of meaning in databases, which are promising areas of investigation (and perhaps they can bring some understanding about life and consciousness). But we should remain lucid about our lack of understanding of what we are trying to simulate.

  12. Vicente says:

    Yes, that is the point: if we propose to create an “artificial lake” nobody has a problem, but if we talk about creating “artificial life”… that is different. Lakes and living beings are both the result of “natural” evolutionary processes (in theory). Nevertheless, when we contrast artificial with natural concepts, it is not the same to talk about lakes or minerals as about life, and the cause is not just the technical difficulty of producing biological stuff in a lab. If we now tackle consciousness we have an additional layer of complexity, because there is not even a “system” to artificially replicate or copy; maybe just the effects of such a process could be copied, and at this point we get into the problem of qualia, zombies and so on. So let’s put as much effort into trying to understand what the mind is as into creating systems that behave like human beings with “common sense”.

    By the way, does anybody have a definition of “common sense”? I have found:

    – sound and prudent judgment based on a simple perception of the situation or facts.
    – Sound judgment not based on specialized knowledge; native good judgment. [Translation of Latin sēnsus commūnis, common feelings of…]

    Are these definitions good enough for the purpose of the study? I believe not.

  13. Paul Bello says:

    One potential qualification of this discussion thread ought to be made salient: nobody has really given a *reason* for wanting to do this kind of research. If we are working on a unified theory of intelligence to support the construction of autonomous systems (like unmanned vehicles, etc.), then issues with qualia and phenomenal consciousness may not need to be addressed. On the other hand, if we want to pursue a unified theory of intelligence to build a computational model of a medical patient to be used by neurologists-in-training, perhaps we need to think more deeply about how to incorporate notions of the subjective into the theory. Really, the problem needs to be scoped by our demand for applications.

  14. Christophe Menant says:

    The problem needs indeed to be scoped. Perhaps the current domain of existence of “common sense” can be a starting point, rather than the planned applications.
    Also, the horizon is wider, as other items could be called the “holy grail of AI research”: autonomy, cognition, consciousness, aboutness, awareness, interpretation, meaning, intentionality… These performances are not independent, and they address complex philosophical questions. Most of them apply to basic life as well as to humans.
    A possible path to understanding these performances could be to consider their existence in organisms first, and then in humans. An evolutionary approach. First look at what these items correspond to for non-human life, and then (and only then) look at what the human being adds or modifies. The sequence is important. Autonomy or common sense do not mean the same thing when applied to animals as when applied to humans capable of free will. And the latter relies on the former. So an understandable path for a logical approach could be to begin with basic life and then go through evolution up to humans. ALife first, and then AI.
    Such an approach has not been favored by twentieth-century dominant philosophies, which tended to consider Darwinian evolution theory irrelevant (http://www.amazon.fr/Philosophy-Darwinian-Legacy-Suzanne-Cunningham/dp/1878822616). But the twentieth century is over…

  15. Lloyd Rice says:

    Doru: Do you write mostly to be intellectually challenged, or mostly to promulgate your views?

  16. Doru says:

    Hi Lloyd,
    I recently discovered the benefits of both: challenging your intellect and sharing your own view on interesting subjects like the ones on this site. I wish there was more of it!

  17. Lloyd Rice says:

    Doru: OK. Thanks. My question is this. You say “Computers will always be sequential in nature and never be able to run without a clock”. That seems a rather narrow view of what a “computer” is. I agree that this limitation would apply to the typical von Neumann architecture by which most computers today are built, but it certainly would not apply to “computers” in general. Do you believe it is possible to build a computing machine that works on principles similar to the brain? I agree that the word “similar” makes my question into a bit of a trap, but let’s start there if we could.

  18. Lloyd Rice says:

    Doru (continued): I guess there are two main issues. The structure of memory and the quantum requirement. Perhaps the memory issue defaults to the quantum issue? What is your argument that the brain’s computing power depends on quantum mechanics in any way that could never be duplicated by a man-made device? (Note that almost all electronics is dependent in one way or another upon quantum mechanics, just because that seems to be the way the universe works. That doesn’t stop us from building things that work.)

  19. Doru says:

    Hi Lloyd,
    Ok, this may seem a little confusing, but I am sure it will evolve into a clearer opinion. My computer-engineering observation is that all computers are based on a von Neumann architecture. They all run a program counter that fetches instructions and data from volatile memory. The logic is rather small and very generic in the Arithmetic Logic Unit, and somewhat more specific in the instruction decoding and execution unit.
    The most fundamental structure in digital circuits is the Finite State Machine, which I argue is not a computer. It is basically a sequencer that has only two states: the current state and the next state.
    A state machine always determines the next state based on the current state, the inputs, and the logic.
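    A finite state machine of that kind can be sketched in a few lines (the states, inputs, and transition table below are invented for illustration):

```python
# A finite state machine: the next state is a pure function of the
# current state and the input. States, inputs, and the transition
# table are invented for illustration.

TRANSITIONS = {
    # (current_state, input) -> next_state
    ("idle", "signal"): "active",
    ("active", "signal"): "active",
    ("active", "quiet"): "idle",
    ("idle", "quiet"): "idle",
}

def step(state, inp):
    return TRANSITIONS[(state, inp)]

state = "idle"
for inp in ["signal", "signal", "quiet"]:
    state = step(state, inp)
print(state)  # → idle
```

There is no program counter or instruction fetch here, only a transition table, which is the sense in which a state machine is a sequencer rather than a computer.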
    It’s this structure that I found somewhat similar to that of a neuron. I actually have been playing a little with implementing neural networks using programmable circuits (FPGAs and CPLDs).
    So yes, it is possible, but with current technologies I can only design the functionality and complexity of something similar to a paramecium, which doesn’t have a brain but is able to find food, eat, find a mate, reproduce, etc.
    To come back to your question, I don’t think that brains are computing and factoring any large numbers (let’s say, more than 10). We don’t need to do that to function properly. Math is more like a language that we learn in order to communicate common sense about our observations of the outside world.
    My understanding from Hameroff’s papers is that our intelligence is derived from cognitive processes, not from computation. We are the owners of a vast network fabric of very small tubes that encapsulate patterns of molecules exhibiting quantum coherence and superposition at room temperature. Signals from the outside world propagate through these microtubules as dielectrically polarized waves.
    It’s as if these waveforms embed some unique signatures from the outside world that make them coherent with certain quantum patterns, causing the wave to collapse and resulting in a moment of consciousness.
    The problem with reproducing this kind of behavior technologically is that it’s impossible to interface with quantum states. The state machines we can build cannot collapse their states. We cannot detect those signals without destroying the embedded signature they carry.
    In other words:
    We will never be able to tap into the fundamental process of our consciousness. They didn’t call it “the hard problem” for nothing!

  20. Lloyd Rice says:

    Doru: I’m with you down to about the 5th or 6th paragraph. What is a cognitive process if not a computation? I certainly agree that memories are not stored in the brain in the form of binary bits, but very few programs actually use results in bits, even though bits are used at a lower level. For what the program is doing, it doesn’t matter whether it’s bits or quads or decimal chunks, etc. Personally, I believe brain memories are stored in the form of synapse presence, synapse strength, local neurotransmitter strength, and maybe another factor or two. But none of that matters at the next level up. Even whether a particular item is never erased does not matter. At the next level up, the relevant connection can be strengthened or weakened. The result is the same. Similarly, we may recognize patterns with some sort of network, maybe like what Jeff Hawkins talks about, maybe something else. But the function is not completely unlike what a computational neural net does. The similarity gets detected. The details don’t matter (much). And, sorry, I really can’t get into the quantum mumbo jumbo about microtubules. As far as I’ve heard, microtubules are the protein delivery system that keeps things running. I recently read a paper about the RNA structures that keep these deliveries moving.

  21. Lloyd Rice says:

    As for memory, some recent work tends to support the old “grandmother cell” theory, that is, one neuron fires per concept. I am skeptical. If anything, I would suspect a small group of neurons firing together per concept. Related concepts could be coded by the firing of different combinations of neurons within the same group. And by “firing”, I refer to a pulse series, maintained perhaps by feedback within the group as well as by the inputs that fired the concept to start with. You get a lot more concepts per neuron this way, even if nowhere near the logical maximum coding is used. My point in all of this is that neither the grandmother idea nor the group idea would have any need for additional memory-related logic within the cell. All of the necessary logic is outside the neuron.
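    The capacity difference between the two coding schemes is easy to quantify (the group sizes below are illustrative, not physiological estimates):

```python
from math import comb

# One "grandmother cell" per concept gives n concepts from n neurons;
# a code of "which k of n fire together" gives C(n, k) concepts.
# The sizes are illustrative, not physiological estimates.

n, k = 100, 10
grandmother_capacity = n
group_capacity = comb(n, k)
print(grandmother_capacity)  # → 100
print(group_capacity)        # → 17310309456440
```

Even a group of a hundred neurons firing ten at a time distinguishes over ten trillion patterns, which is the "lot more concepts per neuron" being claimed.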

  22. Doru says:

    Lloyd, I really appreciate your feedback here. You seem to have a lot of insights on the matters.
    My scientific mind is always reductionist, trying to simplify things, but that doesn’t always work when using the brain to understand how the brain understands.
    I think of “computation” as some sort of operation, calculus or processing, similar to what computers do. And the big debate is the question of whether computers will ever be conscious. So far my answer is that they could, but they won’t. It’s just not going to happen. Even with new architectures, analog computers, etc.
    Cognition means knowledge. And I think that how we know concepts and recognize things has something to do with motion, and with how those dielectrically polarized waves get propagated through the microtubules. I think of logic as decision, not as operation.
    Thoughts are “material things” (those molecules, proteins, whatever) and the way we are aware of them is when they move. It’s all motion creating emotion; the knowledge is in those myriad little tubes delivering our thoughts and rearranging them in different patterns.
    In other words, it seems to me that we are intelligent by being more of a mechanical machine than an electrical one.
    I like the “grandmother cell” idea. I see those firings as the way logic (decisions) is used for storing the impression. In the digital world it is like bringing a feedback (an output to the input) to a look-up table to create a latch (a storage element).
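    The latch-from-feedback idea can be sketched as a set/reset element (a software analogy of the digital circuit; the signal names follow the usual set/reset convention):

```python
# A storage element from logic plus feedback, as described: a
# set/reset latch whose output is fed back as an input. This is a
# software analogy of the digital circuit, not a timing-accurate model.

def sr_latch(state, set_, reset):
    """One settle step of a set/reset latch."""
    if set_ and not reset:
        return True      # set: output latches high
    if reset and not set_:
        return False     # reset: output latches low
    return state         # neither asserted: feedback holds the stored bit

q = False
q = sr_latch(q, set_=True, reset=False)   # write a 1
q = sr_latch(q, set_=False, reset=False)  # inputs removed: value held
print(q)  # → True
```

The third branch is the feedback: with both inputs idle, the stored bit simply persists, which is what turns plain logic into memory.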
    I have heard that you become conscious of something when around 10,000 of those firings are coherent with each other within 30–40 ms.

  23. Lloyd Rice says:

    In the sense of what computers do vs. what brains do, I can’t really see the distinction you are making between “decisions” and “operations”. Anything from a simple “if-then” statement to a neural net analysis produces a result that serves to make a decision. That decision result can be used to do anything, from executing another code statement to feeding more air through the carburetor to anything else that might be connected. In other words, decisions become operations. A neuron could fire or not, which could result in activity anywhere else, including frontal lobes for further contemplation or the striatum, leading to muscle activity. As you probably know from things I have said on these pages, I believe that consciousness is also a result of such firings and can be programmed as well, once we know how to do it.
