Forget AI…

[Picture: heraldic whale]

… it’s AGI now. I was interested to hear via Robots.net that Artificial General Intelligence had enjoyed a successful second conference recently.

In recent years there seems to have been a general trend in AI research towards narrower and perhaps more realistic sets of goals; towards achieving particular skills and designing particular modules tied to specific tasks rather than confronting the grand problem of consciousness itself. The proponents of AGI feel that this has gone so far that the terms ‘artificial intelligence’ and ‘AI’ no longer really designate the topic they’re interested in: the topic of real thinking machines. ‘An AI’ these days is more likely to refer to the bits of code which direct the hostile goons in a first-person shooter game than to anything with aspirations to real awareness, or even real intelligence.

The mention of ‘real intelligence’, of course, reminds us that plenty of other terms have been knocked out of shape over the years in this field. It is an old complaint from AI sceptics that roboteers keep grabbing items of psychological vocabulary and redefining them as something simpler and more computable. The claim that machines can learn, for example, remains controversial to some, who would insist that real learning involves understanding, while others don’t see how else you would describe the behaviour of a machine that gathers data and modifies its own behaviour as a result.

I think there is a kind of continuum here, from claims it seems hard to reject to those it seems bonkers to accept, rather like this…

Claim: machines add numbers. Objection: really the ‘numbers’ are a human interpretation of meaningless switching operations.
Claim: machines control factory machines. Objection: control implies foresight and intentions, whereas machines just follow a set of instructions.
Claim: machines play chess. Objection: playing a game involves expectations and social interaction, which machines don’t really have.
Claim: machines hold conversations. Objection: chat-bots merely reshuffle set phrases to give the impression of understanding.
Claim: machines react emotionally. Objection: there may be machines that display smiley faces or even operate in different ‘emotional’ modes, but none of that touches the real business of emotions.

Readers will probably find it easy to improve on this list, but you get the gist. Although there’s something in even the first objection, it seems pointless to me to deny that machines can do addition – and equally pointless to claim that any existing machine experiences emotions – although I don’t rule even that idea out of consideration forever.
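
To make the first claim and its objection concrete, here is a minimal sketch (purely illustrative, not drawn from any particular machine) of addition reduced to nothing but switching operations; whether combining bit patterns with AND, XOR and shifts counts as ‘adding numbers’ is exactly what the objection disputes.

```python
def add_bits(a: int, b: int) -> int:
    """Add two non-negative integers using only bitwise 'switching' operations."""
    while b:
        carry = a & b      # columns where both bits are 1
        a = a ^ b          # sum of each column, ignoring the carries
        b = carry << 1     # feed the carries into the next column
    return a

print(add_bits(2, 2))      # 4, produced without any 'numbers' in sight
```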

I think the most natural reaction is to conclude that in all such cases, but especially in the middling ones, there are two different senses – there’s playing chess and really playing chess. What annoys the sceptics is their perception that AIers have often stolen terms for the easy computable sense when the normal reading is the difficult one laden with understanding, intentionality and affect.

But is this phenomenon not simply an example of the redefinition of terms which science has always introduced? We no longer call whales fish, because biologists decided it made sense to make fish and mammals exclusive categories – although people had been calling whales fish on and off for a long time before that. Aren’t the sceptics on this like diehard whalefishers? Hey, they say, you claimed to be elucidating the nature of fish, but all you’ve done is make it easy for yourself by making the word apply just to piscine fish, the easy ones to deal with. The difficult problem of elucidating the deeper fishiness remains untouched!

The analogy is debatable, but it could be claimed that redefinitions of  ‘intelligence’ and ‘learning’ have actually helped to clarify important distinctions in broadly the way that excluding the whales helped with biological taxonomy. However, I think it’s hard to deny that there has also at times been a certain dilution going on. This kind of thing is not unique to consciousness – look what happened to ‘virtual reality’, which started out as quite a demanding concept, and was soon being used as a marketing term for any program with slight pretensions to 3D graphics.

Anyway, given all that background it would be understandable if the sceptical camp took some pleasure in the idea that the AI people have finally been hoist with their own petard, and that just as the sceptics, over the years, have been forced to talk about ‘real intelligence’ and ‘human-level awareness’, the robot builders now have to talk about ‘artificial general intelligence’.

But you can’t help warming to people who want to take on the big challenge. It was the bold advent of the original AI project which really brought consciousness back on to the agenda of all the other disciplines, and the challenge of computer thought which injected a new burst of creative energy into the philosophy of mind, to take just one example. I think even the sceptics might tacitly feel that things would be a little quiet without the ‘rude mechanicals’: if AGI means they’re back and spoiling for a fight, who could forbear to cheer?

23 thoughts on “Forget AI…”

  1. “It seems pointless to me to deny that machines do addition”
    Rather, the point is that machines do NOT ‘do addition’,
    any more than flight simulations ‘do flying’.
    This is more than a play on words.
    Machines can only simulate arithmetic, in that they process NUMERALS, not numbers. The latter require understanding, the former only some (arbitrary) physical representation.

    Just as long division on paper is only a distribution of ink,
    so is a quad-precision division algorithm only a spatio-temporal distribution of microchip activity. On paper it can use various alphabets and numeric bases, all representing the same division, just as there are thousands of different ways microchips simulate division.

    Whether you call it AI or AGI, it’s as fruitless as phlogiston or the philosopher’s stone, and nearly as antiquated by progress.

    No matter how many brain emulations or other computer gimmicks are proposed, HAL 9000 is never going to happen because it’s impossible in principle. Get over it.

    P.S.
    AI’s precious Turing Test is nothing but a recipe for a fraud, the same as a well-coached congenitally blind person on the phone conning you about his ‘visual experiences’. However great his success, he stays blind.

    Can we call it ‘The Turing Con’ from now on?
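
As an aside on comment 1’s numerals-versus-numbers point, the same quotient can be written in several different numeral systems: the marks differ from base to base, while the division they are taken to represent stays the same. A purely illustrative Python sketch:

```python
q = 666 // 3                 # one division, however we choose to write its result

print(format(q, 'd'))        # decimal:     222
print(format(q, 'b'))        # binary:      11011110
print(format(q, 'o'))        # octal:       336
print(format(q, 'x'))        # hexadecimal: de
```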

  2. Yes, Bill, I realise there is a valid sense in which computers don’t do addition. I don’t think it’s quite the same as flight simulators, though – the simulators don’t get us to Paris, whereas computers doing addition certainly get us to the answers (or do you think they’re only simulated ‘answers’, no good in real life?). Usage is always a controversial and partly personal matter, but it would seem pointless to me, at any rate, to quibble about the use of the word ‘addition’ in this context – as pointless as going round telling people that their pocket calculators don’t really calculate.

  3. Nice try, Peter, but you are dodging the question by demoting it to ‘quibbling’, which is hardly what you can justly call the vital distinction between numbers and numerals. Numerologists and people who believe in ‘666’ are similarly confused, when they forget that it was originally written ‘DCLXVI’.

    As for calculators, what do you do when one gives you a nonsense answer? You assume you pressed the wrong key and try again, because you know and the machine does not, since it’s only a simulation of mathematical thought, not the real thing. It’s not that the calculator’s results are ‘no good in real life’, it’s that calculators don’t do arithmetic, they just simulate it with a finite number of digits and a machine-characteristic error rate, which is usually ‘good in real life’.

    The complete argument is in ‘Immaterial Aspects of Thought’ at http://www.nd.edu/~afreddos/courses/43151/ross-immateriality.pdf

    Also see Mathematical Beauty and Naturalism
    http://www.acmsonline.org/Howell.pdf

    By the way, ‘usage’ is NOT personal, but interpersonal, and thus can’t be arbitrary.
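
Comment 3’s remark about calculators working with ‘a finite number of digits and a machine-characteristic error rate’ is easy to exhibit in ordinary binary floating point; a small illustrative sketch (not the commenter’s example):

```python
from decimal import Decimal

print(0.1 + 0.2 == 0.3)   # False: binary floating point cannot represent 0.1 exactly
print(0.1 + 0.2 - 0.3)    # ~5.55e-17, the characteristic rounding error
print(Decimal(0.1))       # the value the machine actually stores for '0.1'
```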

  4. Amen, Peter.

    As much as I’m in favor of incisive philosophical analysis, I’ll only be receptive to “it’s not *really* doing addition” when said philosopher gives me an operationalized definition of what it means to be (human-level) intelligent. Mind you, I don’t mean silly discussions of normative rationality. I mean (human-level) intelligent in all of its ugly, descriptive detail. One that is operationalized enough for a computer scientist to use.

    Very few folks (save your John Pollocks) in the human-level AI community would tell you that they are engaged in person-building, as Bill’s comment intimates. Most of them are not trained philosophers of mind, nor have they been exposed in any serious way to that sort of thinking. Most of them just want to build something. The exercise of building something is the only sure-fire way to falsify grandiose yet otherwise well-intentioned theories of the human cognitive architecture. The more people that try, the smaller the search space becomes. I, for one, am all for that.

  5. Bill: Humans don’t do addition, either. It’s all a sham. Numbers are not real. We just make them up. (see Lakoff and Nunez, “Where Mathematics Comes From”).

    On the other hand, the airliner doesn’t crash when they hit the autopilot. So numbers must work, somehow.

    Paul: I suspect that the community of person-builders may be growing more than you suspect. One of the latest conference topics is artificial personhood, by one name or another, such as “Cognitive Machines”. “Cognitive” is a word that gets thrown around a lot lately.

    Peter, in addition to GOFAI, AI, and AGI, there is also Synthetic Intelligence.

  6. P.S. The Wikipedia entry on “synthetic intelligence” has some interesting comments, while syntheticintelligence.com is pure computer software sales pitch. Such terms go down the tubes fast.

  7. Isolating smaller parts of the big problem and trying to solve them little by little should be a good approach. The hard core of the problem may remain untouched for the moment, but every tiny progress in the “peripheral” problems exposes it a bit more.
    Whether by really playing chess or simply by simulating it, certain machines can beat the best human chess players. If being able to play good chess was once a mark of true intelligence, as few people would have doubted less than a century ago, we now know that it is algorithmically reducible.
    Are we ever going to build a machine able to pass the Turing test thoroughly? I don’t know. But if we ever do, wouldn’t we know much more about the human mind than we do now, whether that machine really performs mind functions or only simulates them?
    So, folks of the AI community, please go on building things!

  8. Yes, golem-seekers, please keep ‘building things’, but can somebody draw a line in the sand as to how super-huge a computer must be to do ‘real’ AI? Once we have giga-petaflops and there’s still no HAL 9000, can we refrain from moving the goal posts yet again?

    (I was personally told by an ILLIAC IV project programmer that when those 256 processors were turned on in 1970, then two years away, ‘the human mind would have a new rival’. Pardon me while I keep on yawning at today’s inflated version of that notion, ‘The Singularity’.)

    On philosophical principle I’m making a manifestly falsifiable statement of AI’s absolute impossibility. I would concede philosophical defeat, however, if there were ever a robot like the one in the film ‘Short Circuit’. Can you specify, within even three orders of magnitude, how big a Turing machine your concession requires? No fair hooking up to the human brain or to live neurons!

  9. Paul: I had to reread your comment (4) a couple of times. First, I thought you were saying that people should not try to be person-builders. But that’s what Bill is saying. Then, I was thinking, no. It is that people should not try to be person-builders until (or unless) they know what they’re doing. But I doubt there will ever be a Senate bill to that effect. I worry that thousands of wanna-be person-builders will give it a shot and we’ll have a few Terminators running around. Yes, Bill. I certainly agree that Short Circuit was among the more fanciful. Hollywood types are not generally well-placed among the best person-builders.

    So then, Paul, I reread once again and I am now thinking, OK, those who try and fail help by adding to the accumulated knowledge of the task. You seem to be saying that the psychologists, or maybe the neurologists, or — Oh No! not the philosophers!! — actually have the best shot at pulling it off. Me, I’ll hold out for the engineers.

  10. In addition to Lloyd’s point that numbers are a human construct, and therefore even humans don’t do “real” addition, we should also note that even if we think a certain human (oneself presumably) does “really” do addition, Bill’s first comment still makes the leap that any human who produces the same output (say “4”) from the same input (say “2 + 2”) is “doing the addition” in the same way. Like it or not, we are constantly performing our own more rigorous versions of Turing tests: “He looks human, sounds human, and gives the right answer, so he must be doing the addition in the correct, i.e. human, way”. It is hard to see how we can have a meaningful definition of “real addition” that goes beyond the mere “appearance” of addition, so, as Peter says, it seems pointless to question the manner in which computers perform addition.

  11. Lloyd:
    I concur re: the engineers. In the good ol’ days of AI, the intelligence problem was seen as an engineering challenge, mostly because nobody had their minds around how hard it would ultimately be. On the other hand, guys like Minsky, Winston, McDermott etc. were intellectually broad in ways that most AI researchers these days couldn’t hope to be. We are hoping that a new generation of folks like this will crop up to take their place.

    As sub-areas of AI get explored, there is more to know, and ultimately researchers get stuck in local minima (e.g. they start thinking that building the best reinforcement learner/POMDP is somehow making real progress toward AI). In fact, this, in a nutshell, is the trouble. They have either been sucked toward useless normative characterization of algorithms and/or have become slaves to performance optimization — both failure modes of bad engineers when they are trying to build something that they ultimately don’t understand yet. Firstly, I think it’s somewhat presumptuous to assume that evolution provided us with a cognitive architecture that can be characterized normatively, and secondly it’s clear that in many contexts the output of said architecture is far beneath normative standards for rationality. So I for one am not even sure what these guys and gals are working on — but it sure ain’t human-like.

    The best analogy I can make concerns the Wright brothers, who clearly didn’t model microfeatures of birds in order to build a flying machine. Today’s “let’s model a brain neuron by neuron” seems to be precisely the kind of silliness that successful engineers like the Wright brothers managed to stay far away from. Neither did they watch birds fly and develop closed-form mathematical expressions for the patterns they saw so that they could publish a peer-reviewed article.

    Regarding person-building and AGI, all that I can say is that I’m *very* familiar with most of the faces in the AGI crowd, since I’m technically one of them. It’s sort of bimodal: there is a group of folks interested in transhumanism and the singularity, which is probably where you’ll find most of the kooky person-building talk, and then there is an older crowd of seasoned AI vets who have re-emerged to tackle the old problems, because talking about trying no longer makes you a pariah. Even though I’m young, I identify more with the latter group.

    Bill:
    I think you are rightly sceptical of folks who believe that increases in computing power will ultimately lead to systems that are more intelligent. More horsepower is clearly not going to be a proximal cause of such a state of affairs, but ultimately, large-scale data access will be required, and none of us are sure how much. If the history of AI tells us anything, it’s that more power doesn’t exorcise our conceptual demons. Look at Deep Blue. It beat Kasparov by examining billions upon billions of board configurations. So what? Most machines still can’t reason about simple physical interactions between objects in the way that six-month-old infants can. It’s a paradox. What this implies is that there are certain things that we still don’t have right. We haven’t figured out which domain-general reasoning mechanisms are at work — we haven’t adapted our current set of useful inference algorithms to capture the richness of inference that is going on even in little babies, and if there are many mechanisms at work to produce this inference, we haven’t adequately captured how they integrate and mutually constrain one another. These are the set of questions that some of the “human-level AI” crowd are trying to answer. I say God bless. Someone in the community has to take their heads out of their backsides and do this kind of stuff.

  12. Bill, I’m afraid one of your comments – number 3 – was temporarily detained by my anti-spam software – or ‘detained’ 😉 ?

    All I’ll say is that my position on all this is more complex (more confused, perhaps) than you seem to think (as much of this site will bear witness).

  13. I totally concur with Neil,

    As an AI researcher myself, I’m tired of this unfair comparison, where philosophers critique the mechanism computers use to perform operations when we have no idea how the human mechanisms work… I’d like to hear a single solid argument supporting the claim that “humans do the real arithmetic” instead of just simulating it…

  14. Peter: Could you edit the reference in my comment 9 to cite Paul’s earlier comment as (4) rather than (3), which is now Bill’s comment? Thanks.

    Bill: I now must address the two papers you cited in your (restored) comment 3. First, as for the Howell paper, I believe it does not actually contradict the Lakoff/Nunez book I referred to in comment 5. The difference, I believe, is that the book comes a bit closer to describing how such things might actually work in the human mind. The point, relevant to the present discussion, is that it should not be insurmountably difficult to describe how such metaphorical constructs could be computed. We first need to develop the engineering technology to be able to work with similarities rather than exactitudes. These efforts are just barely underway (give or take a decade). However “mysterious” these mathematical architectures might seem to be, it seems (at least to me) entirely plausible that they can indeed be explained and described computationally.

    As for the Ross paper, I claim that it is complete garbage. This is unfortunate because you apparently have some considerable emotional stock in its assertions. To be specific, the paper begins (p 136) by making a declaration without further evidence, then proceeds to build an argument around that declaration. Later (in section III) comes the proposition, “Suppose our initial declaration was false”. What follows, I claim, is pure pinhead dancing, pure philosophical gobble-de-gook. I believe the Lakoff/Nunez book does in fact set out procedures whereby humans can achieve usable mathematical results, even though the process is built upon finite metaphorical procedures which may turn out to be riddled with faults and inconsistencies. Whether or not the universe has a pure and precise inner core is not relevant to the point. What we have to work with is our own imprecise view of that universe and, somehow, we manage to make it work.

    The remaining issue then is whether there is anything “magical” about our human brains that allows us to invent and use such metaphorical structures. I believe your referenced papers do not address this question, except perhaps in one brief final musing in the Howell paper. In asking “How do we do it?”, I prefer to avoid the heaven-based answers.

    Has evolution achieved some sort of architecture that is not reachable by the application of our scientific and engineering procedures? There is an interesting story on this question: I think it may have come up in a UW computer science lecture (or maybe somewhere else?). Some students were studying a petri dish of neurons and methodically sketched out an equivalent electrical circuit. They took the circuit to an electronics engineer and asked for an explanation of how it worked. The engineer took one look at the seemingly haphazard goulash of op-amps, caps and resistors and asked, “Who designed this outrageous circuit?” “Evolution”, came the reply. “Well,” said the engineer, “I could not begin to analyze this garbage. Evolution does not follow logically coherent design rules.”

    That’s not to say that equivalent human-designed circuits are impossible, but only that following the evolution-designed example may not be a good route to success. This suggests immense hurdles to be overcome by those who would try to design computational intelligence by studying our brains. That significant effort will more likely need to be pursued by following the more traditional engineering methods we have so diligently developed. I believe that the evidence that calculators and autopilots do actually work is sufficient evidence that evolution has indeed achieved a workable result. How hard it will be for us to achieve an equivalent result in our computational machines remains to be seen.

  15. Math Mystery
    It seems to me that the underlying issue here is not whether brains apply some mysterious algorithm or faculty when they do addition, but why brains have a need to perform addition, or mathematics in general. I, for one, learned addition in grammar school by a process of rote memory which I still use to this day. Surely this is not fundamentally different from a purely mechanistic computer algorithm. The difference here is that humans have an inherent understanding of relative value, which results from the fact that brains have evolved to deal with causal relations imposed by the external world on their host organisms. Of course, computers and calculators do not have any such understandings. More importantly, no matter how abstractly we define any aspect of mathematics, including addition, the logical axiomatic sets that define such operations actually derive from very basic understandings of physical causality and the natural experiential logic common to ordinary brains. Quite simply, mathematics is a contrivance that has value for humans only because it facilitates or extends causal understanding. Ultimately, no matter how myopically we examine these abstract matters, we can never find a complete answer, because it is – in fact – physical analogy that does the defining here – thank you, Dr. Gödel.

  16. I do not think Marvin Minsky has that idea about AI. If you read his last book, which is online, he explains, or at least builds a model of, how he thinks the brain works and how its parts work and interact among themselves. So it is definitely not an approach oriented towards solving a particular task, such as being a chess champion, at all.

    The problem is that it came to a point where most people thought AI was a failure and that it was never going to get anywhere, so AI researchers decided to take simpler or smaller tasks and focus on them; not to isolate them from all the others, but as a bottom-up approach to reaching the objective of AI.

    Another problem is that, since we do not know what the brain/consciousness/mind is, while the popular understanding of AI is that it is meant to create a functional human being, the idea of what AI means ends up being confusing. It is easy to state arguments that demonstrate AI is worthless (though of course they are based on our false notions of consciousness/brain/mind).

  17. Processing power is important, but it is far less important than the algorithm. A fast computer will come up with the same wrong answer as a slower computer, only faster. I agree with Bill… with every year that goes by it will become more obvious that HAL is never going to happen. No matter how you arrange the ones and zeroes inside a computer, it will never *feel* anything.

  18. We are going to crack it. Surely 7 billion people with exponentially growing technologies can build something smarter than one of them. And when we do surpass ourselves it will be amazing 🙂
    cmon agi wooop!

  19. Andy T:
    I have done a fair amount of reading since I posted comment 2 in this blog’s topic “Cryptic Consciousness”, and I am finding that more and more authors agree with the view that consciousness will arise without further “magic” once the required functions are assembled and turned on. And the list of functions needed is not all that long. Clearly, you need sensory input, perceptual processing, and memory. Most agree that you need some sort of “value” system to determine how memory contents are stored and accessed. This would include both the emotion centres, such as the role played by the amygdala in animals, and several other life-situation evaluators, as well as some sort of “activation” system, such as the neurotransmitter distribution system. These systems make major contributions to the operation of an attention system, which governs the memory/attention/perception loop. That’s about it. Connect it up, turn it on, and voilà!

  20. The argument in comment one:

    “Just as long division on paper is only a distribution of ink,
    so is a quad-precision division algorithm only a spatio-temporal distribution of microchip activity. On paper it can use various alphabets and numeric bases, all representing the same division, just as there are thousands of different ways microchips simulate division.”

    can be applied to the mysterious process of brain-based addition. A mathematical operation happens within the brain, and brains are just a complex web of signals; thus brain-based math is a “spatio-temporal distribution of [neuron] activity.” So if you argue that paper division and computer division are a simulation of math, then you also assert that brains are a simulation of math. If brains cannot perform true math, I cannot see a use for the term true math.

    Math is a symbolic system in which numbers, letters and other shapes represent physical objects or each other. The multiple levels of math symbols allow integrals to be written out as limits of Riemann sums, which are a shorthand for a drawing of a curve broken into rectangles. Math is the process by which physical objects are represented and those representations combine to form more and more complex representations. At each end of that process the mind can make analogies and understand, but in the middle of that process math is only internally defined. It is like when an algebraic equation is being manipulated: there are forms of it which are meaningless to the human mind but which, when arranged differently, make perfect sense. Take the compound interest formula a = Pe^rt, written instead as ln(a/P) = rt. That is not obviously the compound interest formula. To make it understandable we have to solve for a and recognize it as the interest, then track it back to a wordy “the natural logarithm of amount over principal equals the rate times the time”. Even then, “natural logarithm” is a symbolic shorthand for all the concepts of exponentiation and inverse functions. This is a poor example, because the symbolic representation of the phenomenon that is compound interest is rather simple, but it illustrates my point that math is not a phenomenon in its own right but a symbolic aggregation of lower-level physical objects. Computers use a method of symbolic representation in order to perform their algorithms and are thus similar to math.

    What separates human math from computer math is not the math itself but the range of symbolic representations stored in memory. Humans can do math in an algorithmic way; they may use mental shortcuts, but they can do long division in their heads. The difference comes in interpretation. Computers do the math and say: here is the answer. People do the math and instantly see that it has a link to the problem. In math textbooks there are typically straightforward “evaluate this expression” problems and then word problems. At this point there is no difference in how (at the level of cognition) people and computers solve the straightforward ones. But word problems call for feedback, because they posit a scenario that requires abstraction into a straightforward problem and then interpretation back at the word-problem level. Computers do not reinterpret their solutions to mean some physical object, or at least the memory of one (even though I am not seeing a train, I can solve problems about trains). As humans, we have a repertoire of symbols all the way up and down the spectrum of complexity, while computers have very localized memory and their symbolic groups are self-contained. This implies that one group does not interpret mathematical answers with reference to the pictures of trains stored on the hard drive.

    Thus the main area of AI research should be the creation of more diffuse memory, where the stored symbols can be triggered by other symbols in a less ordered way. Thinking is chaotic – like a good conversation, it should bounce around from one train of thought to another. This may be a feature of the chaotic circuits described in comment 14 by Lloyd Rice. But either way, making programs more linear is not the way to create general consciousness.

    An example of human interpretation is the equivalence class defined by: m is congruent to n (mod 7), where n = 1.

    A computer could produce the set {1, 8, 15, …}

    But it will not interpret that same class as the set of all Mondays. A human mind sees that effortlessly, but computer programs do not have that kind of scattered or diffuse process. Thus a huge repertoire of symbols is necessary for understanding. But memory size itself is not the only requirement; access to that memory must be constantly self-referential.
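
As a concrete rendering of the mod-7 example in comment 20 (a hypothetical sketch; the function name is my own), a program can enumerate the class {1, 8, 15, …} readily enough, but nothing in it volunteers the reading ‘the set of all Mondays’:

```python
def residue_class(n: int, modulus: int = 7, count: int = 5) -> list:
    """The first `count` members of the class {n, n + m, n + 2m, ...} for m = modulus."""
    return [n + k * modulus for k in range(count)]

print(residue_class(1))   # [1, 8, 15, 22, 29]; the interpretation as 'Mondays' is ours
```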

  21. Hear, hear, James. You are correct that memory has to be extremely flexible — and everywhere.
    Attention seems to be the significant factor in deciding what gets stored, but in general, almost everything is stored, at least for a while. The “AI” theme of “associative access” barely scratches the surface. I have not yet figured out an even halfway reasonable way to implement this in a contemporary computer.

  22. It’s pure speculation at this point in time, but it is possible that we find out what our consciousness is (electromagnetic field or whatever) and can generate something similar in the future.

    Of course, if we do find out what our consciousness is, we will find ways to improve upon it, increase its calculating, feeling, etc. capacities and be able to create something which surpasses us. I doubt this scenario however, as I see us creating extensions for our own consciousness, hurtling ourselves forwards along the cutting edge of mental evolution…

  23. Computers don’t do any addition at all, of course, so the discussion is moot…

    We interpret the microchip voltage levels and decide which properties of the computer hold ‘information’. Computers don’t actually do anything, as phenomenologically they don’t actually exist, at least not in the sense that most matter does. They have a conceptual existence only.

    Not only that, most modern-day chips don’t even do formal arithmetic any more in any case, and use all kinds of shortcuts – they don’t even pretend to be like us any more!

    Instead of talking about computers, AI people should talk about ‘algorithms’. Once they did, they’d give up on all this hard-AI nonsense.
