Why no AGI?

An interesting piece in Aeon by David Deutsch. There was a shorter version in the Guardian, but it just goes to show how even reasonably intelligent editing can mess up a piece. There were several bits in the Guardian version where I was thinking to myself: ooh, he’s missed the point a bit there, he doesn’t really get that: but on reading the full version I found those very points were ones he actually understood very well. In fact he talks a lot of sense and has some real insights.

Not that everything is perfect. Deutsch quite reasonably says that AGI, artificial general intelligence, machines that think like people, must surely be possible. We could establish that by merely pointing out that if the brain does it, then it seems natural that a machine must be able to do it: but Deutsch invokes the universality of computation, something he says he proved in the 1980s. I can’t claim to understand all this in great detail, but I think what he proved was the universality in principle of quantum computation: but the notion of computation used was avowedly broader than Turing computation. So it’s odd that he goes on to credit Babbage with discovering the idea, as a conjecture, and Turing with fully understanding it. He says of Turing:

He concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written.

That seems too sweeping to me: it’s not unlikely that Turing did believe those things, but they go far beyond his rather cautious published claims, something we were sort of talking about last time.

I’m not sure I fully grasp what people mean when they talk about the universality of computation. It seems that they mean any given physical state of affairs can be adequately reproduced, or at any rate emulated to any required degree of fidelity, by computational processes. This is probably true: what it perhaps overlooks is that for many commonplace entities there is no satisfactory physical description. I’m not talking about esoteric items here: think of a vehicle, or to be Wittgensteinian, a game. Being able to specify things in fine detail, down to the last atom, is simply no use in either case. There’s no set of descriptions of atom placement that defines all possible vehicles (virtually anything can be a vehicle) and certainly none for all possible games, which, given the fogginess of the idea, could easily correspond with any physical state of affairs. These items are defined on a different level of description, in particular one where purposes and meanings exist and are relevant. So unless I’ve misunderstood, the claimed universality is not as universal as we might have thought.

However, Deutsch goes on to suggest, quite rightly I think, that what programmed AIs currently lack is a capacity for creative thought. Endowing them with this, he thinks, will require a philosophical breakthrough. At the moment, he believes, we still tend to assume that new insights come from induction; whereas ever since Hume there has been a problem over induction, and no-one knows how to write an algorithm which can produce genuine and reliable new inductions.

Deutsch, unexpectedly, believes that Popperian epistemology has the solution, but that it has been overlooked. Popper, of course, took the view that scientific method was not about proving a theory but about failing to disprove one: so long as your hypotheses withstood all attempts to prove them false (and so long as they were not cast in cheating ways that made them unfalsifiable) you were entitled to hang on to them.

Maybe this helps to defer the reckoning so far as induction is concerned: it sort of kicks the can down the road indefinitely. The problem, I think, is that the Popperian still has to be able to identify which hypotheses to adopt in the first place; there’s a very large if not infinite choice of possible ones for any given set of circumstances.

I think the answer is recognition: I think recognition is the basic faculty underlying nearly all of human thought. We just recognise that certain inductions, and certain events that might be cases of cause and effect, are sound examples: and our creative thought is very largely powered by recognising aspects of the world we hadn’t spotted before.

The snag is, in my view, that recognition is unformalisable and anomic – lacking in rules. I have a kind of proof of this. In order to apply rules, we have to be able to identify the entities to which the rules should be applied. This identification is a matter of recognising the entities. But recognition cannot itself be based on rules, because that would then require us to identify the entities to which those rules applied – and we’d be caught in a vicious circle.

It seems to follow that if no rules can be given for recognition, no algorithm can be constructed either, and so one of the basic elements of thought is just not susceptible to computation. Whether quantum computation is better at this sort of thing than Turing computation is a question I’m not competent to judge, but I’d be surprised if the idea of rule-free algorithms could be shown to make sense for any conception of computation.

So that might be why AGI has not come along very quickly. Deutsch may be right that we need a philosophical breakthrough, although one has to have doubts about whether the philosophers look likely to supply it: perhaps it might be one of those things where the practicalities come first and then the high theory is gradually constructed after the fact. At any rate Deutsch’s piece is a very interesting one, and I think many of his points are good. Perhaps if there were a book-length version I’d find that I actually agree with him completely…

19 thoughts on “Why no AGI?”

  1. When will academics stop assuming that the working of the brain has something to do with computing? Because the brain appears to be computing? Is that sufficient for this massive academic effort, thus far entirely wasted?

    I have a car. It appears to drink petrol. Do I assume it has a digestive system that breaks down petrol and turns it into vitamins, just because from a million miles away it appears it might do? Can I make wholesale assumptions about the large structural features of biological organs by making a guess about a subset – yes, a subset of features which I choose to isolate? Can I really assume human ‘logical capacity’ is more evident of structure than those features incompatible with the computational dogma – consciousness, emotion and the other absolute, non-relative senses? Can I continue to draw on research dollars to perpetuate this ridiculous pursuit, as big a waste of time as the pursuit of the Michelson-Morley ether?

  2. @john davey:

    Yeah, I’m amazed people take Deutsch seriously when he claims to have proved all reality is computable.

    Beyond Searle, who after all is just a philosopher without experience with computers, I think Jaron Lanier presents good arguments for why these kinds of statements should be met with extreme skepticism.

  3. Let’s look at some examples of interesting phenomena in nature:

    Water/hydrologic cycle: the movement of water on Earth – from the sea up to the clouds, then into the mountains, forests, etc. and back again. We certainly can compute that! We certainly have simulations of these movements. But no matter how big our simulations are, computers don’t get wet…

    Photosynthesis: plants (and some bacteria) absorb energy from light, mix it with some chemicals, and they get energy stored in ATP. We have simulated and calculated/computed this process for some time now, but still computers need electricity and not just light to work…

    We have simulations of the life cycle of stars: matter is being condensed by gravitation, atoms are being crushed, split, new ones are being created, extreme heat is also produced. We simulate the creation of many stars on “a daily basis”, yet our computers aren’t nearly as hot as stars and don’t glow as stars do. Heck! Our simulated stars don’t even produce massive gravitational effects (outside of those simulations) as real stars do!

    Now, we try to simulate intelligence (AI / AGI), mind / cognition, emotional machines. Similarly it doesn’t look like these simulations will be intelligent, emotional, etc.

    It’s like looking at the wall at dusk and making animals appear by waving our hands. These are not animals. They just appear similar to us [in some respect].
    It’s the same thing with simulation: those pixels will appear to be intelligent beings just as those shadows on the wall appear to be animals.

    “In order to apply rules, we have to be able to identify the entities to which the rules should be applied. This identification is a matter of recognising the entities. But recognition cannot itself be based on rules, because that would then require us to identify the entities to which those rules applied – and we’d be caught in a vicious circle.”

    A possible resolution to this might be borrowed, appropriately, from computer science. The regress (or “recursion” in CS terms) isn’t infinite, so long as at each level the recognition process becomes more and more primitive, eventually “bottoming out” in something hardwired in the brain. This could perhaps be achieved in very few iterations, as would be required by any meatspace implementation like the human cortex.
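
    As a toy sketch of what I mean (every name here is invented, and nothing about it is meant to resemble a real cortex), recognition can recurse into ever more primitive recognitions until it hits detectors that are simply hardwired, at which point no further rules are needed and no vicious circle arises:

        # Toy sketch: recognition as recursion that bottoms out in hardwired
        # feature detectors instead of further rules. All names are invented.
        HARDWIRED = {"edge", "corner", "red"}   # stand-ins for innate detectors

        PARTS = {                               # each concept decomposes into parts
            "wheel": ["edge", "corner"],
            "vehicle": ["wheel", "red"],
        }

        def recognise(thing, features):
            """Return True if `thing` can be recognised from the raw `features`."""
            if thing in HARDWIRED:              # base case: no rules, just a detector
                return thing in features
            parts = PARTS.get(thing)
            if parts is None:                   # neither a detector nor a decomposition
                return False
            # recursive case: recognising `thing` is recognising its (more
            # primitive) parts, so the regress shrinks at every level
            return all(recognise(part, features) for part in parts)

        print(recognise("vehicle", {"edge", "corner", "red"}))   # True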

    On a related subject, I disagree a little with Deutsch’s minimization of the significance of prediction. It’s possible that prediction is fundamental to human creativity and may be central to AGI, maybe not as central as Jeff Hawkins made it in ‘On Intelligence’, but a key element of the algorithm.

  5. Yes, I’m guessing that recognition ‘bottoms out’ in some sort of neural network structure, which seems plausible enough.

  6. Without a putative AGI’s capacity to represent, in analogical terms, the volumetric surrounding world in which it exists, it will not have general intelligence.

  7. Just what is ‘general intelligence,’ anyway, and why is everyone so convinced humans possess it? I actually think the situation is opposite the one sketched by ihtio above: what we call ‘general intelligence’ is the shadow, an illusion born of our inability to cognize cognition.

    Here’s a scenario: AI researchers finally give up on AGI, electing instead to pursue domain specific intelligences exclusively. As the tech becomes cheaper, some researchers begin cobbling these special intelligences together, giving their machines more versatility. They build another special solver, a problem recognizer, which then cues which special solver is required when. In the course of troubleshooting the mistakes made by the problem recognizer, they begin to realize that their specific solvers, despite being designed to solve specific problems, are sometimes able to solve the odd problem outside of their domains. Someone has the bright idea of allowing the problem recognizer to continue making mistakes, and adding a ‘novel solution recognizer’ to allow the system, as a whole, to continually add to its repertoire of domains.

    At what point does the resulting assemblage begin to resemble ‘general intelligence’? If you ‘black box’ all the design details I give above, would it be fair to say we wouldn’t be able to distinguish it from AGI?
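
    For what it’s worth, here’s a deliberately crude sketch of the kind of assemblage I have in mind (every name and detail is invented; it’s an illustration, not a design): a problem recognizer routes inputs to special-purpose solvers, and a novel-solution recognizer keeps any routing that happens to work outside a solver’s intended domain, so the repertoire of domains keeps growing.

        # Crude illustration of the assemblage described above (all names invented):
        # a problem recognizer dispatches to special-purpose solvers; a novel-solution
        # recognizer keeps routings that worked, widening the system's repertoire.
        from collections import namedtuple

        Problem = namedtuple("Problem", ["kind", "data"])

        def arithmetic_solver(p):
            return sum(p.data) if p.kind in ("sum", "count") else None

        def sorting_solver(p):
            return sorted(p.data) if p.kind == "sort" else None

        class Assemblage:
            def __init__(self, solvers):
                self.solvers = dict(solvers)   # domain name -> special-purpose solver
                self.routing = {}              # learned: problem kind -> domain name

            def recognise_problem(self, problem):
                # use whatever routing has been learned so far; otherwise guess,
                # which is where the useful "mistakes" come from
                return self.routing.get(problem.kind, next(iter(self.solvers)))

            def solve(self, problem):
                domain = self.recognise_problem(problem)
                solution = self.solvers[domain](problem)
                if solution is not None and problem.kind not in self.routing:
                    # novel-solution recognizer: remember routings that worked,
                    # even when the solver was never designed for this problem
                    self.routing[problem.kind] = domain
                return solution

        box = Assemblage({"arithmetic": arithmetic_solver, "sorting": sorting_solver})
        print(box.solve(Problem("sum", [1, 2, 3])))   # 6: the guessed routing worked and is kept
        print(box.solve(Problem("sum", [4, 5])))      # 9: now routed by the learned entry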

  8. Scott, “general intelligence” is what you’ve pointed at in your own post, that is, the ability to solve problems outside of the domain of expertise, solve novel problems, use analogies, build new methods (tools, math, etc.).
    Everyone is convinced that humans possess it because we specifically defined GI in such a way as to apply the term to humans.

    I would like to address two points you have made in your post: the claim that we are unable to cognize cognition, and the reduction of general intelligence to a multitude of specific intelligences.

    The idea that we are unable to cognize cognition.

    You frequently mention your idea of the inability to cognize our own minds / intelligence. From what I gather, the three main lines of “attack” for this position are:
    1) You point to, for example, our biases, fallible intuitions, “blind spots” of the mind (for people who are not familiar with the topic, see works by Daniel Kahneman, e.g. Thinking, Fast and Slow, and any materials on behavioral economics).
    2) You also say that the human brain is immensely complex and hence a difficult object of study. That is most certainly true.
    3) Our evolutionary origins and history suggest that humans are best suited to solving completely different classes of problems: social (we have always lived in groups and tribes), survival (hunting, running away from beasts), etc.
    However the same arguments should be applied to other sciences as well. Take physics for example. Are we to say that our unreliable intuitions (e.g. riding on a beam of light), the difficulty of the subject (subatomic particles, matter being equal to energy, quantum foam, superstrings, what else?!), and our evolutionary unfitness (when did our species ever need to bang atoms together, to calculate complex numbers, to introduce types (cardinalities) of infinity, etc.) make it impossible for us to gain any understanding of the physical world (in the sense of the basic “building blocks”)? I don’t think so.

    This position is in stark contrast with the known facts. If we try to argue along the lines of “we can’t comprehend our intelligence because we are using the tools we are trying to study”, then the same argumentation holds: we are animals, and we constantly improve our understanding of animals; we are physical systems, and we constantly improve our understanding of physical systems; we are intelligent, and we strive to understand intelligence (or mind, or cognition). And we use physical stuff to understand physical stuff (the above mentioned smashing of atoms), we use animals to understand animals, and we use intelligence to understand intelligence. In each case our understanding won’t encompass all the richness of the phenomena, and that’s ok. All models have limits. Our models of atoms, quantum wickedness, ecology, climate, brains, intelligence all have limits, but they allow us to understand. There is no compelling reason to think differently with regard to mind / intelligence.

    Thus, I think we are justified in believing we are able to understand (or cognize) cognition, mind and intelligence.

    The reduction of general intelligence to a multitude of specific intelligences.

    You say that we probably couldn’t distinguish between a “real” GI and a black box containing a plethora of intelligences, each one good for a particular domain. That is probably true. We probably wouldn’t be able to distinguish glucose from an apple and glucose from a black box performing some chemical reactions, but should we say that both of them perform photosynthesis? Probably not.
    The problem may not be only with labeling. Both birds and planes “fly” (that is the word we use in English, and probably many other languages also use a single word in both situations). As with the photosynthesis example, we are interested not only in what effects the system / black box produces, but also in how it produces them.

    Similarly we can build a larger black box with a simple function: for each possible input we write down what the machine has to do. Would we call it intelligent? I wouldn’t.
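
    Concretely, I mean nothing more sophisticated than the following, scaled up to cover every possible input (the entries are, of course, made up):

        # Minimal sketch of the "write down what to do for every input" machine.
        LOOKUP = {
            "2 + 2": "4",
            "capital of France?": "Paris",
            "how are you?": "fine, thanks",
        }

        def black_box(user_input):
            # no analysis, no learning: just retrieve the pre-written response
            return LOOKUP.get(user_input, "no entry for this input")

        print(black_box("2 + 2"))    # "4"
        print(black_box("2 + 3"))    # "no entry for this input"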

    The thing with human intelligence is that it does very well in domains it didn’t evolve for. Such domains include government, mathematics, science, medicine, art, music. For there to be a single module which would identify a problem domain and recruit an intelligence module specific to it, there would first have to be all these domain-specific modules / intelligences. I have real doubts that evolution would lead to the rise of such things. It would certainly be easier to evolve an advanced cortex (as dolphins, humans, other homo, etc. have/had) that is fit to reconfigure itself, adapt, learn and really be intelligent: be able to understand and possibly solve many previously unseen problems.

    “However the same arguments should be applied to other sciences as well. Take physics for example. Are we to say that our unreliable intuitions (e.g. riding on a beam of light), the difficulty of the subject (subatomic particles, matter being equal to energy, quantum foam, superstrings, what else?!), and our evolutionary unfitness (when did our species ever need to bang atoms together, to calculate complex numbers, to introduce types (cardinalities) of infinity, etc.) make it impossible for us to gain any understanding of the physical world (in the sense of the basic “building blocks”)? I don’t think so.”

    I liked your summary of BBT! And I’m largely in agreement with the above with a single, quite important proviso: the limit on discursive cognition posed by the bounded nature of human biological cognition is *constitutive* of science, which is why we had to discover its technical and social conditions. Science is pretty clearly a social prosthesis, a way to get around our myriad cognitive shortcomings.

    “You say that we probably couldn’t distinguish between a “real” GI and a black box containing a plethora of intelligences, each one good for a particular domain. That is probably true. We probably wouldn’t be able to distinguish glucose from an apple and glucose from a black box performing some chemical reactions, but should we say that both of them perform photosynthesis? Probably not.”

    So long as you’re not hawking anything ‘special’ regarding human intelligence, I really don’t care what we decide to call it.

    “The thing with human intelligence is that it does very well in domains it didn’t evolve for. Such domains include government, mathematics, science, medicine, art, music. For there to be a single module which would identify a problem domain and recruit an intelligence module specific to it, there would first have to be all these domain-specific modules / intelligences. I have real doubts that evolution would lead to the rise of such things. It would certainly be easier to evolve an advanced cortex (as dolphins, humans, other homo, etc. have/had) that is fit to reconfigure itself, adapt, learn and really be intelligent: be able to understand and possibly solve many previously unseen problems.”

    This is the part I don’t get, since I think it pretty clearly turns on a kind of ‘only-game-in-town’ intuition. Where you see breathtaking achievement (vis a vis animal intelligence, I’m guessing), I simply see a very small region of possible problem-solving space. Humans are plastic information processors, possessing the capacity to exapt pre-existing cognitive tools to solve new problems via training and environmental interaction. The capacity for cognitive exaptation obviously paid dividends, because it clearly seems to be something we’ve evolved for. Though this certainly complicates AI research, I don’t see how it could be a deal-breaker.

    And besides, what’s the alternative? That at some point in our history we somehow began channelling something not quite natural, that we began to do *more* than just process information? This is a whopper claim, requiring whopper evidence, which unfortunately deliberative reflection does not provide.

  10. Scott, I’m glad we agree on the nature of science. Maybe science is so great because it is something more than a single human. That is, cells make organs, which make humans, which in turn make culture, and science is a part of our culture.

    Indeed, I don’t find human intelligence especially “special” – it fits nicely on the spectrum of animal intelligence.

    “This is the part I don’t get, since I think it pretty clearly turns on a kind of ‘only-game-in-town’ intuition. Where you see breathtaking achievement (vis a vis animal intelligence, I’m guessing), I simply see a very small region of possible problem-solving space. Humans are plastic information processors, possessing the capacity to exapt pre-existing cognitive tools to solve new problems via training and environmental interaction. The capacity for cognitive exaptation obviously paid dividends, because it clearly seems to be something we’ve evolved for. Though this certainly complicates AI research, I don’t see how it could be a deal-breaker.”
    To see a small region of possible problem-solving space you should probably be able to specify the whole problem-solving space in the first place. How else can you logically affirm that the space of human (or any other) intelligence is only a small region of possibilities?
    I don’t see humans as information processors (my post on the topic: https://observingideas.wordpress.com/2014/09/08/mind-is-not-an-information-processor/), just as I don’t see climate or a forest as information processors.
    I also don’t think that the human mind amounts to exaptations of pre-existing cognitive tools. That would require some strong argumentation. The mere cost of coming up with and maintaining such cognitive tools, when they weren’t needed, is enormous (think of ascidians). The environment which would require the evolution of such specific cognitive tools (which are necessary for mathematics for example) would surely have to be “interesting”. What I gather is that the cortex has evolved to be a flexible addition that allowed our ancestors to adapt to various and changing environments (almost all continents!), and this is why we are so intelligent (compared to cockroaches): because we have cortices that come up with new cognitive tools (or that it is possible for the cortex to learn a new tool from someone else) and not merely use existing ones.
    “And besides, what’s the alternative? That at some point in our history we somehow began channelling something not quite natural, that we began to do *more* than just process information? This is a whopper claim, requiring whopper evidence, which unfortunately deliberative reflection does not provide.”
    The alternative is the one I have just sketched above: a flexible neocortex. When it comes to A(G)I, we’ll have to wait and see 🙂 I hope you don’t expect me to provide a solution to this problem in this discussion :).
    Something *more* than processing information? I don’t know what you’re getting at, as I can’t see any suggestion of anything supernatural in my post.

    Scott, what is this “information” that you so frequently bring to the discussion?

  11. scott

    i agree that “general intelligence” is meaningless – based upon the religious association that computationalists make between homo sapiens and angels – namely that human minds are pure mathematics, confounding the efforts of generations of darwinists to persuade us that homo sapiens is in fact just an animal after all.

    Human beings have “cognitive scope and cognitive limits” (to quote chomsky) like any other animal. It is that scope and limitation that gives humans the capabilities we have. We understand how to deal with space and time without actually knowing what they are or where they come from. We know consciousness but can’t describe it very well, in the same way we can’t describe space or time.

    If it weren’t for our decidedly ungeneral and specific relationship with the world we wouldn’t have a place in it. Imagine how it feels to want to eat, to defecate, have sex or cry. There is nothing general about any of those feelings, and nothing “general” about our understanding of space – which is realised in consciousness in a specific visual way – or time, which is the same.

  12. Ihtio: “Scott, what is this “information” that you so frequently bring to the discussion?”

    Differences making systematic differences, deterministic or stochastic. For me, climates are information processors, just not ones structured to converge upon target states.

    “To see a small region of possible problem-solving space you should probably be able to specify the whole problem-solving space in the first place. How else can you logically affirm that the space of human (or any other) intelligence is only a small region of possibilities?”

    This is a good question. I’m just going on the assumption of mediocrity.

    “I also don’t think that the human mind amounts to exaptations of pre-existing cognitive tools. That would require some strong argumentation. The mere cost of coming up with and maintaining such cognitive tools, when they weren’t needed, is enormous (think of ascidians). The environment which would require the evolution of such specific cognitive tools (which are necessary for mathematics for example) would surely have to be “interesting”. What I gather is that the cortex has evolved to be a flexible addition that allowed our ancestors to adapt to various and changing environments (almost all continents!), and this is why we are so intelligent (compared to cockroaches): because we have cortices that come up with new cognitive tools (or that it is possible for the cortex to learn a new tool from someone else) and not merely use existing ones.”

    And yet repurposing is the rule in biology, is it not? My hunch is that this is what consciousness is primarily for, the stabilization and broadcasting of information for the purposes of ‘bricolage,’ the cobbling together of new ways to solve problems. Ex nihilo cognition cuts against research into things like ‘eureka moments,’ for instance, which shows that even so-called ‘revolutions’ are quite incremental. A lotta little fortuitous accidents piled into some fecund articulation of bios or behaviour seems to be nature’s way.

  13. john: “i agree that “general intelligence” is meaningless – based upon the religious association that computationalists make between homo sapiens and angels – namely that human minds are pure mathematics, confounding the efforts of generations of darwinists to persuade us that homo sapiens is in fact just an animal after all.”

    Well, it does have the nice consequence of making them easy to argue against! You just need to back them into their irreducible, symbolic corner, then point out that this makes them mysterians in everything but name.

    Your point about the quiddity of experience is well-taken. Everywhere you rummage around in human experience you find contingencies, fragmentary systematicities, kluge upon kluge and nary a mathesis universalis to be seen! But this is one reason why I keep pounding the metacognitive neglect drum, because the intellectualist yen to find some overarching, formal systematicity explaining things is itself something requiring explanation. It actually explains what makes this superstition so attractive to philosophical reflection.

  14. “and no-one knows how to write an algorithm which can produce genuine and reliable new inductions.”

    We have reliable inductions? Penicillin was discovered by mistake, wasn’t it?

    Surely it’s clear we have more of a shotgun approach – simply doing a bunch of activities, then sticking with the ones that seem to give a result that suits (our biology). Science is a more refined version of this (with various avenues of research that have dead-ended).

  15. Scott, maybe you could describe how processing of differences making systematic differences could occur, for example in computers. I wouldn’t want to hijack the whole discussion thread for this topic.

    One could possibly analyze climate, forests, brains in terms of processing of information or in terms of processing of steam or in terms of processing of heat/energy. The thing is that these general approaches don’t give the details we are after. We could easily describe the whole evolutionary history in terms of changes in energy redistribution across the globe, but that wouldn’t satisfy us. You could try to build computational / information processing accounts of the workings of the brain, but from what we’ve seen so far the limits are unbearable, and that – as has always been the case in science – necessitates new perspectives to be developed and explored. One such new perspective is dynamicism (see: Michael Spivey, The Continuity of Mind, 2007). There are many others, and many others still will come in the near future.
    Stating that a forest, climate, or brain IS an information processor is quite a strong metaphysical statement! The point I wish to make clear is a very weak one: the model / approach is what matters, and “information processing” doesn’t seem to capture interesting aspects of such phenomena as ant colonies, forests, climate, brains / minds / intelligence.

  16. ihtio – “Stating that a forest, climate, or brain IS an information processor is quite a strong metaphysical statement! The point I wish to make clear is a very weak one: the model / approach is what matters, and “information processing” doesn’t seem to capture interesting aspects of such phenomena as ant colonies, forests, climate, brains / minds / intelligence.”

    Well I’m allergic to the notion of taking metaphors metaphysically! All you need do is take a look at, say, the philosophy of quantum mechanics (or contemporary cosmology), to see that we step off an epistemological cliff as soon as we begin pressing past the data (the kind of differences making differences that interest scientists).

    So for me the point is to go in with as few metaphysical commitments as possible. Everyone presupposes systematic differences making systematic differences. What you’re saying is that this isn’t enough–and I agree. Your question was what I meant by information, and there’s no reason why we should think an account of information should do anything more than *contribute* to an understanding of any phenomena. In some cases the complexities overwhelm us, and there seems to be no real way to generalize over the machinery at issue in causal terms, so we have to find other ways to get a handle on the systematicities involved, like DST. This is what we should expect. There’s no real incompatibility here between talking about systems as natural systems, differences making systematic differences, and the kinds of tools we invent to hack those systems.

    The problem is meaning. When most philosophers talk about information, they mean something *semantic,* that the only information is somehow ‘information about.’ But of course, every time we try to understand aboutness in terms of differences making differences, as with semantic externalism, everything unravels.

  17. Hey guys, first post on here, forgive me if posting this link breaks any guidelines.
    This is a relevant clip that may interest you

  18. Scott, who would have thought that we would find ourselves in such strong agreement?

    The thing that I want to make clear is only that I think that thinking about intelligence / mind in terms of information processing is too limited; analogously, we could describe the brain in terms of its energy consumption, heat dissipation, etc. These perspectives omit the things we are most interested in. Of course no one knows what the better, established paradigm will be sometime in the future, so it is important to try out different stuff and explore. I’m sure you agree, Scott.
    Maybe there will be another opportunity to discuss the idea of a “difference that makes a difference” and its place in aiding our understanding of intelligence / mind / cognition / brain.

    Simon, I watched the video. I fail to see how it applies to the original post or the discussion. The video is mainly about machines being used instead of humans for labor, and about specialized intelligences good for specific domains. There was no hint at all of the question of artificial general intelligence.
    If you think that there is something of relevance, please state it explicitly.

  19. I think that the key to AI is an appropriate internal representation, something that typical current efforts ignore completely.
