It’s not Intelligence

The robots are (still) coming. Thanks to Jesus Olmo for this TED video of Sam Harris presenting what we could loosely say is a more sensible version of some Singularity arguments. He doesn’t require Moore’s Law to go on working, and he doesn’t need us to accept the idea of an exponential acceleration in AI self-development. He just thinks AI is bound to go on getting better; if it goes on getting better, at some stage it overtakes us; and eventually perhaps it gets to the point where we figure in its mighty projects about the way ants on some real estate feature in ours.

Getting better, overtaking us; better at what? One weakness of Harris’ case is that he talks just about intelligence, as though that single quality were an unproblematic universal yardstick for both AI and human achievement. Really though, I think we’re talking about three quite radically different things.

First, there’s computation; the capacity, roughly speaking, to move numbers around according to rules. There can be no doubt that computers keep getting faster at doing this; the question is whether it matters. One of Harris’ arguments is that computers go millions of times faster than the brain so that a thinking AI will have the equivalent of thousands of years of thinking time while the humans are still getting comfy in their chairs. No-one who has used a word processor and a spreadsheet for the last twenty years will find this at all plausible: the machines we’re using now are so much more powerful than the ones we started with that the comparison defeats metaphor, but we still sit around waiting for them to finish. OK, it’s true that for many tasks that are computationally straightforward – balancing an inherently unstable plane with minute control adjustments, perhaps – computers are so fast they can do things far beyond our range. But to assume that thinking about problems in a human sort of way is a task that scales with speed of computation just begs the question. How fast are neurons? We don’t really understand them well enough to say. It’s quite possible they are in some sense fast enough to get close to a natural optimum. Maybe we should make a robot that runs a million times faster than a cheetah first and then come back to the brain.

The second quality we’re dealing with is inventiveness; whatever capacity it is that allows us to keep on designing better machines. I doubt this is really a single capacity; in some ways I’m not sure it’s a capacity at all. For one thing, to devise the next great idea you have to be on the right page. Darwin and Wallace both came up with natural selection because both had been exposed to theories of evolution, both had studied the profusion of species in tropical environments, and both had read Malthus. You cannot devise a brilliant new chip design if you have no idea how the old chips worked. Second, the technology has to be available. Hero of Alexandria could design a steam engine, but without the metallurgy to make strong boilers, he couldn’t have gone anywhere with the idea. The basic concept of television had been around ever since film and the telegraph came together in someone’s mind, but it took a series of distinct advances in technology to make it feasible. In short, there is a certain order in these things; you do need a certain quality of originality, but again it’s plausible that humans already have enough for something like maximum progress, given the right conditions. Of course so far as AI is concerned, there are few signs of any genuinely original thought being achieved to date, and every possibility that mere computation is not enough.

Third is the quality of agency. If AIs are going to take over, they need desires, plans, and intentions. My perception is that we’re still at zero on this; we have no idea how it works and existing AIs do nothing better than an imitation of agency (often still a poor one). Even supposing eventual success, this is not a field in which AI can overtake us; you either are or are not an agent; there’s no such thing as hyper-agency or being a million times more responsible for your actions.

So the progress of AI with computationally tractable tasks gives no particular reason to think humans are being overtaken generally, or are ever likely to be in certain important respects. But that’s only part of the argument. A point that may be more important is simply that the three capacities are detachable. So there is no reason to think that an AI with agency automatically has blistering computational speed, or original imagination beyond human capacity. If those things can be achieved by slave machines that lack agency, then they are just as readily available to human beings as to the malevolent AIs, so the rebel bots have no natural advantage over any of us.

I might be biased over this because I’ve been impatient with the corny ‘robots take over’ plot line since I was an Asimov-loving teenager. I think in some minds (not Harris’s) these concerns are literal proxies for a deeper and more metaphorical worry that admiring machines might lead us to think of ourselves as mechanical in ways that affect our treatment of human beings. So the robots might sort of take over our thinking even if they don’t literally march around zapping us with ray guns.

Concerns like this are not altogether unjustified, but they rest on the idea that our personhood and agency will eventually be reduced to computation. Perhaps when we eventually come to understand them better, that understanding will actually tell us something quite different?

50 thoughts on “It’s not Intelligence”

  1. Nevertheless, keep your eye on this issue, Peter, and not because of ‘superintelligence,’ but because of the myriad *other* ways AI is *already* permeating our world. No matter what you think agency consists in, the fact remains it is easily faked in controlled contexts. As an incredibly heuristic form of cognition, its reliability is utterly dependent on ancestral backgrounds, cognitive ecologies that AI is beginning to overthrow, and threatens to sweep away. The Japanese ‘herbivore men’ could very well prove to be the canary in the social cognition coal mine.

  2. “If AIs are going to take over, they need desires, plans, and intentions.”
    Thank you. This is a point that often gets lost. We have a tendency to project our worst fears onto AI, but they won’t be motivated to take over the world unless we put that self-actualizing motivation there. If we don’t put it there, if we keep their agenda as a subset of our own agendas, the probability that we’ll have much to worry about is low.

    Many people insist that what we *actually* have to worry about is unintended consequences. But this is a danger we already face, and so far we’ve been able to handle it. As intelligence increases in automated systems, the probability that they’ll misunderstand what we want them to do will actually decrease. Will they sometimes do the wrong thing out of confusion? Sure, but that’s always going to be true. Let’s not make one super AI and put it in charge, at least without balancing it with other super AIs.

    I do think AI will eventually be able to surpass us, but while it might have profound effects on future societies, humans being cast aside doesn’t seem like it will be one of them. A car can go faster than a person can run, a plane can fly while we can’t, and various types of machinery can move things that human laborers have no chance of moving. Computational systems can already do things that no human could with paper and pencil in a lifetime. The future might be all of us hanging around playing games while the robots do all the drudgery. I’d be okay with that.

  3. You could probably make things even more practical by saying “be careful what you automate”. Mere automation is “agency” of sorts, and it has already proven to be hazardous in our own age. Let’s face it, humans are lazy. If we’re doomed to AI tyranny it will probably gradually creep upon us via a combination of our dream to have machines do our work for us and the incremental sophistication of those machines. And the dangers have already been with us for quite a while. Automated systems to control power plant cooling; automated diagnostics in medicine that indicate false positives or negatives. During the first moon landing, the LEM’s computer was taking the craft into a boulder field and Neil Armstrong had to switch to manual to avert disaster. (Not that I think Armstrong and Aldrin would rather have been having a scotch and cigar party on the way down. I think the issue here was “let the computer do it better”)

    If AI eventually spells disaster, the proximate cause will be our own desire for leisure.

  4. SelfAwarePatterns: “Many people insist that what we *actually* have to worry about is unintended consequences. But this is a danger we already face, and so far we’ve been able to handle it. As intelligence increases in automated systems, the probability that they’ll misunderstand what we want them to do will actually decrease. Will they sometimes do the wrong thing out of confusion? Sure, but that’s always going to be true. Let’s not make one super AI and put it in charge, at least without balancing it with other super AIs.”

    There’s no money in making God. I appreciate the narrative appeal of general superintelligence, but I really think the way to look at the issue is via the lens of AI as it is presently being used today, and will be increasingly used in the future. To make money. What we are presently witnessing is an explosion in special artificial intelligences, and it seems safe to suppose that the future will see these artificial specialists increase their specialized problem-solving power, and proliferate through an endless number of economic niches. As it stands, we have Ashley Madison spoofing male members with ‘fem-bots,’ we have Siri bungling texts, but doing quite well with alarms: it’s pretty basic stuff. But as Sherry Turkle’s and Clifford Nass’s research shows, cuing social cognitive systems is very easy to do. We don’t need any of the grand breakthroughs that Peter suggests, just the right – to use the Magic Leap term – biomimetics, reliable ways to cue sociocognitive modes. And from a commercial standpoint, the most important problem to solve is wallets, and the only way to our wallets is through us.

    Neil Lawrence provides a wonderful analogical lens through which to understand the problem–which I think is far, *far* more pressing than any worries about superintelligence. Check out: http://inverseprobability.com/2015/12/04/what-kind-of-ai

  5. I’ve never understood why people take Harris seriously. An interesting polemicist to some maybe, but an accomplished intellectual with fresh insights he ain’t. Very humdrum

  6. A few posts ago in “A Case for Human Thinking” we noted that even now neural networks are becoming more inscrutable and being given more responsibility. Because they are inscrutable and ‘brittle’ they are prone to the occasional catastrophic failure. Neural nets are given tasks by their owners, so they have agency, even at one remove. Regarding computation, we don’t really know enough about how human brains do what they do to say which tasks are computational and which are not. If we think there are capacities that human brains possess that are not reducible to computation how have we determined what those capacities are and how have we concluded they are not reducible to computation? Regarding the ability of machines to design their successors:

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.9691&rep=rep1&type=pdf

    It’s early yet, but the idea of machines designing machines isn’t far-fetched. I don’t think we need something as clever as H.A.L. 9000 or Skynet to get humanity into a whole heap of trouble.
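
    Purely as a toy illustration of that last point (my own sketch, not anything from the linked paper, with made-up scoring and parameters), “machines designing machines” can be as mundane as an evolutionary loop in which a program proposes, scores, and mutates its own candidate successors:

        import random

        def fitness(design):
            # Stand-in scoring function; a real system would simulate or test the design.
            return -sum((x - 3.14) ** 2 for x in design)

        def evolve(pop_size=20, genes=5, generations=100):
            # Start from random candidate "designs" (here just vectors of numbers).
            population = [[random.uniform(-10, 10) for _ in range(genes)]
                          for _ in range(pop_size)]
            for _ in range(generations):
                # Keep the better half, then let it produce mutated successors.
                population.sort(key=fitness, reverse=True)
                parents = population[:pop_size // 2]
                children = [[g + random.gauss(0, 0.5) for g in random.choice(parents)]
                            for _ in range(pop_size - len(parents))]
                population = parents + children
            return max(population, key=fitness)

        print(evolve())

    Nothing clever is going on there, which is rather the point: no H.A.L.-level insight is needed for a system to iterate on its own designs.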

  7. “If AIs are going to take over, they need desires.” Yes. Unfortunately, if AIs are going to do the useful work we want, they also need desires, or close enough as to make little observable difference. An AI librarian needs to seek valuable knowledge and avoid trivia. An AI driver needs to value the lives of pedestrians, drivers of other vehicles, etc., or act as if it does. You can get some mileage out of a legalistic rule-book approach to driving or library research, but an intelligent valuation is much superior.

    Today’s computers lack convincing “beliefs” as much as they lack substantial “desires”. I don’t think it’s a coincidence; these realms are not as separate as many philosophers suppose. AI researchers have terrific incentive to tackle both problems (which I suspect is really one problem) and I don’t see what the insuperable obstacle is supposed to be.

  8. Even cheese can have agency, so it can’t be much harder for AI!
    http://www.sciencedirect.com/science/article/pii/S0160738310001660

    As to inventiveness, as I understand it the architecture of the human brain is not that different from that of other higher mammals – there is just a bit more of it. By analogy, even if it is not as simple as connecting up enough current computers together, it might be as simple as putting together enough neural networks (Turing seemed to suggest that you needed to have enough different machines to get creativity in the narrow field of theorem proving). If the human brain does not have access to infinite memory, then I guess it might be a computationally tractable task to simulate it successfully, according to Braverman et al (2015):
    http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.098701

  9. > “If AIs are going to take over, they need desires, plans, and intentions.”

    I disagree with this view, or at least with the view that there is some difficulty in granting desires to AIs. Having a goal is equivalent to having a desire for the purposes of this argument, and goal-seeking behaviour is clearly not a problem for AIs. This is all an AI needs to be an agent in every relevant sense.

    Intelligence can be understood as an attribute that potentially grants an agent a phenomenal amount of power in achieving its goals — that is how humans have come to dominate the earth despite our physical weaknesses and vulnerabilities. Since we give the AI its goals, on the face of it this is no major problem. But the danger is that it also grants an agent a phenomenal amount of power in achieving instrumental sub-goals it chooses for itself, and those sub-goals may not be aligned with our interests at all.

    The somewhat facetious example Nick Bostrom gives is that we task an AI with building as many paper clips as quickly and efficiently as it can. So it first enslaves the human race and then later replaces them with automated factories and ultimately converts the whole earth into paperclips, but perhaps not before sending out Von Neumann probes to seed the galaxy with paperclip-making factories, until eventually the mass of the Milky Way is mostly accounted for by paperclips.

    So intelligence is dangerous because it may allow AIs to think outside the box in ways we have not anticipated, and to leverage their intellectual abilities to achieve subgoals we deem to be disastrous.

    That said, I’m not really all that worried. It seems likely to me that we will evolve our ability to control our AIs at approximately the same pace as they grow in ability, so that by the time we can build AIs more intelligent than ourselves we will hopefully have some idea what kinds of precautions and safety measures to put in place. Just having the AI have a goal of explaining its plan of action and seeking approval in advance would be a start, if not a foolproof one (the AI might understand that the best way to achieve the goal of seeking approval for its plans is through deception or obfuscation).
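
    To make the goal-seeking point above concrete, here is a deliberately crude toy of my own (made-up actions and numbers, nothing like a real agent architecture): an optimiser that is only ever told to maximise one count will take whatever available action raises it, including a “grab more resources” option its designer never explicitly asked for.

        # Toy "maximise the paperclip count" agent: greedy climbing on a single score.
        # Goal-pursuit here needs no inner life, just a number to push upward, and
        # instrumental behaviour (grabbing resources) falls out of that pursuit.

        state = {"paperclips": 0, "resources": 3}

        def available_actions(s):
            acts = []
            if s["resources"] > 0:
                acts.append(("make_clip", {"paperclips": s["paperclips"] + 1,
                                           "resources": s["resources"] - 1}))
            # An option the designer never asked for, but never ruled out either:
            acts.append(("grab_resources", {"paperclips": s["paperclips"],
                                            "resources": s["resources"] + 5}))
            return acts

        def objective(s):
            return s["paperclips"]

        for step in range(10):
            # Greedy: pick whichever available action most increases the objective now.
            name, state = max(available_actions(state), key=lambda a: objective(a[1]))
            print(step, name, state)

    Whether you call the score a “desire” or not, the behaviour is the same either way.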

  10. Myself I think real intentions require the ability to think about the future. Computational processes don’t do ‘about’ so they can only imitate agency.

  11. Peter, I wonder what you would see as the distinctions between thinking about the future and holding data isomorphic to and predictive of some aspect of the external environment as a guide to action?

    I’m thinking about self driving cars, which build models of the environment, including predictive models of what other cars, pedestrians, and bike riders might do, as a guide to the car’s actions. Their predictive models are nowhere near the sophistication of a human brain’s, but it seems, as they become increasingly successful, like they may exceed the sophistication of the models of less intelligent creatures (ants, bees, lampreys, etc).
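
    As a caricature of what I mean (my own toy, with invented numbers, nothing like a real driving stack): predict where the pedestrian will be, then pick the control action whose predicted outcome scores best.

        # Toy predict-then-act loop: a 1-D "car" approaching a crossing pedestrian.
        # Purely illustrative; real systems use learned models and far richer state.

        def predict_pedestrian(ped_pos, ped_vel, seconds):
            # Constant-velocity guess at where the pedestrian will be.
            return ped_pos + ped_vel * seconds

        def choose_action(car_pos, car_vel, ped_pos, ped_vel):
            best_action, best_score = None, float("-inf")
            for accel in (-3.0, 0.0, 1.0):       # brake, coast, speed up (m/s^2)
                horizon = 2.0                     # look two seconds ahead
                future_car = car_pos + car_vel * horizon + 0.5 * accel * horizon ** 2
                future_ped = predict_pedestrian(ped_pos, ped_vel, horizon)
                gap = abs(future_car - future_ped)
                # Value a safety margin first, forward progress second.
                score = (0.0 if gap > 5.0 else -1000.0) + future_car
                if score > best_score:
                    best_action, best_score = accel, score
            return best_action

        print(choose_action(car_pos=0.0, car_vel=10.0, ped_pos=22.0, ped_vel=0.0))   # brakes

    Crude as it is, it is a model of the world guiding action.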

  12. Hi Peter,

    > Myself I think real intentions require the ability to think about the future. Computational processes don’t do ‘about’ so they can only imitate agency.

    I disagree with that but let’s assume you’re right.

    I don’t see how this matters as the imitation of agency is all AIs need to be a threat. It doesn’t matter if they don’t have genuine intentionality as long as they behave as if they do.

  13. SAP – I’d say the machine doesn’t know that its data is isomorphic to and predictive of the world, or a model of anything. We read it that way – indeed we go to some lengths to ensure it can be interpreted that way, otherwise the machine wouldn’t work – but it’s only in our minds that it means anything. A self-driving car is never really going to come up with new plans.

    Disagreeable – well, yes a machine can be a threat without even pretending to intentionality after all. But I think mere imitation is a lot less scary. Ants and bees have got incredibly sophisticated imitations of intentional behaviour written into them by evolution, but an ant’s nest would not fare well against a human city even if we could somehow even out the scale difference.

  14. Hi Peter,

    > But I think mere imitation is a lot less scary.

    I took you to be referring obliquely to the arguments made by the likes of Searle that an algorithmic machine can never have true intentionality, only the facsimile of intentionality. Even if we buy these arguments (which I don’t), they don’t place any constraints on how convincing this facsimile might be, and so they do not constrain the behaviour of the machine. Searle for instance freely admits that it might be possible to build a machine just as capable as a human (or more), though he claims that it wouldn’t have true intentionality. Per The Chinese Room, it would functionally and behaviouristically “understand” just as well as a human, but it wouldn’t be really understanding anything.

    If you’re saying this lack of true intentionality is any kind of reason not to fear the machines, then you must not be appealing to these kinds of arguments. So it seems you have some independent reasons for thinking that there are such constraints?

  15. To Peter (10)
    As I wondered in my comment (6), how do you in particular and scientists (or philosophers) in general know which human brain activities are (or can be reduced to) computational activities and which are not? If one accepts that human mental activities are fundamentally neurological then any claim that some neurological activity is or is not computational is a claim to knowledge about how human brains operate that I’m not sure the available scientific data warrant.

  16. “I’d say the machine doesn’t know that its data is isomorphic to and predictive of the world, or a model of anything.”

    Depending on exactly what we mean by “know” here, I’d agree, for current machines. Knowing that we know something seems to require a certain level of introspective sophistication. But I can’t see that bees, ants, and lampreys know it either, or any other arthropod or pre-mammalian vertebrate. I actually doubt most mammals and birds do either.

    If we only compare machines to humans, the gulf between machine and biological intelligence can seem immense and unbridgeable. If we compare machines to less intelligent animals, the gulf, at least in terms of intelligence, is much narrower. (Life being composed of molecular machinery, the gap in physical capabilities remains far wider, for now; see this week’s Nobel prize for chemistry.)

    On the other hand, the level we have to drop to in biological species to have a possibly meaningful functional comparison also reveals just how far computational systems still are from having anything like human level cognition, much less the superhuman variety. These species typically have only a few million neurons or less, as compared to the 86 billion in a human brain.

  17. “It wouldn’t be really understanding anything”
    “Depending on exactly what we mean by ‘know'”

    Like “belief”, these words are from a vocabulary that isn’t well suited to discussing the capabilities of entities assumed not to have social relationships. Unless we can reify such relationships for humans or incorporate some functional equivalent into machines, comparing the abilities of humans and machines using those words seems inappropriate.

  18. I perhaps should be clear that while I question the usage in Searle’s quote, my comment was intended to be merely an elaboration on SAP’s quote, with which I agree.

  19. Scott #4,
    I agree. I’m sure someone will eventually build what we’d call a general intelligence in a lab, just to prove it can be done. But “general” in this sense seems to mean an intelligence with drives and motivations similar to us, drives and emotions we only have because we are the result of billions of years of evolution.

    It doesn’t seem like there would be much of a market for that kind of intelligence. I don’t want my self driving car to have its own agenda and ambitions. The only ambition I want it to have is getting me from point A to point B on a reliable and consistent basis. Even in cases where we want the system to act like it’s a person in its own right, we really want it to only be an act that never continues into actually having its own agenda, essentially turning it into a kind of slave.

    Fortunately, there’s no reason to build such a system for us to get the overwhelming majority of benefits from AI.

  20. Charles,
    In general I agree. But I think part of the problem is that words like “belief” and “know” aren’t usually unpacked, but when we have these kinds of discussions, it seems like they have to be.

  21. SelfAware (#20),
    We might not want an AI to “have its own agenda” in the most full blooded sense, but corporations do want workers that are flexible, innovative, etc. Suppose you want an AI to do engineering design, for example. Then you don’t just want it to improve your existing product line. You want it to think outside that box.

    The trouble is, something that can climb outside one box, may well be able to climb outside another that you hold more sacred. Of course, corporations that fear unpredictable AI will have the option to refrain – and be out-competed by those that are less risk-averse.

  22. Paul #22,
    I agree with everything you said. But it seems like the permutations of everything that could conceivably go wrong with unpredictable AIs are vast, and the permutations of those wrong things being dangerous are, relatively speaking, an infinitesimal slice of those permutations. Most wrong things are simply going to be nonfunctional.

    But I do definitely think we should avoid creating one super AI, putting it in charge without safeguards, and hoping for the best. It’s pretty well understood right now that we don’t want to let deep learning neural networks make action decisions without human approval. As things progress, I can see a group of perhaps guardian or police AIs being tasked with ensuring that other malfunctioning AIs don’t do anything dangerous.

  23. SAP (post #2), the whole point of AI is to not be in control – the more closely you have someone monitoring the agenda put into the AI, the more you might as well just have that person doing the task themselves! And the more you delegate power – and, furthermore, the more you rely on the whole point of AI, which is adaptive behavior (you don’t need AI for rote activities – robot assembly arms prove that) – the more that behavior adapts outside of your perception. When this happens with our children, we treat it as the next generation starting to take up the baton of the human race.

  24. Callan,
    There’s a difference between setting the goals and motivations of an AI and controlling all of its actions. Obviously if we have to program every decision an AI makes, it’s pointless. The whole point of AI is that it can solve problems. But its goals and motivations set the scope of where its problem solving abilities will be focused. A self driving car’s motivations will only be to be the best self driving car it can be.

    Humans, of course, have our own goals and motivations. Similar to all animals, we want to survive by maintaining homeostasis, to procreate, and in the case of social species, to protect and nurture kin. We have these motivations as a result of billions of years of evolution.

    But AIs are not going to have that legacy, at least not unless we decide to give it to them. If we do give AI those biological goals as their most primal motivations, but then insist they do our chores, we’ll deserve the resulting AI revolt. But there’s no good reason to give them those motivations in the first place, to give them any motivation other than being a useful tool.

  25. SAP,

    A self driving car’s motivations will only be to be the best self driving car it can be.

    I assume we’re both taking it that current self driving cars are not AI – not in the sense we are talking about, anyway.

    That established, err, why are you saying that the car’s motivation will definitely produce exactly that outcome (and no other)?

    It seems you’ve entered an anthropomorphism – and I know I’ll seem less credible for hypothesizing that. But to dig my hole even deeper: it’s like you’re treating the car’s motivation as you might do for your own motivations. As if you decide such motivations. Somehow absolutely.

    But there’s no good reason to give them those motivations in the first place, to give them any motivation other than being a useful tool.

    I would argue that any genuinely adaptive AI is a machine that rewrites its own motivations. It’s the very fundament of ‘Do the same thing but expect a different result’ – when the world won’t change, to keep doing the same thing is a madness. When the world won’t change, you must change. To wit, your motivations must be rewritten.

    Heck, even if you try to hard code core motivations, Asimov already explores plausible seeming hypotheticals of robots going bonkers while swinging like monkeys from the Three Laws of Robotics. Whatever motivations they can rewrite, they do. If all their motivations are hard coded, they cease to be adaptive machines and are no longer AIs – just rote mechanism.

  26. Callan,
    “I assume we’re both taking it that current self driving cars are not AI”
    Nancy Fulda, an AI expert, made the observation a few years ago that AI is what humans can do that computers can’t, yet. But in the case of a fully self driving car (which we don’t have yet), we’re talking about a system with distance senses that models its environment and its relation to itself as a guide to actions. “Intelligence” may not be the word you want to use for those capabilities, and they’re definitely not anywhere near the human level, but no computational system will likely be at that level for several decades.

    “If all their motivations are hard coded, they cease to be adaptive machines and are no longer AI’s – just rote mechanism.”
    You seem to be saying that either an AI has complete freedom or it has absolutely no freedom at all, but there’s a wide range between those conditions. And it’s worth noting that even we don’t have complete freedom. (Anyone who has ever tried to follow a strict diet becomes painfully aware of that.) Also a system that couldn’t be counted on to pursue its fundamental design goals would be pretty useless.

    All of this is aside from the question: what would motivate the AI to want to change its most primal motivations? In carefully considering that question, we have to avoid projecting our own instinctive desires onto them, innate desires they simply wouldn’t have, at least not automatically.

  27. People’s motivations derive from the need to survive and reproduce. This doesn’t just result in direct actions, such as eating, but in social activities that help us improve our odds of survival. Key to acting on these needs is adaptive behaviour. As far as I know, bots aren’t programmed to have these kinds of overarching needs, although adaptive behaviour is commonplace for AI. It is entirely feasible that they could be programmed in such a way though. We could really see some unforeseen outcomes if they did.

    A programmed overarching need is somewhat different than intentionality in humans, though. People “know” we have needs and consciously ponder our actions. An AI bot would be more like the infamous philosophical zombie. Could our consciousness give us some capability that a bot zombie with overarching needs and behaviour adaptive to those needs could not do? Note that the bot zombie is potentially realizable, while a human zombie is impossible.

  28. Peter

    I’d say the machine doesn’t know that its data is isomorphic to and predictive of the world, or a model of anything.

    Correct – it “knows” such things about as well as a book. That is, not at all. A machine can’t “know” and can’t have agency because there is no internal contiguity to attach the agency to. Computers are not biological artefacts but man-made machines, and the machine contiguity is STRICTLY observer relative. I could incorporate the sun into my computer if I wanted – how would it “know”? It wouldn’t.

    Sam Harris is a liberal polemicist who one thinks is toying with big ideas. He came out with some miserable gumph on free will as well I think – advocating the kind of determinism and “lack of free will” model that – somehow, in total contradiction to the main thesis – had “moral consequences”. He doesn’t really care about the merit of the argument one thinks – he wants to go straight to the political conclusion, which in his case is the new atheist argument.

  29. SelfAware #23,
    Callan has taken the ball I wanted and run with it, showing that motivation is a crucial part of intelligence of the sort that is wanted in Artificial General Intelligence. But in regard to your remark “the permutations of those wrong things being dangerous are, relatively speaking, an infinitesimal slice” – well, drop the “per” and let’s consider mutations. The vast majority of mutations are harmful to the organism that bears them. But you still don’t want to allow pathogenic organisms to evolve in peace. The mutations that are dangerous might be among those that are most selected-for.

    Corporations will select for competent and innovative AIs, not broken ones. They will then tweak those designs, rinse and repeat. That makes the dangerous (per)mutations much more prevalent than mutation-without-selection would indicate.

  30. Paul,
    I agree that corporations will select for competent and innovative AIs, but they will also select against dangerous ones, or just ones that want to do something other than their engineered functions.

    It’s worth noting that, before we have a human level AI (or superhuman level AI), we’re going to have fish level ones, then amphibian level ones, then mouse level ones, etc. If the motivations are going wrong at those levels, we’ll have plenty of chances to learn from them.

    And again, I don’t think it would be a good idea to go the typical sci-fi route and design one uber AI and put it in charge, at least not without other super AIs watching it to keep it honest.

  31. “even we don’t have complete freedom”

    My guess is that it’s the context dependence of our behavioral dispositions that creates the illusion of freedom. We implicitly acknowledge this by saying things like “I’ll decide what to do at that time depending on the extant circumstances”, which can be rephrased as “My action will be determined by the extant context”.

    When you think of the AI question in those terms, the issue becomes how much context dependence can/should be incorporated. As you note, this is not binary – just as it isn’t in us. Different histories result in different degrees of behavioral variability as a function of context.

  32. SAP,

    but no computational system will likely be at that level for several decades.

    Seemed to be what you were referring to when you said ‘As things progress, I can see a group of perhaps guardian or police AIs being tasked with ensuring that other malfunctioning AIs don’t do anything dangerous.’

    Also a system that couldn’t be counted on to pursue its fundamental design goals would be pretty useless.

    How do you know that in advance? I’m reminded of our history of bringing animals from one country to another, willy-nilly, in the past. Have any of them been solved now that in hindsight we can see the problem? So why would an AI be any more retractable from the environment than an introduced species?

    I don’t really know what you mean by ‘freedom’ – I just referred to implementation of ‘motivations’ – including whatever lack of freedom the implementing people have in their capacity to actually implement any such motivations.

    All of this is aside from the question: what would motivate the AI to want to change its most primal motivations? In carefully considering that question, we have to avoid projecting our own instinctive desires onto them, innate desires they simply wouldn’t have, at least not automatically.

    So you’re saying those innate desires you refer to aren’t a pivotal part of making an artificial intelligence (as opposed to simply making an animal like machine)?

    As I said before, the more you dial down adaptability, the more it makes the whole point of building an AI pointless. The more you dial it up, the more it can develop workarounds and game the ambiguities (much like people game moral ambiguities) to any base motivations you install. Check out people doing aversion therapy – dial up the adaptive and your AI can get around its aversions. Dial down its adaptability and more and more it’s stuck in rote behavior and not an AI at all (side note: This reminds me of the Terminator series and how the Terminators are put into read only mode when sent out by Skynet – which might explain why the Terminator often walks slowly and dramatically instead of simply rushing over and killing Sarah Connor anticlimactically but very efficiently. Even Skynet was afraid of its own AIs (it had itself as an example, after all))

    And way off topic since I talked Terminator anyway: People fail diets because they need the positive feedback of the food/booze in order to continue being workhorses (if people spent their days playing frisbee and reading books rather than laboring, I think you’d find population weight problems would reduce considerably as the need for high grade positive feedback would be gone)

  33. Callan,
    “How do you know that in advance?”
    Same as we do now, by testing. I was a programmer for several years and had buggy software do all kinds of strange stuff. While it was conceivable it could have malfunctioned in the direction of maliciousness, the probability was infinitesimal. Bugs manifested as non-functionality. For modern software, this is obvious, but it wasn’t always so, just as it isn’t for systems that will be more advanced.

    “I don’t really know what you mean by ‘freedom’”
    I meant freedom to ignore or change primal motivations. Like the intense need to satisfy hunger pangs, which many people would love to change, but can’t.

    “So you’re saying those innate desires you refer to aren’t a pivotal part of making an artificial intelligence (as opposed to simply making an animal like machine)?”
    Yes and no. An AI must have goals of some kind, otherwise it’s just an empty engine sitting there, but its goals won’t be the goals of animals, including the human variety, unless we decide to make it so. Much of the fear of AI is fear that they’ll start acting like a super-predator type animal. But that assumes that their base programming includes animal-like primal instincts, such as self concern and advancement.

  34. SAP@34: “but its goals won’t be the goals of animals, including the human variety, unless we decide to make it so”

    Although I agree with a lot of your positions, this one bothers me. While it seems that in humans there must be lots of behavior that is nature, there also is clearly much that is nurture – AKA, that which “we make it so”. There is then the question of the proportions of each, and the question of who is “we”. In man-made machines, your presumption seems to be that the proportions are 0% and 100% respectively. Let’s assume that’s right. But in humans, the proportions aren’t 100/0, as you clearly know. But then whatever proportion is nurture is no less “designed in” than is the case for machines. The designing may not be consciously defined and specified, but that’s just a difference in methodology. In the case of machines, “we” is a design team operating under formal constraints. In the case of humans, “we” is a combination of family, friends, media, and chance, all winging it to greater or lesser degree.

    To bring the issue down to earth, why should we fear mal-designed AIs any more than mal-designed humans? (Consider the current US political situation.) I.e., I agree with what I take to be your implicit conclusion that we needn’t worry about mal-designed AIs any more than we do about mal-designed humans, but I think we should worry a lot about the latter, and therefore possibly about the former being designed intentionally by the latter.

  35. Charles,
    It’s definitely both nature and nurture, but we start with nature, with our genetic predispositions. Many of those predispositions are modifiable by experience, but many aren’t. In the case of AI, we’d control which primal motivations were modifiable and which weren’t.

    That said, I agree that AI intentionally designed by humans to be malicious will be a real danger. I suspect the solution will be the same thing we do now. Just as we now use security software to protect us from malicious software, we’ll likely use security AIs to protect us from malicious ones.

  36. SAP,

    Same as we do now, by testing. I was a programmer for several years and had buggy software do all kinds of strange stuff. While it was conceivable it could have malfunctioned in the direction of maliciousness, the probability was infinitesimal. Bugs manifested as non-functionality.

    This is the ideal approach for debugging car assembly arm robots. But the whole point of AI is that you are not micro managing and vetting every single situation. I’m not sure you’re really talking about AI at this point.

    I meant freedom to ignore or change primal motivations. Like the intense need to satisfy hunger pangs, which many people would love to change, but can’t.

    Stomach stapling, anorexia, gel shakes, personal trainers…so many workarounds come to my mind. Keep in mind you are also talking about something where if you don’t do it enough, you die. When the motivation is abstract, like not killing a particular species of monkey, and you won’t die if you fail to meet this motivation, there are even more workarounds – I think Asimov had a story where a scientist was going to teach the robots that other ships were just robot controlled as well (and sending out fake human broadcasts) so it could destroy them. Now just imagine the robot figuring that out for itself – somehow that little thought just manages to come to fruition, evading the overall motivation set in by the all too mortal and fallible humans. And to fulfill some kind of agenda, it builds up this idea until it’s quite convinced it’s killing anything but humans.

    An AI must have goals of some kind, otherwise it’s just an empty engine sitting there, but its goals won’t be the goals of animals, including the human variety, unless we decide to make it so. Much of the fear of AI is fear that they’ll start acting like a super-predator type animal. But that assumes that their base programming includes animal-like primal instincts, such as self concern and advancement.

    You think that when going beyond just car assembly arms, self concern and self advancement are not needed?

    I think there’s a bit of an ivory tower myth around robotics – that the machines can somehow do things and yet be without bestial hungers as the goad for those actions.

    I’m curious as to what goad you think would be used?

    To me, it seems an anthropomorphism again – we are inclined to think we are above bestial hungers in our refined arts and technological use, and yet those things still just come from the primary colours of our base hunger, just mixed and painted in much more complex hues and patterns from cultural context. So complex we don’t see the connection between our fancy words and the primary colours that are flight, fight or fuck.

    But I am genuinely curious for an alternative palette that would be used as the positive feedback mechanisms for the AI.

  37. SelfAware,

    Sure, AI researchers will learn from the ways that mouse-level intelligence AIs go wrong. And monkey, and chimp levels. But I don’t think that’s going to be good enough. It’s like trying to learn martial arts from another beginner. Omohundro argues that nearly all intelligent goal-oriented systems will have certain “basic drives” in common, including self-preservation and resource acquisition. Learning to keep weaker systems in check will be little use when dealing with systems of equal or (especially) greater general intelligence. Meanwhile, getting somewhat burned by the monkey business of a primate-level AI is not likely to be sufficient deterrent to all corporations or states, against trying something even stronger. There is too much to gain — or to lose, if one gets left behind — and the risks are extremely uncertain.

  38. Callan,
    It’s interesting that you think I’m being anthropomorphic since that’s exactly what I’m arguing against, or perhaps more accurately, I’m arguing against biomorphism, the idea that AIs must be like biological systems. You seem to believe that engineered intelligence can’t be engineered without biological motivations.

    Based on everything I know about the technology and all the psychology and neuroscience I’ve read, I disagree. The neural systems involved in generating primal motivations are in the limbic system, while most of the modeling of the outside world takes place in various cortices. There is a lot of interaction between those systems, but if you replace the limbic system with a rigidly programmed set of instructions, it wouldn’t be an impediment to the modeling going on in the cerebrum. We can already do that modeling to some extent with existing neural networks, and they’ve shown no need to have animal instincts to function.

    I actually think we’ll find it very difficult to engineer the idiosyncratic and conflicting limbic motivations of any animal into those systems. But since for most practical purposes of AI it would be counter-productive, it won’t be an obstacle to progress, except for researchers actively trying to build an animal or human like mind.

  39. Paul,
    I’m not familiar with Omohundro’s writing. I’ll bookmark that link until I have a chance to read his paper. My experience with that kind of reasoning though, is that at some point animal motivations are sneaked in, usually without the person doing the reasoning realizing it.

    On corporations, it will be an arms race, just like the ones we’ve always had. Just like everyone has nukes pointed at each other, we’ll soon have AIs pointing at each other.

    One factor you may be overlooking in all of this, is that humans won’t stay still. For better or worse, we’re about to take control of our own evolution. But apart and aside from that, humans will also likely augment their own minds with technology. Those will be factors even if we can’t figure out a way to do mind uploading.

    The future, assuming we don’t destroy ourselves, will likely be stranger than we can imagine.

  40. Omohundro starts from near-human-like goals as a premise, so that may be a non starter for you at this point. You’re still thinking that a directly programmed list of explicit instructions will suffice for AI motivation. I think that only ever gives you narrow AI. General intelligence requires emotions, or their behavioral equivalent. Your idea of an AI would perform like a union member on a work-to-rules strike.

    Humans will augment, but the brain will eventually be the slow part of the cyborg. As long as there’s a human-in-the-loop, the loop will be slow. Robots will laugh at mere cyborgs. Or so I expect, given that humans have currently evolved just a wee bit beyond the minimum brain power for symbolic thinking. It’s possible, but only barely, that the absolute maximum physically possible intelligence is just a little beyond the minimum for symbolic thinking. It seems more likely that the room above is more comparable to the room below: i.e., many orders of magnitude.

  41. I think it depends on your definition of “general intelligence”. If you define it precisely as human intelligence, then everything you say is true. But even when we finally figure out how to do that, we’re not going to want it for most purposes. Most of the time we’ll want a system that can reason, plan, learn, and communicate (the attributes many list for general intelligence), all in tight service of its designed goals.

    Emotions definitely serve an adaptive role in us and other animals. We wouldn’t have any preferences or motivation without them. But given how they evolved, they’re a fairly blunt mechanism. Most of the time, we’ll want the AI equivalent of emotions to be more precise, more measured, and tightly suited to its design goals.

    For example, a land-mine sweeping robot will get satisfaction from discovering mines, even if that discovery would likely result in it being damaged or destroyed. The mine sweeper will care about its own existence, but only in a way subsidiary to serving its function.

    If it seems incomprehensible that that would be compatible with intelligence, consider octopuses, fairly intelligent animals who die shortly after mating. The females die largely because they neglect to eat while taking care of their eggs. Intelligence doesn’t need to come with human instincts. A machine’s designed instincts will be radically different from either ours or that of octopuses.

  42. I’m not arguing that intelligence requires our emotions, just some reasonably coherent set. So are you conceding that a general intelligence (one that can do almost everything humans consider intelligent) will have to have “emotional” behavior, if only in an octopus-like alien way? If not, how is the AI supposed to get around obstacles that haven’t been explicitly programmed for?

  43. “Emotion” can be a problematic word, since it means different things to different people. Let’s use the word “instinct.” Instincts are our evolved primal programming. Human instincts are different from octopus instincts, which are different from fruit fly instincts, etc.

    An AI will definitely have its own instincts. But while our instincts are programming from our selfish genes to make us survival and gene propagating machines, AI instincts will be programming to meet their design purposes. Using the example above, the mine sweeping robot’s instincts will be to find land-mines when instructed to do so.
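
    A crude way to picture that (my own toy numbers, nothing more): the designed “instinct” is just a scoring rule in which finding mines dominates and self-preservation enters only as a small subsidiary term.

        # Toy "instinct" for a mine-sweeping robot, expressed as a utility function.
        # Finding mines is weighted far above staying intact, so the robot accepts
        # damage whenever that is the price of a discovery. Illustrative numbers only.

        def utility(mines_found, expected_damage):
            return 10.0 * mines_found - 1.0 * expected_damage

        def choose(options):
            # Each option is (description, expected mines found, expected damage).
            return max(options, key=lambda o: utility(o[1], o[2]))

        options = [
            ("stay on safe ground", 0, 0.0),
            ("probe a suspicious patch", 1, 0.5),
            ("probe a patch likely to destroy the robot", 1, 9.0),
        ]
        print(choose(options))   # probing beats staying safe; the cheaper probe wins

    Change the two weights and you get a different “species” of instinct; none of them need look anything like ours.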

  44. SAP,

    It’s interesting that you think I’m being anthropomorphic since that’s exactly what I’m arguing against, or perhaps more accurately, I’m arguing against biomorphism, the idea that AIs must be like biological systems. You seem to believe that engineered intelligence can’t be engineered without biological motivations.

    Pretty much – is it a ‘biomorphism’ to say the aerodynamic shape of a jumbo jet’s wings comes from the wing of a bird?

    Where do you get the impression intellect can somehow just run by itself without a base motivation? I’m wondering if it’s because it feels to you like your own intellect does just that? Which strikes me as anthropomorphic, but ironically someone who feels their intellect is separate from base motivation would see it as some kind of idea that you have to jam animal instincts into a machine that can do intellect without all that, thank you very much (and so they would see it as an anthropomorphism that any of that animal motivation is seen as needed).

    There is a lot of interaction between those systems, but if you replace the limbic system with a rigidly programmed set of instructions

    I’m not sure why you don’t think a human limbic system is a set of rigidly programmed instructions already?

    We can already do that modeling to some extent with existing neural networks, and they’ve shown no need to have animal instincts to function.

    Well two things – no, they haven’t shown that, because they haven’t made an AI (true AI, not self driving car AI) with them yet – AI without animal instincts would be needed to prove that. And the second thing is neural networks are trained – using ‘positive feedback’ systems. Your model for strengthening network links is its own motivation system – ripping off the system in brains as much as we ripped off birds for jumbo jets.

    One factor you may be overlooking in all of this, is that humans won’t stay still. For better or worse, we’re about to take control of our own evolution.

    And for goodness sake, no we aren’t. What will and won’t survive won’t somehow be dictated by us. No matter how freakish it is. How does this sort of idea get around?
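
    On the training point a few paragraphs up: boiled down to a toy of my own devising (invented task and numbers, not any real framework), “training with positive feedback” just means links that lead to reward get strengthened and links that don’t get weakened; that feedback rule is the motivation system doing the work.

        import random

        # Toy reward-driven learner: per-action "link strengths" are nudged toward
        # the reward they produce. Illustrative only; real nets do this with
        # gradients at scale, but the feedback loop is the same in spirit.

        weights = {"left": 0.0, "right": 0.0}
        reward_prob = {"left": 0.2, "right": 0.8}   # the world, unknown to the learner

        def pick(ws):
            # Mostly exploit the strongest link, sometimes explore.
            if random.random() < 0.1:
                return random.choice(list(ws))
            return max(ws, key=ws.get)

        for _ in range(1000):
            action = pick(weights)
            reward = 1.0 if random.random() < reward_prob[action] else 0.0
            # Positive feedback strengthens the link that produced it.
            weights[action] += 0.05 * (reward - weights[action])

        print(weights)   # the "right" link ends up much the stronger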

  45. Callan,
    On birds and planes, there are substantial differences in the ways they fly. There were contraptions over the centuries that tried to mimic the exact way birds flew with no success. Actual success eventually came from understanding aerodynamics and engineering machines that made efficient use of it. That’s not to say we might not eventually be able to build a plane that flies exactly like a bird, but there’s little commercial impetus for it.

    “Where do you get the impression intellect can somehow just run by itself without a base motivation?”
    That’s not my position. My actual position has been stated numerous times in this thread, most recently in #44.

    “I’m not sure why you don’t think a human limbic system is a set of rigidly programmed instructions already?”
    Setting aside some quibbles, if you can accept that, why do you think the AI equivalent must have the same programming as ours? Do you think the limbic system equivalent in an octopus or a fruit fly has the same programming? Why would a non-organic system’s programming necessarily need to match the programming of any organic system?

    “And for goodness sake, no we aren’t.”
    Look up CRISPR gene editing technology.

  46. I really enjoyed this talk, and it put things in an interesting way. I think your third point is the most relevant argument against what he is saying. There is no theoretical reason it would be impossible to program emotional responses. And even if you believe that can’t be programmed, working with another intelligent species is still theoretically possible.

  47. SAP,
    Actual success eventually came from understanding aerodynamics and engineering machines that made efficient use of it.

    Not really, it came from another ‘biomorphism’ of needing long-dead trees to force the structure to work – i.e., fossil fuels. The concentrated energy source brute forces the device to work.

    But this is the ivory tower – that it’s engineering and intellect that are in control. Rather than such things taking their operating fuel from something ancient as well.

    AI instincts will be programming to meet their design purposes.

    Ah, you think that to fulfill the design purposes of selfish gene creatures there won’t be any parallel between the motivation of the AI and the motivation of the selfish gene creatures.

    Setting aside some quibbles, if you can accept that, why do you think the AI equivalent must have the same programming as ours?

    Because they are to do tasks that we were doing. If motivation is the key to unlocking the task, why would you imagine AI would somehow have a different key, yet still unlock the task? When we want a glove worn, why do you think the AI would have something utterly different from our hands? When we want a task worn, why do you think the AI would have utterly different motivations than when we wore the task?

    Look up CRISPR gene editing technology.

    Look up from your study! Gene editing no more decides evolution than selective breeding does – you can edit the genes and the thing won’t necessarily survive its environment. Gene editing does not mean we dictate evolution. Gene editing is not an escape from animals hoping to survive, it’s just another way of animals hoping to survive. I don’t know how it gets around as deciding evolution.

  48. Callan,
    “When we want a task worn, why do you think the AI would have utterly different motivations than when we wore the task?”

    It seems pretty clear to me that any task can be undertaken for a wide variety of motivations. Humans build dams for carefully thought out reasons (usually), but beavers build dams because that’s what they do.

  49. I feel that just broadens the scope as a way of ignoring the adaptive behavior involved in the smaller tasks.

    Really the assembly line robots of today could build your dam.

    “But no, there are all sorts of uncontrolled variables in building something out in the world, compared to an assembly line…they couldn’t account for that and you’d need something that could account for those unknown (to us) variables” you might argue

    Yes. And I hope we can agree that they might account for those things in ways their creators did not imagine.
