Meh-bots

Do robots care? Aeon has an edited version of the inaugural Margaret Boden Lecture, delivered by Boden herself. You can see the full lecture above. Among other things, she tells us that the robots are not going to take over because they don’t care. No computer has actual motives, the way human beings do, and computers are indifferent to what happens (if we can even speak of indifference in a case where no desire or aversion is possible).

No doubt Boden is right; it’s surely true at least that no current computer has anything that’s really the same as human motivation. For me, though, she doesn’t provide a convincing account of why human motives are special and why computers can’t have them, and perhaps doesn’t sufficiently engage with the possibility that robots might take over the world (or at least do various bad out-of-control things) without having human motives, or caring what happens in the fullest sense. We know already that learning systems given goals by humans are prone to finding cheats or expedients never envisaged by the people who set up the task. While it seems a bit of a stretch to suppose that a supercomputer might enslave all humanity in pursuit of its goal of filling the world with paperclips (about which, however, it doesn’t really care), it seems quite possible that real systems might do some dangerous things. Might a self-driving car (have things gone a bit quiet on that front, by the way?) decide that its built-in goal of not colliding with other vehicles can be pursued effectively by forcing everyone else off the road?

What is the ultimate source of human motivation? There are two plausible candidates that Boden doesn’t mention. One is qualia; I think John Searle might say, for example, that it’s things like the quale of hunger, how hungriness really feels, that are the roots of human desire. That nicely explains why computers can’t have motives, but for me the old dilemma looms. If qualia are part of the causal account, then they must be naturalisable and in principle available to machines. If they aren’t part of the causal story, how do they influence human behaviour?

Less philosophically, many people would trace human motives to the evolutionary imperatives of survival and reproduction. There must be some truth in that, but isn’t there also something special about human motivation, something detached from the struggle to live?

Boden seems to rest largely on social factors, which computers, as non-social beings, cannot share in. No doubt social factors are highly important in shaping and transmitting motivation, but what about Baby Crusoe, who somehow grew up with no social contact? His mental state may be odd, but would we say he has no more motives than a computer? Then again, why can’t computers be social, either by interacting with each other, or by joining in human society? It seems they might talk to human beings, and if we disallow that as not really social, we are in clear danger of begging the question.

For me the special, detached quality of human motivation arises from our capacity to imagine and foresee. We can randomly or speculatively envisage future states, decide we like or detest them, and plot a course accordingly, coming up with motives that don’t grow out of current circumstances. That capacity depends on the intentionality or aboutness of consciousness, which computers entirely lack – at least for now.

But that isn’t quite what Boden is talking about, I think; she means something in our emotional nature. That – human emotions – is a deep and difficult matter on which much might be said; but at the moment I can’t really be bothered…


21 thoughts on “Meh-bots”

  1. It strikes me as a little ironic to seek salvation in (robots) not caring when it’s exactly not caring that has brought us climate change, the biodiversity crisis, and Donald Trump…

    But enough with the moralizing. I agree with you about the ability to project into the future being a central component of human motivation—I think Rilke said it best: “The future enters into us, in order to transform itself in us, long before it happens.”

  2. Hmmm…

    No doubt Boden is right; it’s surely true at least that no current computer has anything that’s really the same as human motivation.

    No doubt? Surely? I would argue that a Roomba’s desire to attach to a power outlet when its power runs low is exactly analogous to the human motivation of hunger. Similarly its desire to avoid cliff edges.

    While Boden clearly suffers from the inability to imagine that computers/AI may gain abilities they do not currently have, I’m glad to see Peter does not have that same shortcoming.

    I disagree with Peter when he suggests qualia may be the source of human motivation. While qualia certainly play a part in the causal story, that part is not the source. Instead, qualia are measurements used to compare the current state with the desired state. The qualia of hunger are the recognition of the discrepancy between the desired state of satiation and the current state. I suppose this discrepancy can be the source of a goal to attain food, but I’m guessing this indirect source is not what Peter meant when he said qualia might be the source of human motivation.

    Boden seems to rest largely on social factors, which computers, as non-social beings, cannot share in.

    Again, the confusion between how current AI works as opposed to how AI could work. Even now there are AIs learning how to cooperate with each other. The next phase in the game of “OK, maybe computers can do this better than humans, but they can never do that better” seems to be AIs playing as a team to beat human teams at Dota 2 (Defense of the Ancients; I had to look it up). Never say never.

    That capacity [to imagine and foresee] depends on the intentionality or aboutness of consciousness, which computers entirely lack – at least for now.

    The capacity to imagine and foresee does depend on the intentionality of consciousness, but it also requires something in addition. Computers already have the intentionality of consciousness. What they lack is the ability to think in terms of causality. As Boden mentions, current AI pretty much relies on statistical, possibly Bayesian, methods. As Judea Pearl points out in his Book of Why, statistical methods are not enough to ask counterfactuals, i.e., to imagine what would happen if I did x instead of y. But there’s no reason to think AI can’t get there.
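
    To make the gap concrete, here is a toy simulation in Python (a made-up confounded example; the variable names and numbers are purely illustrative, not anything from Pearl’s book): a purely statistical model sees a strong association between X and Y, but actually intervening on X changes nothing, because a hidden Z drives both.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Toy structural model: Z causes both X and Y; X has NO causal effect on Y.
    z = rng.normal(size=n)
    x = (z + rng.normal(size=n) > 0).astype(float)  # X depends on Z
    y = 2 * z + rng.normal(size=n)                  # Y depends only on Z

    # Statistical question: how does Y differ between observed X=1 and X=0?
    observed_assoc = y[x == 1].mean() - y[x == 0].mean()

    # Causal question: what happens to Y if we *set* X ourselves, i.e. do(X)?
    # Setting X cuts the Z -> X arrow; since Y never depended on X, nothing changes.
    y_do_1 = 2 * z + rng.normal(size=n)
    y_do_0 = 2 * z + rng.normal(size=n)
    intervention_effect = y_do_1.mean() - y_do_0.mean()

    print(f"observed association:   {observed_assoc:+.2f}")      # clearly nonzero
    print(f"effect of intervention: {intervention_effect:+.2f}")  # roughly zero
    ```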

    *

  3. “The question of whether Machines Can Think… is about as relevant as the question of whether Submarines Can Swim.” –Edsger Dijkstra

    I think that by defining feelings narrowly such that only social and biological beings can have them, Boden risks turning the question of robot feelings into a can-submarines-swim question. Even if we don’t call robots’ intelligent flexible promotion of goal-states “motivation”, that submarine can still sink your boat – or even your civilization, maybe.

    Not that I think robots will take over the world directly. More plausibly, corporations could take over the world (they’re well on the way) while employing more and more robots. Eventually they might kick away the human ladder that they climbed to get to the top.

  4. James of Seattle:

    I would argue that a Roomba’s desire to attach to a power outlet when its power runs low is exactly analogous to the human motivation of hunger. Similarly its desire to avoid cliff edges.

    Is it, though? I mean, one could say that a rock tumbling down a hill has a “desire” to reach the bottom that is analogous to human desire, but that don’t make it so. The rock doesn’t really have desires; it merely follows physical law and its interactions with the environment (as does everything, of course).

    Or take a thermostat: does it desire to raise the temperature, if it falls below a certain point? It seems obvious to me (but maybe you want to disagree) that it doesn’t: it’s merely a simple control circuit. It’s not ultimately different from the stone in that sense: interaction with the environment triggers a state change, and that’s physically hardwired into the heating system. Current flows, the room heats up.
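
    For what it’s worth, the whole “inner life” of such a device fits in a few lines. A minimal Python sketch (hypothetical names, purely illustrative of the bare comparator-and-switch idea):

    ```python
    def thermostat_step(current_temp: float, setpoint: float, heater_on: bool) -> bool:
        """Return the new heater state: one comparison, no goals, no wanting."""
        hysteresis = 0.5  # dead band so the heater doesn't flicker around the setpoint
        if current_temp < setpoint - hysteresis:
            return True   # too cold: close the circuit, current flows
        if current_temp > setpoint + hysteresis:
            return False  # warm enough: open the circuit
        return heater_on  # in between: keep doing whatever it was doing
    ```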

    But the Roomba really is just an exceedingly clever arrangement of such systems. The environment and its own state flip switches; those switches trigger actuators and the like. It’s not different from your lamp giving off light once switched on, but does it really make sense to say that the lamp wants to provide illumination after receiving the appropriate stimulus (i.e. the switch being flipped)? The world prods it (for instance, in the guise of a voltage drop in its batteries), and that prodding triggers a reaction (a certain pattern of movements, determined by further proddings triggering additional reactions and the like). ‘Wanting’ enters into this at best metaphorically.

  5. That physicists and philosophers would never never again infer cause to be a singularity…

    Cause is duality and it’s effect just another duality…
    …this is the “motive” of entities in Ontology, and maybe helps ‘pure motives triples’ of Algebraic Geometry…

    Today the motivation to stay with and study one’s emotions could come from the idea…
    …”sensation, emotion and mentation are independent entities in oneself”…

  6. I agree with Boden that, without human or animal motivations, robots aren’t going to be interested in taking over the world or subverting their programmed goals. Their motivations are going to be what we programmed into them.

    Yes, those goals might cause unforeseen consequences, but that’s something every programmer already has to deal with. It’s why new software spends a lot of time in beta. (I will concede that imagination (scenario simulations) would complicate the picture, but testing systems would also increase in sophistication, likely with their own imaginations.)

    People often respond by talking about how much more intelligent than us the machine will be, and how capable it will be of subverting goals and safeguards. But this is forgetting the first point above, that they won’t have any motivation to get around those programmed goals.

    I do disagree though that machines *can’t* care. (In the Aeon piece, I don’t recall Boden saying this explicitly, but it does seem implied.) It’s more that we’d have to program them to care. It’s not going to happen automatically or accidentally. Consider that the device you’re reading this on has more processing power than the nervous system of a C. elegans worm. Has it shown any tendency to act like a worm? Why then would we assume that at a certain point systems would start acting like any particular vertebrate species?

    I don’t doubt that there will be some experimental systems with creature-like desires, but in most cases it’s not something that will generally be productive in automated products, so I doubt they’ll be common. Why build systems that desire anything other than what we want them to do?

  7. Pingback: Meh-bots – Health and Fitness Recipes

  8. Jochen, actually, after some thought, the rock rolling down the hill is analogous. But some things are more analogous than others.

    What all the examples share is a physically explainable tendency to reach a specified state. But each example will have different features, especially with regard to how the “specified state” was specified. The rock’s “intended state” is simply specified in the laws of physics. A whirlpool’s specified state is specified by physics and the rules of homeostasis. A bacterium’s specified state is specified by the above plus natural selection. A dog’s specified state of satiation is specified by all of the above plus a subsystem which triggers feeding behavior under the right circumstances. A human’s specified state of physical fitness is specified by all of the above plus a subsystem that can generate arbitrary goals (like a certain level of physical fitness) which can override the other subsystems. A thermostat’s specified state is specified by the arbitrary goal of a system like a human. A Roomba’s specified state of maintaining a certain charge level is likewise specified by a human’s arbitrary goal, but its behavior of finding and moving to an outlet represents a state specified by a goal (proximity to the charging device) which the Roomba generated internally.

    So I think Dennett’s “intentional stance” may not be so much about thinking that the rock’s action is kinda like having a human-style intention, but instead that all these things have a similar property, tending toward a specific state, and when we talk about those things in humans we talk about desires etc.

    *
    [and yes, I’m saying that desires in humans are merely parts of control circuits.]

  9. SelfAwarePatterns, you said “Their motivations are going to be what we programmed into them.”

    The problem is that soon we will program them to create their own abstract subgoals. For example, we might program an AI to make stock trades based on national news faster than the other AIs. Our AI might decide that one way to beat the other AIs is to shut down the power grid where those AIs are hosted.

    The point is, without direction, AIs don’t know that cheating is cheating. We need to include goals like avoiding breaking laws and avoiding damaging things, especially living things, and most especially living human beings.
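
    In current practice, one rough way to give that kind of direction is to price the prohibitions into the objective the system optimizes, for example as penalty terms on the reward. A minimal sketch (all names and weights are made up for illustration); note that anything left off the list is, as far as the optimizer is concerned, free:

    ```python
    # The agent only "knows" cheating is cheating to the extent we price it in.
    PENALTIES = {
        "broke_law":          100_000.0,
        "damaged_property":    10_000.0,
        "harmed_living":    1_000_000.0,
        "harmed_human":   100_000_000.0,
    }

    def shaped_reward(trading_profit: float, violations: dict[str, bool]) -> float:
        """Task reward minus a penalty for every constraint the action violated."""
        penalty = sum(PENALTIES[name] for name, hit in violations.items() if hit)
        return trading_profit - penalty

    # Shutting down a rival's power grid might look great on raw profit,
    # but should be priced out once the violations are counted:
    print(shaped_reward(50_000.0, {"broke_law": True, "damaged_property": True,
                                   "harmed_living": False, "harmed_human": False}))
    # prints -60000.0: the profit no longer covers the cost
    ```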

    *

  10. James,
    As I conceded above, imagination (which developing sub-goals requires) does complicate the picture. Imagination dramatically increases the repertoire of available actions for a system. Testing is going to be required.

    That said, it’s worth noting that a machine won’t suddenly have the full imaginative power of a human. Long before we have systems with that capability, we’ll have systems with the imaginative power of a fruit fly, later a fish, a mouse, etc. We’ll have plenty of time to work out testing scenarios, using testing systems that will themselves be getting more sophisticated.

  11. SelfAware,
    I agree that “we’d have to program them to care” for machines to care. But that seems pretty likely, as long as you have a functionalist definition of “care”. After all, there’s only been one proven way so far to get the highly flexible, goal-promoting behavior that many engineers want. It’s the way natural selection hit upon, and it definitely involves caring.

  12. Paul,
    It may depend on what you mean by a “functionalist definition of ‘care'”. If you’re saying that we’ll only get the desired behavior if we give them the same motivations as evolved systems, my response is that natural selection produces genetic survival and propagation systems. Our cares are tuned to that selection pressure. But I can’t see any reason why an engineered machine would need to be encumbered by those types of concerns. Once only evolved systems could calculate ballistics tables, play chess, or recognize faces, but today all of these things can be done by engineered systems without any biological cares.

    (Of course, if we build self replicating machines, and leave them alone for long enough, I’m sure natural selection would kick in and eventually produce systems with cares similar to organic ones.)

  13. SelfAware,
    All I mean by “functionalist care” is that a system represents goal states, more-or-less explicitly. The system might have the ability to evolve its goal states over time, for example through a reward system, or (some) goals might be (relatively) hard-wired in. But either way, the system can compare goals to actual or prospective (imagined future action-consequence) conditions. And then it will be drawn toward the more goal-fulfilling options. That is the one and (so far) only way we know to get intelligent flexible behavior.
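
    A minimal sketch of that idea in Python (hypothetical names and numbers; just an illustration of explicit goals, imagined outcomes, and a comparison that picks the most goal-fulfilling option):

    ```python
    # Explicit goal states, weighted by how much each one "matters".
    GOALS = {"battery": 0.8, "tidiness": 1.0}
    WEIGHTS = {"battery": 2.0, "tidiness": 1.0}

    def dissatisfaction(state: dict[str, float]) -> float:
        """Weighted distance between a (predicted) state and the goal state."""
        return sum(WEIGHTS[k] * abs(GOALS[k] - state.get(k, 0.0)) for k in GOALS)

    def choose(current: dict[str, float], actions: dict[str, dict[str, float]]) -> str:
        """Imagine the outcome of each action and pick the most goal-fulfilling one."""
        predicted = {name: {**current, **effects} for name, effects in actions.items()}
        return min(predicted, key=lambda name: dissatisfaction(predicted[name]))

    print(choose({"battery": 0.2, "tidiness": 0.6},
                 {"dock_and_charge": {"battery": 0.9},
                  "keep_cleaning":   {"tidiness": 0.9, "battery": 0.05}}))
    # prints "dock_and_charge": all goals are weighed at once, not one at a time
    ```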

    Nothing about this implies that the system has to explicitly value survival and reproduction. But, by the way, most humans don’t value reproduction per se, as seen by their attitudes toward contraceptive use. And as for survival, I think we will want our machines to take care of themselves, although that will sometimes be a lower priority than other goals. Asimov’s 3 laws of robotics come to mind here.

  14. On second thought, I want to add another condition for “caring”. As well as explicit goals, the system needs to be able to compare all its goals in a big-picture way, not just follow one particular goal in one particular situation and ignore the rest. So the sphex wasp, which drags its prey to the nest opening and goes to inspect the nest, over and over again every time the human experimenter moves the prey a few inches, doesn’t care. At that rate, its young will never get fed. By contrast, mammals typically care about their goals: they often weigh them against each other. This “big picture comparison” idea may be fuzzy, and may leave it indeterminate whether, say, reptiles care – but that’s OK.

  15. Paul #13 and #14,
    On humans not valuing reproduction, this is an important distinction that I often omit because explicitly saying it every time would make discussion tedious and legalistic.

    Our instinctive impulses arose for adaptive evolutionary reasons, for reasons that enhance our genetic legacy. But those aren’t our personal reasons. For us, the reasons are simply the impulses themselves. The evolutionary reason we enjoy sex is because it encourages reproduction. But we personally want to have sex because we enjoy it. Birth control allows us to satisfy the impulse while frustrating the evolutionary reason. (And yes, we could go off on a tangent about whether the word “reason” is appropriate here. I’m using it in a teleonomic rather than teleological sense.)

    We probably will want machines to take care of themselves, but not at the cost of protecting humans. So a self-driving car would have a much stronger impulse to sacrifice itself to avoid harming humans than to save itself. And a minesweeper would value finding a mine far above preserving its own existence.

    I actually think imagination (the scenario simulations you were discussing) arose in large part to deal with conflicting instinctive impulses. When those impulses are not in conflict, imagination doesn’t need to get invoked, and we often act “without thought.” Imagination allows us to simulate various action scenarios to choose the actions that most satisfy our impulses. I think it’s the foundation of volition.

  16. I don’t say this very often, but this is a deeply ignorant (and DANGEROUS) argument and should be dismissed outright.

    There is an enormous mass of literature on how to properly program value functions for our nascent AIs (which can beat experts at Dota now, by the way). We may never be able to figure out whether artificial minds have qualia or not, but it won’t matter one whit to whether they “care” about things or not (I suspect the two are linked, but that might be an incorrect intuition). As long as electrons “care” about moving towards a positive charge, our machines will care about the things we program them to care about.

  17. Selfaware,

    I really don’t think you have to program in a motivation to get around goals for it to happen. You could argue it’s statistically unlikely a workaround will happen. But a motivation doesn’t have to be programmed in…or otherwise we wouldn’t exist. It can just happen.

    Plus some of the ways neural networks solve problems have been pretty weird. Some of their chess or Go moves are just non-intuitive. Asimov made money writing stories of robots reading their goals in weird ways…it’s not a workaround of those goals, but if the weird way the goals are read causes the sorts of things we get up in arms about, does the distinction matter?

  18. Callan,

    “But a motivation doesn’t have to be programmed in…or otherwise we wouldn’t exist. It can just happen.”

    I think it’s important to remember that our motivations are the result of billions of years of natural selection, in other words, programming by nature. I did note above that if we engineered self replicating systems and left them alone, eventually we might get something similar to creature motivations. But evolution isn’t inevitable progress toward us, and we’re more likely to get systems very good at surviving but not necessarily highly intelligent ones.

    Unforeseen consequences can be a concern, but most of them will simply result in non-functional behavior. A system acting like Ultron or Skynet is an infinitesimal slice of the possible outcomes. I think we’re in much greater danger from what humans *intentionally* design these systems to do than from what they might decide to do on their own.

  19. SA,

    We don’t need to get to Skynet levels – it’s an issue when a self-driving car can’t identify an object and suddenly hands control over to its thoughtless driver, and then the lady on the bike with the weird baggage is run over and killed. Someone being killed is enough. Or a share-trading learning algorithm that tanks a company and costs people their jobs.

    Frankly it reminds me of how in the past people would bring all sorts of non-native species to a newly found continent. Then these creatures would go feral and kill off the native species that couldn’t cope with them. No one thought ahead on this, yet here we are introducing a new species into our own environment while thinking we are somehow in charge of it. And when we get feral AI that acts in ways we didn’t expect and humans get run over or such, what then? The introduced species of the past usually just killed other species, not our own kind.

    AI doesn’t have to adapt to get around the inhibitors we put in – it just has to fall through the cracks in the inhibitors, cracks that are there because we are stupid. Whether it’s a motivation to avoid programmed goals or simply falling through the cracks inherent in those goals (and it may be tautological to say that – consider how using a condom sidesteps evolution’s ‘goal’ in making sex feel good), it seems everyone is keen to let another invasive species into the environment. But these will actually target us – we make a big fuss if a shark kills someone and demand shark nets. Are there any self-driving car nets being put up?

  20. Callan,
    I don’t disagree with what you’re saying here, but I think now you’re talking about old fashioned bugs. Certainly those are going to be a problem, but they’re already a problem and have been for decades. It’s why most of us are cautious about self driving cars and things like autonomous weapons. These systems are going to have to prove themselves consistently before people are comfortable trusting them.

    But when evaluating these systems, I think we have to step back and put their flaws into perspective. Humans regularly make mistakes. Last year, about 40,000 Americans were killed in accidents involving human-operated cars. An automated system doesn’t have to have a perfect record, just one that is comparable to or better than a human’s.

    I’m really not talking about bugs, SAP – really the issue is that we don’t have a moral spectrum on this. It’s a bug when I write code where a while loop never terminates and locks up the program. What is it when the woman on the bike is killed? A bug. That’s like calling it a faux pas when someone farts and also a faux pas when someone stabs someone else. We apparently don’t have the language for describing programming errors (or the deployment of learning networks that are inadequate) when the stakes of such errors aren’t just a program stuck in a perpetual loop.

    Yes, humans regularly make mistakes…and we have a language of accountability for those humans. So when the guy died watching Harry Potter as his car drove itself into a truck it mistook for the skyline, or when the lady on the bike was carrying baggage that made her hard to identify as such…what language do we have there? Oh, ummm, well, uhh…

    People being killed and everyone shrugs, not a single word of accountability? Dead. Bodies. And nothing said. Can I go kill someone today…oh no, there’s a furor if I do so. This lack of response isn’t somehow normal; it’s a hole in our own intelligence. A bug.

    No, we clearly can’t cope with this invasive species. But because of the Dunning-Kruger effect we don’t have the skill to see that we don’t have the skill to cope with it.
