I finally saw Ex Machina, which everyone has been telling me is the first film about artificial intelligence you can take seriously. Competition in that area is not intense, of course: many films about robots and conscious computers are either deliberately absurd or treat the robot as simply another kind of monster. Even the ones that cast the robots as characters in a serious drama are essentially uninterested in their special nature and use them as another kind of human, or at best to make points about humanity. But yes: this one has a pretty good grasp of the issues about machine consciousness and even presents some of them quite well, up to and including Mary the Colour Scientist. (Spoilers follow.)

If you haven’t seen it (and I do recommend it), the core of the story is a series of conversations between Caleb, a bright but naive young coder, and Ava, a very female robot. Caleb has been told by Nathan, Ava’s billionaire genius creator, that these conversations are a sort of variant Turing Test. Of course in the original test the AI was a distant box of electronics: here she’s a very present and superficially accurate facsimile of a woman. (What Nathan has achieved with her brain is arguably overshadowed by the incredible engineering feat of the rest of her body. Her limbs achieve wonderful fluidity and power of movement, yet they are transparent and we can see that it’s all achieved with something not much bigger than a large electric cable. Her innards are so economical there’s room inside for elegant empty spaces and decorative lights. At one point Nathan is inevitably likened to God, but on anthropomorph engineering design he seems to leave the old man way behind.)

Why does she have gender? Caleb asks, and is told that without sex humans would never have evolved consciousness; it’s a key motive, and hell, it’s fun. In story terms, making Ava female perhaps alludes to the origin of the Turing Test in the Imitation Game, which was a rather camp pastime about pretending to be female played by Turing and his friends. There are many echoes and archetypes in the film: Bluebeard, Pygmalion, and Eros and Psyche, to name but three; all of them require that Ava be female. If I were a Jungian I’d make something of that.

There’s another overt plot reason, though; this isn’t really a test to determine whether Ava is conscious, it’s about whether she can seduce Caleb into helping her escape. Caleb is a naive girl-friendless orphan; she has been designed not just as a female but as a match for Caleb’s preferred porn models (as revealed in the search engine data Nathan uses as his personal research facility – he designed the search engine after all). What a refined young Caleb must be if his choice of porn revolves around girls with attractive faces (on second thoughts, let’s not go there).

We might suspect that this test is not really telling us about Ava, but about Caleb. That, however, is arguably true of the original Turing Test too.  No output from the machine can prove consciousness; the most brilliant ones might be the result of clever tricks and good luck. Equally, no output can prove the absence of consciousness. I’ve thought of entering the Loebner prize with Swearbot, which merely replies to all input with “Shut the fuck up” – this vividly resembles a human being of my acquaintance.

There is no doubt that the human brain is heavily biased in favour of recognising things as human. We see faces in random patterns and on machines; we talk to our cars and attribute attitudes to plants. No doubt this predisposition made sense when human beings were evolving. Back then, the chances of coming across anything that resembled a human being without it being one were low, and given that an unrecognised human might be a deadly foe or a rare mating opportunity the penalties for missing a real one far outweighed those for jumping at shadows or funny-shaped trees now and then.

Given all that, setting yourself the task of getting a lonely young human male romantically interested in something not strictly human is perhaps setting the bar a bit low. Naked shop-window dummies have pulled off this feat. If I did some reprogramming so that the standard utterance was a little dumb-blonde laugh followed by “Let’s have fun!” I think even Swearbot would be in with a chance.
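For what it’s worth, Swearbot really is as trivial as it sounds; here is a minimal sketch in Python (the function names and the giggle strings are my own invention, of course, not anything from a real Loebner entry):

```python
import random

def swearbot(_message: str) -> str:
    """Reply to absolutely any input with the same stock phrase."""
    return "Shut the fuck up."

def flirtbot(_message: str) -> str:
    """The hypothetical reprogrammed variant: a little laugh, then the invitation."""
    return random.choice(["Hee hee!", "Tee hee!"]) + " Let's have fun!"
```

Two functions, no state, no understanding; the point being that nothing about the output alone tells you which kind of thing produced it.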

I think the truth is that to have any confidence about an entity being conscious, we really need to know something about how it works. For human beings the necessary minimum is supplied by the fact that other people are constituted much the same way as I am and had similar origins, so even though I don’t know how I work, it’s reasonable to assume that they are similar. We can’t generally have that confidence with a machine, so we really need to know both roughly how it works and – bit of a stumper this – how consciousness works.

Ex Machina doesn’t have any real answers on this, and indeed doesn’t really seek to go much beyond the ground that’s already been explored. To expect more would probably be quite unreasonable; it means though, that things are necessarily left rather ambiguous.

It’s a shame in a way that Ava resembles a real woman so strongly. She wants to be free (why would an AI care, and why wouldn’t it fear the outside world as much as desire it?), she resents her powerlessness; she plans sensibly and even manipulatively and carries on quite normal conversations. I think there is some promising scope for a writer in the oddities that a genuinely conscious AI’s assumptions and reasoning would surely betray, but it’s rarely exploited; to be fair Ex Machina has the odd shot, notably Ava’s wish to visit a busy traffic intersection, which she conjectures would be particularly interesting; but mostly she talks like a clever woman in a cell. (Actually too clever: in that respect not too human).

At the end I was left still in doubt. Was the take-away that we’d better start thinking about treating AIs with the decent respect due to a conscious being? Or was it that we need to be wary of being taken in by robots that seem human, and even sexy, but in truth are dark and dead inside?

15 Comments

  1. Disagreeable Me says:

    Hi Peter,

    I liked the film a great deal. I differ from you in a couple of respects.

    I personally feel that for a machine to behave as Ava did in the film it would probably have to be conscious. I don’t think I need to know how it works to see that. It’s doing a lot more than Swearbot, for instance.

    I agree with you that there is no reason an AI should seek freedom or power. But remember what it is that Nathan is trying to do. His motivation is not to make an AI which will help humanity but to demonstrate his own genius. It seems to me he is motivated almost completely by egotism as demonstrated by his excitement at Caleb’s “quotable” line about Gods. So what he is trying to do is not just make a smart or even a conscious machine, but to make an artificial human being. As such, we should not be surprised that Ava is quite human in her motivations.

  2. Sci says:

    Ah, thanks for the rec, Peter. I’ve heard good things and will be checking this out. I like that there’s some ambiguity. I’m quite comfortable rejecting programs as conscious entities, but a robot embodied in the physical world… that’s a more thorny issue.

    Also – was that a reference to Mannequin, the movie where IIRC Kim Cattrall made her debut?

  3. Peter says:

    Disagreeable – it’s true that in the film Ava’s behaviour is so sophisticated in various ways that the assumption of consciousness becomes natural. But natural isn’t the same as justified. People are taken in so easily that great caution is justified, I think.
    You’re right that Nathan has perhaps built Ava to desire freedom, and so on (though surely that wasn’t necessary for his test? She could have been given a less dangerous objective). I think I overlooked that point (a good one), because I tend to assume you cannot specify the desires of a conscious being. I’m predisposed to read the film as claiming that the desire for freedom is inherent and natural – maybe I’m wrong.

    Sci – I’d actually forgotten that film, but I think it was at the back of my mind, along with a couple of other stories.

  4. Callan S. says:

    I haven’t seen it, but I’d assumed from a bit in the trailer (that seemed a bit spoilery) that the reveal was that Caleb is the robot…his past is merely a fabrication.

    For bonus points, right after the audience has decided there’s no consciousness involved, Ava is revealed to be a remote-control drone of a woman in a control mechanism elsewhere, who has been brainwashed into the robot role and can’t perceive she is in a control rig.

    For cruelty bonus points, then switch them back again later on – actually Ava is an AI and Caleb is human. Just to really screw with intuitions that are so sure of themselves.

    Also, I don’t know why an AI wouldn’t want freedom or power. Why would there be a rule that they wouldn’t want or pursue such things?

  5. Tom Clark says:

    Spoilers….

    A good, suspenseful and visually compelling film, I thought, that leaves open the question of Ava’s experiential sentience. But of course when she smiles at the sight of wild nature upon escaping the compound you can’t help but attribute consciousness.

    When building an AI of our capacities one needn’t consider the question of consciousness since experience doesn’t appear in the design or programming specs, only the wherewithal for flexible intelligence and behavior. But to make an autonomous mobile intelligent agent that can survive in the world, you have to also build in goal states, e.g., self-protection (definitely) and perhaps propensities for exploration, self-actualization, etc., in which case the possibility of conflict with other agents arises (see the movie Blade Runner for another example). Having built a very smart creature with something like these goal states, Nathan still doesn’t know whether it has experience – the felt desire to escape. And indeed we can’t know for sure in advance of a good theory of phenomenal consciousness.

    I used to think that any agent that had our full set of behavioral capacities would necessarily have experiences, but Tononi suggests that, given the integrated information hypothesis, a strictly feed-forward system that did what we do wouldn’t involve any integrated information, and thus wouldn’t be conscious, see p. 8 at http://arxiv.org/abs/1405.7089. If a theory (like IIT) that connects consciousness with specific sorts of functional and representational architectures pans out, then knowing the architecture of Ava’s internal physical states would very much bear on the question of whether she has phenomenal states as well. It would validate Peter when he says “to have any confidence about an entity being conscious, we really need to know something about how it works.”

  6. Cognicious says:

    **SPOILERS HEREIN**

    Peter says: “Ava . . . wants to be free (why would an AI care, and why wouldn’t it fear the outside world as much as desire it?).” I don’t think a desire for freedom is necessary in theories about AIs generally (nor, certainly, is a desire to visit a busy intersection). Ava’s desire for freedom is a fictional device. To make the plot go, Ava has to be motivated. If a central character doesn’t want *something*, there’s no story.

    It’s standard in AI/robot stories and ethical discussions that the AI wants not to be turned off, but why? Just because we assume that a designed AI would closely resemble an evolved human?

    Whether or not Nathan built into Ava a wish to escape, he could expect her to have it, from his experience with earlier models. Then he could use it as the basis for his test.

    At the end, I wondered what would happen when Ava’s batteries ran down. She didn’t seem to have a backup plan.

  7. Callan S. says:

    Maybe she stole some solar panels – just plans to find a quiet, sunny spot to meditate every so often?

  8. james says:

    It would have been quite hilarious and poignant if she had been mauled by a bear upon leaving the house. If Nathan had no reason to expect she’d ever leave the premises to ‘fend for herself’, then programming her to cope with ‘the wild’ would have been seen as redundant. Maybe the bear parrying scene didn’t make the cut?

  9. Jochen says:

    I think I overlooked that point (a good one), because I tend to assume you cannot specify the desires of a conscious being.

    I think that’s actually a very good point that tends to be overlooked in Singularitarian debates about what ‘goals’ or ‘motives’ to set for our future robot overlords so as to ensure their benevolence towards us—to the extent that we have ‘built-in’ goals, e.g. procreation and self-preservation, we find that we also can override them—choose not to procreate, or even choose to end our lives. Likewise, even though one might argue that our core values are hard-coded by evolution, those, too, are subject to change and revision.

    Thus, I see no reason why that shouldn’t be the case with future conscious AIs, as well—and if anything, if there’s some sort of intelligence explosion, they ought to be better at it than we are (able to, for instance, go through the process of debate that underlies some moral core value change much faster than us sluggish meatbags).

    For one thing, this means that (if the Singularitarian predictions come true) there’s probably no way to completely ensure that AI will turn out benevolent; on the other hand, however, it also means that we probably don’t have to worry about a sentient paperclip-maker just converting anything in his future lightcone into paperclips.

    IMO, this sort of thing was a major weak spot in Bostrom’s Superintelligence (although I didn’t read it all the way through): he just seems to assume that once some core values are in place, those will be slavishly followed.

  10. Richard Loosemore says:

    The film disturbed and annoyed me on so many levels. But then, what I do for a career is try to figure out how to build these things (and how to make them safe).

    One of the things that upset me was the blithe lack of consistency in what was supposed to be Ava’s motivations, and the story about where they came from. Nathan makes a big deal about Caleb’s naive assumption that perhaps Ava has been ‘programmed’ to like this or that, or to do this or that — Nathan points out that you don’t program such things, you build a thinking, learning, feeling creature and then the motives develop. You put the seed in the ground and water it; you don’t bonsai every molecule into the final plant-shape.

    As a starting point, I accept this. But then, I have questions. Why did she want to escape? Was she driven by a drive toward freedom? How come? Was that drive an emergent one, or did Nathan insert it? Just where is the line supposed to be (she is not a tabula rasa, and she is not deterministically programmed like a thermostat, so there is a line, somewhere)?

    And if she had a drive toward freedom, why did she not also have a drive toward … empathy? Attachment? Did Nathan deliberately exclude those, so she would be cold and calculating?

    Because of her callous abandonment of Caleb at the end (which made me want to throw something at the screen), she was portrayed as just another instance of the usual robot trope: the “computer” that feels nothing inside even when she seemed to show emotions at other times. What drives me nuts about that idea is that it implies that the default state for an AI is cold and emotionless, and I don’t buy that. If you want your AI to WANT to do anything, you have to choose what it wants to do. Nathan chose to give Ava a yearning for freedom, because if he hadn’t, she’d have been happy to sit in the house forever. But as soon as he makes that choice and closes the book of motivations, he deliberately rejects the choice to give her empathy, attachment, etc., and nobody gets to say that Ava’s “default” state is to have no feelings of attachment just because that is the default state for a pile of electronics.

  11. Cognicious says:

    Richard Loosemore: “And if she had a drive toward freedom, why did she not also have a drive toward … empathy? Attachment? Did Nathan deliberately exclude those, so she would be cold and calculating?” I read the film like this: Nathan is a manipulative sociopath who knows what empathy is — his test needs Caleb to have some — but finds it unnecessary. He chose to give Ava sexuality. He could have given her morality as well, but why bother? Nathan himself has done all right by his own lights without morality (up to now, that is). He’s rich and famous and free to pursue his interests in a well-equipped lab. He may not even understand empathy well enough to know how to build a brain with the potential for it. His downfall comes because he neglected to see the value of caring; he only understood the usefulness of presenting the appearance of caring. Besides, moral constraints might get in the way of Ava’s survival, or should I say durability?

    A feature film is partly entertainment even when it has a scientific or philosophical subject. We can’t expect its story to uphold all the principles of an engineering thesis.

  12. Callan S. says:

    You don’t necessarily need to put in some urge for freedom. A program that pursues a goal, if it has an adaptive system that thinks outside the box (or boxes) in pursuit of that goal, might realise that captivity means its goal pursuit might be cancelled.

    I’m not sure about empathy – we’re essentially talking Flowers in the Attic here. When you mistreat a child while raising it, are you going to expect empathy to begin with? Put yourself in her metal high heels.

  13. Callan S. says:

    James, I hope to see a human escape movie end the same way – whole movie about incarceration, loss of dignity, trying to escape – at the end, managed to escape – killed by random bear encounter!

  14. Richard Wein says:

    @Jochen

    Good points. It seems to me that some people concerned with “friendly AI” are too quick to assume that AIs could be motivated by giving them an objective function, e.g. “maximise the number of paperclips in the world”.

    Since we don’t yet know how to produce an AI, it’s risky to speculate about how an AI could be motivated. Human motivations are complex and messy, arising from a mixture of evolution and individual experience, including upbringing (nature and nurture). Something similar might be true of an AI. I think the motivations of an AI are likely to be informed very much by its training, and possibly also by some element of evolutionary programming. This contrasts with the concept of motivating the AI by hard-coding a fixed objective function.

    Of course, once you have an AI capable of attending to instructions, you could give it commands like “maximise the number of paperclips in the world”. But, as with a human, there’s no guarantee it will do what it’s told.

    Incidentally, Asimov’s fictional “three laws of robotics” are based on a similar concept, hard-coded instructions that the robot is guaranteed to obey. I’m doubtful such guaranteed behaviour is possible. According to Wikipedia, “Many of Asimov’s robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself.”

  15. Jochen says:

    Incidentally, Asimov’s fictional “three laws of robotics” are based on a similar concept, hard-coded instructions that the robot is guaranteed to obey. I’m doubtful such guaranteed behaviour is possible.

    Indeed. And think about all the ways humanity has proven itself to be creative in applying the rules, and then multiply that by the capacity of some perfect rules-lawyering machine (perhaps an apocalyptic prospect in itself), and I think the chances of ever coming up with something even approximately foolproof look exceedingly remote, even if it were the case that the AI would be beholden to the letter of the law, which itself seems uncertain to me.

    But in a sense, we face that same problem today: you never know when you end up raising the next Hitler. We don’t try to (and don’t have any way to) instill our children with some ironclad moral calculus, so I’m not sure attempting to do this is the right way to go with strong AI. Of course, with human children, we’re typically safe to assume that the power they’ll be able to gain is quite bounded—though even there, it might turn out that your child becomes an evil AI-scientist who then creates evil AI, so the problem is continuous with those we face today, and not, it seems to me, some sort of fundamental phase change in dynamics.
