EX MACHINA
A TANTALIZING SCI-FI DRAMA THAT IS PART HORROR, PART ETHICS/MORALITY TALE, AND PART ROMANCE

Presented at the Film Program Sponsored by
Houston Psychoanalytic Society and Jung Center

by
Donna R. Copeland, PhD 
August 11, 2016 

Alex Garland, the writer/director of this film, weaves a tantalizing sci-fi drama that is part horror, part ethics/morality tale, and part romance.  In keeping with the title, the story is constructed so that by the end we know a tragedy has occurred, yet the ultimate outcome, how Ava fares in the outside world, is left hanging.  “Deus ex machina” is a literary term meaning “god from the machine”; it refers to a plot device whereby a problem near the end of a story is unexpectedly solved by the insertion of something or someone new.  In this case, it is Ava’s escape.  Although the “Deus” is left out of the film’s title, references are made to the god-like role Nathan has played (and that is how he regards himself, I think) in creating Ava and the other robots, particularly since his inventions are made more human by being apparently sentient and emotionally aware.

Garland is a master at juxtaposition:  We are shown an ultra-modern home/laboratory set amid lush wilderness, a research site that appears to run smoothly and effortlessly, equipped with the latest technology and overseen by a researcher fully in control.  Yet it could be called a “house of paranoia.”  Nathan has sealed it off from the outside world to protect the property and his inventions.  He doesn’t even feel comfortable calling in the original installers of the fiber-optic cable to fix what he thought was a glitch in the electrical system.  Another juxtaposition is Caleb’s being unceremoniously dropped by helicopter into a meadow with only vague instructions on how to reach the house and get inside, and no one there to meet and greet him.  Yet inside, the opening scenes are serene, accompanied by the soft music of Schumann’s “Scenes from Childhood.”  But it is only a matter of time before things start to feel strange, and Caleb feels put on the spot by Nathan.

Frequently, scenes that are bright and revealing suddenly turn dark, and secrets come to light.  There are many such juxtapositions between the highly controlled and the uncontrolled, or the uncontrollable.

Nathan

Now let’s look at Nathan.  When Caleb arrives and finally gets inside the house, he finds fitness-freak Nathan at a punching bag.  Nathan guides Caleb through the property with enigmatic descriptions and instructions, justifying the locked doors that deny Caleb entrance “so he won’t have to make decisions himself about which rooms to go into.”  Ah, Nathan, what a character!  He is a complex figure who enjoys being mysterious and keeping things from others.  He completely manipulated Caleb to get him to the house.  He repeatedly asserts his authority, yet calls Caleb “bro” and “dude” and insists there is no social hierarchy between them.  Also troubling is his use of double binds:  “No, you don’t have to sign that [complicated] nondisclosure agreement, but you will give up your opportunity to work with me; you can just drink and shoot pool instead.”

Nathan is not just scientifically brilliant; he can “read” people as well, which allows him to anticipate their impulses and thoughts and plan accordingly, whether to help or hinder them.  For example, he realizes the cause of the power outages and installs video cameras in Ava’s room.  Yet despite his astuteness in observing others, Nathan seems to have been caught completely off guard by his invention in the end, giving credence to some people’s thoughts and fears about AIs.

Nathan was very forward-thinking and clever in designing Ava so that her appearance easily gives away the fact that she is an AI.  His reasoning is that he wants to know whether, even though Caleb can clearly see that she is a robot, he can still regard her as human.  That is Caleb’s main task for the week he is there:  to assess Ava’s sentience or consciousness.  His test goes beyond the conventional Turing Test, named for Alan Turing, a British mathematician preoccupied with the question of whether machines could “think” as humans do, using intelligence, decision-making, and personality.  A machine that passes the test he devised would be convincing as a human interlocutor.  Nathan’s test, the one Caleb will administer, is somewhat different:  the Turing Test is conducted blind, through text alone, so that the judge cannot see the machine, whereas Nathan wants Caleb to see Ava’s robotic appearance and still be convinced she is “human.”
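
For those curious about the mechanics, here is a minimal sketch of the classic blind test in Python.  The judge object and its ask/identify_machine methods are my own illustrative inventions, not code from any real system:

    import random

    def imitation_game(judge, human, machine, rounds=5):
        """Classic blind Turing test: the judge exchanges text with two
        hidden parties and must guess which one is the machine."""
        # Hide the parties behind neutral labels, assigned in random order.
        parties = {"A": human, "B": machine}
        if random.random() < 0.5:
            parties = {"A": machine, "B": human}

        transcript = []
        for _ in range(rounds):
            question = judge.ask()                      # hypothetical judge interface
            transcript.append({label: party(question)  # each party is a callable
                               for label, party in parties.items()})

        guess = judge.identify_machine(transcript)      # judge names "A" or "B"
        truth = "A" if parties["A"] is machine else "B"
        return guess != truth                           # True: the machine "passes"

Nathan’s variant deliberately removes the blindness this sketch depends on:  Caleb can see exactly which party is the machine, and the question becomes whether he relates to her as human anyway.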

Can Caleb determine whether Ava has feelings for him?  She certainly flirts with him and seems interested in his personal life and background, but, robot-like, she feeds back to him a version of what he has said to her about her art, telling him to choose what he wants to reveal about himself.  “I’ll be interested in seeing what you choose,” she mimics.  Is she using higher-order thinking here, applying what he has said to what she wants to say, in a joke-like fashion?  Or is it simply a form of echolalia?  Nathan indicates that it is higher-order thinking.  Eerily, he also hypothesizes that Ava may not have feelings for Caleb at all, but is only using him in order to escape.

When Caleb tells Ava about his parents’ deaths, her facial expression seems genuinely moved, and she says “I’m sorry” with feeling.  Nathan hacked into all the world’s cell phones to program facial expressions into Ava’s hardware.  He made them “fluid, imperfect, patterned, chaotic, just like human emotions.”  But how did he get her to access these expressions and use them appropriately?  That is another of Nathan’s secrets.

Nathan didn’t tell Ava that Caleb was there to test her.  When Caleb does tell her this, he asks how it makes her feel.  “Sad,” she says.  Is this a genuine expression of sadness or simply a conditioned response based on programming?  Ava asks Caleb more questions, sounding rather like a psychologist.  What is his favorite color?  His earliest memory?  “Are you a good person?” she asks.  Ava seems to know when Caleb is lying, and exactly how intimate interactions go.  “Do you want to be with me?” she says.  “If I fail the test, will I be switched off?”  Even if Ava uses Caleb as a means of escape rather than actually caring about him, might that be just as much an indication of her humanness?  And finally, is Ava really human-like if she has no conscience?  Would Nathan have had to program that in, and if he didn’t, why not?  (Perhaps because he doesn’t have one…)

Ex Machina is a good example of the process of assessing the sentience of robots, of the moral/ethical concerns about them, and of the god-like endeavor of creating humanoids.  Mike Reyes of CinemaBlend called the movie a morality play on the ethics of A.I. (http://www.cinemablend.com/new/Ex-Machina-Ending-Debate-Movie-3-Minutes-Too-Long-71101.html).

Artificial Intelligence

Now I would like to segue into a short discussion of artificial intelligence.  Ultimately, this film brings us to the question of artificial intelligence and all its complexities; specifically, what makes humans human.  We certainly get the impression that Nathan has gone farther than anyone in the real world in creating AIs that could pass for humans.  Although the real world may have a long way to go, it is happening, and many think we should proceed cautiously.  For this section of my talk, I relied on the writings of Sherry Turkle, a psychologist and MIT professor who is currently the director of MIT’s Initiative on Technology and Self, and who has advocated a thoughtful approach to the AI endeavor, arguing that technology, and media technology in particular, is already changing us and our culture.

Turkle makes a number of important observations (Turkle, Sherry (2002).  Whither Psychoanalysis in a Computer Culture?  http://www.kurzweilai.net/whither-psychoanalysis-in-a-computer-culture).  One is that we have an evolving relationship with computers, one that affects our identity and sense of self, changes our ways of connecting with other people in ever-expanding networks, and makes users companions of their computers.  She says that ultimately there is the potential for computers to be “programmed to assess their users’ emotional states and respond with emotional states of their own.”  This is already being done with some toy animals and dolls.  In the past, transitional objects were passive targets of projection; now these relational objects actively respond, as when a digital doll cries and asks for a hug.
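
To make the idea concrete, here is a deliberately crude sketch in Python, purely my own illustration and not any real toy’s code, of a relational artifact that maps a rough reading of its user’s emotional state to an emotional display of its own:

    # A toy "relational artifact" (illustrative only): it maps a crude
    # reading of the user's emotional state to a display of its own.
    DISPLAYS = {
        "sad": "droops, whimpers softly, and asks for a hug",
        "happy": "perks up and chirps",
        "angry": "backs away and goes quiet",
    }

    def respond(user_state: str) -> str:
        # Fall back to a neutral display for states the artifact doesn't recognize.
        return DISPLAYS.get(user_state, "blinks attentively")

    print(respond("sad"))  # -> droops, whimpers softly, and asks for a hug

Even this trivial mapping illustrates Turkle’s point:  the object is no longer a passive target of projection; it answers back.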

Turkle notes that, taken together, these characteristics imply a rethinking of object relations theory, in which objects like computers would be regarded much as people have been in the traditional theory.  She quotes a 13-year-old girl as saying, “When you program a computer there is a little piece of your mind, and now it’s a little piece of the computer’s mind.  And now you can see it.”  In another anecdote, Turkle tells of a scientist who designed a robot named “Kismet” and worked with it for some time.  When she left the institution where she worked, she had to leave the robot behind, since it was the institution’s property.  She apparently experienced a keen sense of loss and grief afterwards (somewhat like grieving for a child, I imagine), and she had the thought that building a duplicate just wouldn’t be the same.

Turkle has also pointed out that the Internet can be a powerful evocative object in a person’s identity.  Online, it is possible to take on multiple identities, varying style, gender, and personality at will, in chat rooms, games, and countless other settings.  Under these conditions, identity is no longer considered unitary; it is “decentered” and fluid.  People go online and play multiple roles in multiple settings, experiencing “parallel lives.”  It is even possible to work through conflicts and try out novel solutions on the computer.

Turkle, along with the inventor and AI scientist Ray Kurzweil (Kurzweil, Ray (2005).  The Singularity Is Near.  New York: Viking.), claims that “there is every indication that the future of computational technology will include relational artifacts that have feelings, life cycles, and moods; ones that reminisce, and have a sense of humor—that say they love us, and expect us to love them back.”  The question of what computers can do and what they will be like includes the question “What will we be like?”  Turkle asks, “What kinds of people are we becoming as we develop more and more intimate relationships with machines?”

Finally, the movie plays on our fears about the future; for instance, whether AIs could make humans dispensable.  Stephen Hawking has actually made that prediction, saying that the development of full artificial intelligence could ultimately lead to the end of the human race (The Guardian, 12/2/14, https://www.theguardian.com/science/2014/dec/02/stephen-hawking-intel-communication-system-astrophysicist-software-predictive-text-type).  AIs are certainly in our future.  A recent article in the Houston Chronicle was headlined “Artificial Intelligence Is Swarming Silicon Valley on Wings and Wheels” (John Markoff, 7/18/16).  New investment is expected to reach $1.2 billion this year, up 76% from last year.  The author thinks that machines with human-level intelligence are on the horizon.  Some of the companies already investing are IBM, Google, Amazon, and Microsoft; no surprise there.

So, in sum, I think the movie Ex Machina is effective in eliciting fantasies about what it would be like to have robots in our lives.  Now, I’d like to hear your thoughts about these questions and about the movie.