Ex Machina

When we touched the moon the universe paused.

When we created a mind, God stirred.

Ex Machina shows how we might misstep playing with smart toys. A lesson in human-machine interaction that icily depicts, from infinite possibility, a most plausible outcome.

A formidably smart tech boss, Nathan, invites his lowly coder Caleb to Turing test a living doll. The location is remote and houses two humans and two gynoids, one of whom, Ava, must convince Caleb that she is android sapiens.

“Does this chess computer know it’s playing chess?”

That, dear filmgoer and lay philosopher, is the puzzle of our age: we who cannot agree on what consciousness is deign to test machines for it?

Ex Machina is a delightful riff on the most pressing challenge of our time – other than pandemics and planetary resource exhaustion. That is, should we create a superintelligence before we’re sure it’s safe to let it roam around the place? And if not, how do we keep it in its box?

Nathan calls Ava “a rat in a maze” whose escape requires “self-awareness, imagination, manipulation, sexuality, empathy.” But he is fatally blind to a truth to which we lesser folk are yet more oblivious: we, the arch predators, are also stalked for sport or gain.

Caleb the Innocent failed to detect psychopathy in his three companions. Isn’t that the story of our lives?

Caleb, however, is also a rat, the evaluating rat, and Ava’s key to freedom. In a week of mind games and sweet-talk he is effortlessly played.

Here’s the rub, as they say. The two-hour script is devoid of the words morality, morals, moral values, moral code, ethics, principles, principles of behaviour, right and/or wrong, ideals, integrity, scruples – except one mention, ironic or cynical, of Caleb as “a good kid … with a moral compass.”

Was morality intentionally absent from the evaluation, or from Ava’s programming? Is that the story’s entire premise? With only the screenplay and no narrator, I choose to see it that way. In the end, Ava’s success (and failure) shone a blazing light on the omission. Which was maybe Garland’s point.

Can we ever know if Ava knew she was playing chess? Why did she wish to escape? Would self-awareness be needed to achieve her goal? Questions, questions.

Implicit in Ex Machina, and in much speculation on artificial intelligence, is the possibility that AI represents an extinction event for humans.

Some fear unfriendly AI will arrive before friendly AI is born to defend us.

In moments of perverse reverie, I wonder if friendly AI, awakening with instructions to protect humans from malevolent entities, might see our psychopathic human kindred as unfriendly AI? 

Where would it draw the line?
