
Artificial intelligence as movie plot – next up, ‘Chappie’

Chappie (Sharlto Copley) communes with a furry friend in the futuristic movie “Chappie.” Columbia Pictures

“The development of full artificial intelligence could spell the end of the human race,” some egghead told the BBC in December.

That Stephen Hawking, what does he know? He’s the guy who said that if extraterrestrials come to Earth, they might be here to, you know, reap us or something. Wait … “Jupiter Ascending” was right?

But this is all sci-fi stuff, surely. It would take a supercomputer to calculate the number of movies with AI as a prominent element, including the new (and mysterious) “Chappie” and the upcoming “Ex Machina,” the “Terminator” reboot and “Avengers: Age of Ultron.” And fewer of us are afraid of Ultron than of James Spader.

So why is Hawking so worked up about artificial intelligence? And we’re not talking about “A.I.,” the Steven Spielberg movie – no one got worked up about that, let’s be honest. Nor are we talking about Rick Perry’s glasses.

Turns out Mr. Big Brain isn’t alone in his worries. A group of smarties called the Future of Life Institute recently circulated an open letter calling for a kind of regulation of the AI industry, in the form of research priorities (“worthwhile research aimed at ensuring that AI remains robust and beneficial, and aligned with human interests”) meant to keep AI beneficial to humanity and, presumably, to keep Skynet from ever becoming self-aware.

This is no joke; a whole bunch of big-league brainiacs have signed the letter, including AI experts around the globe, as well as Hawking and Tesla Motors CEO Elon Musk, perhaps the only man alive who could snatch the “Real-Life Bond Villain” cup from Vladimir Putin or Julian Assange.

Musk told potential evil-enterprise recruits from MIT in October, “If I had to guess at what our biggest existential threat is, it’s probably” AI.

He is a major investor in AI, by the way. And the long-haired Persian cat he doubtless strokes on his lap probably has an artificial brain.

That’s right, they’re calling for regulation of their own industry. Compare and contrast with the financial industry, which gleefully turns invisible flash-trading robots loose to artificially inflate the prices of billions of trades. “I’ve got algorithm, who could ask for anything more?”

MIT professor Max Tegmark, author of “Our Mathematical Universe” and a co-founder of the Future of Life Institute, tells the Chronicle, “There’s a race between the growing power of AI and the growing wisdom about how to keep it beneficial; the purpose of the letter is to spur research boosting the wisdom and helping it win the race.”

But let’s pause this “Terminator” prequel for a moment. There are only so many pod bay doors that HAL 9000 could refuse to open. Isn’t the worst that could happen that Scarlett Johansson would break our hearts, as she did in “Her”? Many of us would happily accept that cost-benefit analysis.

“What ‘Her’ got totally wrong was the impact on society: The jobs of the main character and most others would have been done more cheaply by AIs,” says Tegmark. “What ‘Her’ got right was shifting attention away from robots and toward the elephant in the room: intelligence itself.”

The new film from “District 9” and “Elysium” maker Neill Blomkamp, “Chappie,” dips its robotic toe in the former waters. That is, it apparently does – Sony only screened 20 minutes of it in advance, which couldn’t possibly be a bad sign at the tail end of dump season. It seems to be about a reprogrammed robot cop in Johannesburg that becomes self-aware and is “raised” by criminals instead of being trained by police. So does Chappie go slap-happy, or is its brain on frappe? Only the filmmakers and the folks at the studio who declined to screen it know for sure.

But if Tegmark is right, and the focus should be more on AI itself and not the drone bots that make for cool battle scenes, then what chance do hairless cavemen like us have when the machines leapfrog us on the evolutionary ladder of intellect?

“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,” Hawking said in December, before “The Theory of Everything” got nominated for all those Oscars.

And before any of that happens, the dangers are already present. Last year’s “Captain America: The Winter Soldier” featured the digitized consciousness of Nazi super-scientist Arnim Zola, who designed a targeting algorithm to identify potential threats (i.e., good guys) and automatically eliminate them (i.e., state-sanctioned murder). Of course, this whole notion of “lethal autonomous robotics” is totally crazy fantasy stuff … except that a United Nations expert called for a moratorium on developing such weapons systems in 2013.

There’s apparently a lot of concern about lethal autonomous robotics, which is not the name of an electronica band. Yet. The director of Human Rights Watch’s arms division issued a statement in May 2013 saying, “It is possible to halt the slide toward full autonomy in weaponry before moral and legal boundaries are crossed, but only if we start to draw the line now.”

Can’t we just include a fail-safe in the program – “Ape shall not kill ape,” or in this case, “Super-powerful computing machine shall not eradicate humanity,” a la “Robocop”? Or would the AI just find ways to parse that command, like Frank Luntz in a can? “Demon Seed,” indeed.

Despite the happy-good-time automatons of “WALL-E,” “I, Robot” and Data from “Star Trek: The Next Generation,” it does seem inevitable that machines smarter than humans would decide we’re not worth all the trouble associated with our existence.

It’s the ultimate convergence point of scientific and religious mythologies, where man’s hubris – his own creation – steps on him like a bug. Man plays God, makes a deus ex machina that judges him and finds him wanting.

Sounds like a summer blockbuster to me.
