If you’re feeling anxious about AI and what it means for the future of humanity, you should watch The AI Doc: Or, How I Became an Apocaloptimist. As I noted in my review, the film aims to deliver some clarity amid all the hype. Now that it’s in theaters, we sat down with director Daniel Roher, who won an Oscar for his film Navalny, to dive deeper into his complicated feelings around AI.

The entire topic made him nervous, Roher said, so he decided to team up with similarly anxious colleagues to demystify AI through film. He describes the goal of the project as a sort of “first date” with AI: a way to hear about its potential benefits from AI boosters while also taking in the many negatives raised by critics. It’s probably too late to stop AI entirely, but he thinks we can at least try to find ways to limit the worst impulses of the tech industry.

“I wanted to make this movie because I was scared shitless, that’s the crux of it,” he said in an interview on the Engadget Podcast. “I didn’t understand what AI was. I didn’t understand why everyone was talking about it and why it seemed to be this thing that came outta the woodwork and all of a sudden, people were talking about it like it was the apocalypse or like it was gonna be the most optimistic, greatest thing ever.”

Ultimately, Roher arrived at the term “apocaloptimist,” which balances the contradictory ideas that AI can seriously harm society and that we can still shape the future by criticizing or outright rejecting it. “It’s a worldview. It’s choosing not to buy into a binary that’s asking us to see this as either apocalypse and the end of the world, or through the rose-colored glasses of unvarnished optimism, which is also sort of a fallacy,” he said.

On the one hand, he’s well aware the major players pushing AI are, at best, flawed. When I mentioned Marc Andreessen’s recent comments about proudly having no inner thoughts, Roher added, “They’re just fucking weird. They’re just nerds who became billionaires because they were born at the right time and they had the right interests. They’re brilliant in their own way and they have abilities, but they don’t understand what it is to exist. They don’t know what real human beings navigate and go through. They have a very narrow worldview that’s callous and cold and calculated.”

For many, the overnight ubiquity of this largely untested technology and the collective wealth and power of those supporting it means rampant negative externalities are all but guaranteed. But Roher’s apocaloptimism (we’ll see if the term quite catches on) chafes against cynicism and doomsaying. He points to OpenAI’s Sora video generation app, which was heavily criticized as a tool that could lead to more realistic deepfakes, but was unceremoniously killed last week.

“I think people were [made] uncomfortable by it, and good,” Roher said. “And, shame on OpenAI for releasing this thing without any thoughtfulness. I guess the low bar of, like, at least they had the decency to pull back and retract it, but only after public condemnation.” He added, “To the cynical people saying we’re all fucked, I’m like, no fuck you, we’re not. Collective action matters.”

And notably, the entire goal of the film is to think more deeply about the uses of the technology than the people actually creating it do. “These guys, when you actually sit down with them, they don’t have clarity, they can’t make you feel better. They don’t know themselves. They’re just motivated by the unbridled optimism of the greatest profit-making technology in the history of humanity.”
