Apocaloptimist? There is such a thing, and it has everything to do with AI.

A new documentary premiering at the Sundance Film Festival has placed the debate over AI squarely in the cultural spotlight, capturing both the optimism and deep unease surrounding a technology that is rapidly reshaping economies, industries and daily life.

“The AI Doc: Or How I Became an Apocaloptimist,” co-directed by Oscar-winning film-maker Daniel Roher and Charlie Tyrell, examines the promises and risks of AI through a personal lens, while bringing together some of the most influential voices shaping the global AI conversation. The film arrives at a moment when AI systems are being adopted faster than regulatory frameworks can keep pace, raising urgent questions about safety, governance and social impact.

Origins of a personal and global anxiety

Roher, who won an Academy Award in 2023 for “Navalny,” traces his interest in AI to his early experiments with publicly released tools from OpenAI, including ChatGPT. The speed and sophistication with which these systems could generate text and images impressed him, but also heightened his concerns about how little the public understands technologies that are already influencing creative work, employment, and information flows.

Those concerns intensified when Roher and his wife, film-maker Caroline Lindy, learned they were expecting their first child. “It felt like the whole world was rushing into something without thinking,” Roher says in the film, connecting personal fears about parenthood to broader uncertainty about AI’s trajectory. His central question becomes whether it is safe to bring a child into a world increasingly shaped by machines whose long-term behavior remains unclear.

Explaining the technology behind the fear

To address that question, Roher convenes leading AI researchers and industry figures to demystify the technology itself. Interviewees including Yoshua Bengio, Ilya Sutskever, and DeepMind co-founder Shane Legg explain that modern AI models are trained on vast datasets far beyond human comprehension. As one expert notes, these systems absorb “more data than anyone could ever read in several lifetimes.”

Several researchers acknowledge that parts of advanced AI systems operate in ways even their creators cannot fully explain. This lack of interpretability, combined with rapid development cycles, complicates oversight. Tristan Harris, co-founder of the Center for Humane Technology, warns that the pace of change is so fast that “any example you put in this movie will look absolutely clumsy by the time the movie comes out,” underscoring the challenge facing regulators and educators alike.

Warnings of existential risk

The film gives significant airtime to so-called AI “doomers”, who argue that artificial general intelligence, a hypothetical system capable of outperforming humans across most cognitive tasks, could pose an existential threat. Voices including Harris, Aza Raskin, Ajeya Cotra, and AI alignment researcher Eliezer Yudkowsky argue that once AI systems surpass human intelligence, maintaining control may be impossible.

Dan Hendrycks, director of the Center for AI Safety, suggests that AGI could “become superhuman maybe in this decade,” a prospect that raises alarms about unintended consequences. Connor Leahy of EleutherAI likens the potential relationship between humans and superintelligent systems to that of humans and ants: “We don’t hate ants. But if we want to build a highway” over an anthill – “well, sucks for the ant.”

Some interviewees link these risks to deeply personal decisions. “I know people who work on AI risk who don’t expect their child to make it to high school,” Harris says, a statement that drew audible reactions at a Sundance preview screening.

Optimism and the case for acceleration

Balancing these warnings are technologists and investors who see AI as essential to solving major global challenges. Figures such as XPRIZE founder Peter Diamandis, Microsoft Research president Peter Lee, and Anthropic president Daniela Amodei argue that AI could accelerate breakthroughs in medicine, climate mitigation, and resource management.

Diamandis claims that “children born today are about to enter a period of glorious transformation,” reflecting a belief that slowing AI development would itself be unethical given the scale of problems it might help address. Supporters of this view argue that without AI, millions of future lives could be lost to disease, hunger, and environmental collapse.

Hidden costs and real-world impacts

A third perspective in the film focuses on the material consequences of AI development. Journalists and researchers including Karen Hao and computational linguist Emily M. Bender point to the environmental footprint of data centers, which consume vast amounts of electricity and water, particularly in drought-prone regions of the US.

Critics argue that dominant narratives about AI often obscure the human labor, environmental strain, and social disruption already associated with the technology. According to Bender, current framing can “exclude and dehumanize the people it is already impacting”, a dynamic likely to intensify as AI systems scale.

Power, responsibility, and regulation

The documentary culminates in interviews with leaders at the center of the AI arms race, including OpenAI CEO Sam Altman, Amodei, and DeepMind CEO Demis Hassabis. Altman, who became a parent in 2025, says he is “not scared for a kid to grow up in a world with AI,” though he acknowledges that children today will “never be smarter than AI,” a reality that “does unsettle me a little bit.”

Asked whether he can reassure Roher that everything will be fine, Altman replies: “That is impossible,” while emphasizing his company’s focus on safety testing.

The film ultimately argues for stronger international coordination, greater corporate transparency, legal accountability, and adaptive regulation, drawing parallels with mid-20th-century efforts to manage nuclear technology. Whether governments and companies can act quickly enough remains unresolved, but as Amodei states in the film: “This train isn’t going to stop.”

“The AI Doc: Or How I Became an Apocaloptimist” is screening at the Sundance Film Festival and is scheduled for release on 27 March.

