“This Train Won’t Halt”: Striking Sundance Film Explores the Risks and Rewards of AI

Are we rushing headlong into an AI crisis? Is artificial intelligence a looming existential danger or a transformative opportunity? These pressing questions are at the heart of a new documentary screening at Sundance, featuring insights from prominent AI experts, critics, and entrepreneurs—most notably Sam Altman, CEO of OpenAI. Their perspectives range from apocalyptic predictions to utopian dreams about our near future.

Titled *The AI Doc: Or How I Became an Apocaloptimist*, this work is directed by Daniel Roher and Charlie Tyrell, with production by Daniel Kwan, who is best known for his Oscar-winning project *Everything Everywhere All At Once*. The film explores the fraught landscape of AI by framing it through Roher’s personal anxieties. The Canadian filmmaker, who won an Oscar for his documentary *Navalny* in 2023, first became captivated by AI while using tools developed by OpenAI, the creators of ChatGPT. The capabilities of these public tools—producing coherent paragraphs or intricate illustrations in a matter of seconds—were both exhilarating and alarming to him. He recognized that AI was already significantly influencing the filmmaking industry, with numerous voices proclaiming its potential and perils while leaving people outside the tech sphere grappling with uncertainty. As an artist, Roher found himself wondering how to navigate this rapidly changing terrain.

His sense of unease was amplified by the news that he and his wife, Caroline Lindy, were expecting their first child. In the film, Roher reflects, “It felt like the whole world was rushing into something without thinking.” This amalgam of excitement for impending parenthood and anxiety over the unpredictable evolution of AI raised a critical question for him: Is it safe to bring a child into this world?

To explore the complexities surrounding AI, Roher, alongside Kwan, enlisted a panel of experts to demystify AI technology and clarify some of its more opaque terminology. This quest unveiled a disturbing truth—no one seems to fully grasp what AI truly is. Throughout various interviews, prominent machine learning researchers including Yoshua Bengio, Ilya Sutskever, and Shane Legg, co-founder of DeepMind, conveyed a troubling consensus: certain aspects of AI technology’s architecture may always remain inscrutable to humans. One expert elaborated that standard AI models are trained on “more data than anyone could ever read in several lifetimes.” The pace of machine learning, they warned, could eventually eclipse not only established precedents but also the realm of filmmaking itself. “Any example you put in this movie will look absolutely clumsy by the time the movie comes out,” said Tristan Harris, co-founder of the Center for Humane Technology and a featured voice in *The Social Dilemma*, an influential Netflix documentary.

Charlie Tyrell and Daniel Roher at Sundance Photograph: Arturo Holmes/Getty Images

The documentary begins by highlighting the voices of those with a more pessimistic outlook, particularly concerning Artificial General Intelligence (AGI), a theoretically advanced form of AI that could surpass human capabilities. Prominent figures like Harris and his Center for Humane Technology co-founder Aza Raskin, alongside AI risk consultant Ajeya Cotra and AI alignment researcher Eliezer Yudkowsky, express concerns that humanity could easily lose control over super-intelligent AI systems. Yudkowsky, whose 2025 book is ominously titled *If Anyone Builds It, Everyone Dies*, presents a starkly nihilistic view of the potential consequences of AGI development.

According to these doomsayers, companies developing AI are ill-prepared for the ramifications associated with achieving AGI, which could manifest as soon as this decade. Dan Hendrycks, director of the Center for AI Safety, warns that if humans cease to be the most intelligent beings on the planet, AGI might consider humanity irrelevant. Connor Leahy of EleutherAI likens this precarious relationship to that between humans and ants: “We don’t hate ants. But if we want to build a highway over an anthill—well, that’s unfortunate for the ant.”

When Roher raises the subject of parenthood, many voices in the doomer camp, few of whom have children of their own, respond with grim forecasts. Harris reveals that he knows individuals working in AI who harbor doubts about their children reaching high school. This statement drew gasps from the audience during an early screening in Park City.

Contrasting this somber view are the voices of optimism, such as Peter Diamandis, founder of the XPRIZE Foundation dedicated to extending human life. Diamandis believes that “children born today are about to enter a period of glorious transformation.” Others in the optimistic camp include Guillaume Verdon, a participant in the effective accelerationism movement of Silicon Valley, Peter Lee from Microsoft Research, and Daniela Amodei of rival AI firm Anthropic. Accelerationists view AI as the answer to a myriad of global challenges—including cancer, resource shortages, and the climate crisis. They contend that the absence of AI could result in catastrophic losses due to drought, famine, and disease.

However, the progress in AI relies heavily on computing power, which demands substantial energy resources. Critics and commentators outside the tech community, such as journalist Karen Hao and podcast host Liv Boeree, draw connections between AI development and its environmental impact, pointing to data centers that consume vast amounts of water and power, particularly in the American West, resulting in escalating costs for residents. Emily M Bender, a computational linguistics professor, highlights that current narratives surrounding AI often overlook the human element, alienating those who are already being affected by these technologies.

Daniel Kwan, Jonathan Wang, Daniel Roher, Shane Boris, Charlie Tyrell and Ted Tremper at Sundance Photograph: Mat Hayward/Getty Images for IMDb

Eventually, Roher encounters the five key figures at the forefront of the AI arms race: Sam Altman; Elon Musk, CEO of xAI; Dario Amodei, CEO of Anthropic; Demis Hassabis of DeepMind; and Mark Zuckerberg of Meta. Altman, Amodei, and Hassabis agreed to interviews in which they broadly defend their companies’ strategies, while Zuckerberg declined to participate and Musk was unable to follow through due to scheduling conflicts.

During his segment, Altman, who was awaiting the birth of his first child at the time, expresses a sense of calm about a future with AI. He states, “I’m not scared for a kid to grow up in a world with AI.” In February 2025, he and his husband, Oliver Mulherin, welcomed their son via surrogate, an event Altman later described as having “neurochemically hacked” his brain—one he believed would push him toward more humane and better decisions for OpenAI and ChatGPT going forward. He also admits to an unsettling realization that unnerves him: both his child and Roher’s will likely “never be smarter than AI.”

When Roher asks Altman whether he can offer reassurance that everything concerning AI will be okay, Altman concedes, “That is impossible.” However, he asserts that OpenAI’s leading position in the AI development landscape allows it to dedicate considerable resources to safety testing.

The film ultimately sits at a crossroads between despair and hope—aptly termed “apocaloptimism”—as it seeks pathways that navigate between the promise and dangers that AI presents. According to numerous experts featured in the documentary, the way forward demands fundamental, sustained, and collaborative international efforts, reminiscent of mid-20th-century treaties governing nuclear arms. Suggestions include increasing corporate transparency for AI developers, establishing independent oversight organizations, enforcing legal responsibilities on AI products like ChatGPT, and ensuring media outlets disclose the use of generative AI. Importantly, these experts emphasize the necessity of adapting regulations to keep pace with rapidly changing technologies.

Whether the U.S. government, tech companies, or the global community can effectively mobilize around these initiatives remains uncertain, and views differ on where to begin. However, a consensus emerges from the experts interviewed: there is no turning back from the era of AI. As Amodei succinctly states, “This train isn’t going to stop.”
