The rise of AI in Hollywood decision-making has ushered in a new age of creative exploration, along with exciting new technologies such as chatbots and AI-generated video. But as with any new technology, there is a darker side to consider. In Hollywood, the black-box algorithms that dictate what’s popular are starting to influence which stories are told and how. That is particularly concerning in the documentary and nonfiction space, where the stakes are highest.
Today, every creator on earth feels the guiding hand of AI. TikTokers, for example, are rewarded with massive views for tailoring content to an algorithm meticulously designed to trigger dopamine release. In Hollywood, producers are rewarded with lucrative film deals for developing projects that feed the black-box AI at studios and streaming platforms, which keep their valuable viewership insights to themselves. That data is built via feedback loops: recommendation engines are reinforced by the very viewer behaviors they shaped in the first place. Value creation is increasingly usurped by machines, and between TikTok and the streamers, the precious space that allows for human-first innovation is closing. TikTokification is metastasizing.
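The feedback loop described above is easy to see in miniature. The toy simulation below is a sketch under simplified assumptions (three genres, invented starting numbers, a fixed 90% rate at which viewers follow recommendations — none of this reflects any real platform): an engine that always surfaces the current engagement leader generates data that makes that leader look ever more dominant.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical starting watch-time tallies; a small lead for true crime.
watch_time = {"true crime": 100, "nature": 95, "arts": 90}

def recommend(stats):
    # Engagement-maximizing choice: always surface the current leader.
    return max(stats, key=stats.get)

for _ in range(1000):
    pick = recommend(watch_time)
    # Assume 90% of viewers watch what is recommended;
    # 10% browse a genre on their own.
    if random.random() < 0.9:
        watched = pick
    else:
        watched = random.choice(list(watch_time))
    watch_time[watched] += 1

# The "data" now overwhelmingly favors the genre the engine kept pushing.
print(watch_time)
```

The point is not the specific numbers but the shape of the result: the recommendation is trained on data that the recommendation itself produced, so a modest initial edge compounds into apparent proof that audiences only want one kind of story.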
However, the greatest threat posed by ambient machinery isn’t the bottom-up, AI-generated art populating social media (think: Wes Anderson Directs Star Wars). It is the top-down, AI-powered platforming of art, which we’re already seeing across the media landscape. Algorithms are deciding, on a global scale, which stories to tell and how, and it is especially insidious in the realm of nonfiction.
The danger lies less in AI’s role in the creation of documentary, the actual production, than in its curation. Proprietary algorithms shape the decisions around what human audiences are exposed to, what gets bought and when, what gets platformed and where, and what stories get told. Media veteran and producer Evan Shapiro rightly points out that outsourcing accountability is a time-honored tradition in Hollywood. AI is simply the latest excuse.
However, AI is already hard at work at every level of filmmaking. At XTR, which backed Magic Johnson doc They Call Me Magic for Apple TV+, CEO Bryn Mooser has built a proprietary algorithm named “Rachel” to help guide his development process. He calls it a “zeitgeist machine” that combs through social media to see what’s trending and then focuses his development around those signals. While he faced criticism initially, he argues that it is a powerful tool that can enhance what filmmakers can do, and he hopes that it is embraced more widely.
It’s true as well that human executives still make the final greenlight decisions at these platforms, but as AI-generated data insights grow in wealth and power, executives’ willingness to die on the hill of their own (human) opinions is fading. Why take a risk on a novel concept when, for example, the true crime genre is a sure-fire hit factory, according to the data? It’s human nature, especially in this job market, for an executive to cover themselves. But in Hollywood’s rampant CYA culture, now AI-powered, executives may be covering themselves out of existence.
The unchecked race to maximize viewer engagement is a race to the bottom. Worse, from a journalistic ethics standpoint, in the realm of nonfiction it is a race to ignorance and delusion. AI has already been used to recreate the voices of deceased celebrities such as Anthony Bourdain and Andy Warhol, sparking fierce ethical debate within the documentary craft.
AI in Hollywood decision-making has opened many exciting new opportunities for creative exploration, but it also poses significant ethical concerns. As AI plays an ever more prominent role in the film industry, it is essential that we address those concerns head-on.