AI and IP in Hollywood: Finding Balance on the Verge of a New Creative Class


“Industry stakeholders—artists, unions, studios, technologists, and legislators—must collectively forge a new balance between AI innovation and intellectual property rights.”

Artificial Intelligence isn’t pushing IP boundaries anymore. It’s smashing them. The film and music industries are bracing for impact. Studios are nervous. What once took armies of artists, writers, and directors can now be done in minutes by machines. And the machines are getting really good. Will systems like Vertex AI Media Studio, Llama, OpenAI’s Sora, and Runway Gen-2 replace American giants like Disney or a legendary French studio like EuropaCorp? In the United States, every individual owns the right to their own image as a form of intellectual property. AI has complicated the very definition of what one’s image is. And when Elon Musk and Jack Dorsey openly called to “delete IP law,” it landed like a punch to the face of the creative world. For artists, filmmakers, and entrepreneurs, the statement wasn’t just provocative. It was existential. But from the shock, a new creative class is emerging.

In the past, it took visionary minds like Luc Besson and designer Jean-Paul Gaultier to imagine entirely new worlds. Worlds that felt both foreign and strangely familiar. Crafting the 23rd-century New York in The Fifth Element wasn’t just about spectacle. It was a colossal fusion of fashion, culture, and cinematic talent. Gaultier’s futuristic costumes, rooted in contemporary style yet radically inventive, are a testament to the immense time and genius once required to envision the future. James Cameron’s Avatar followed in this tradition, with entire ecosystems, languages, and cultures meticulously built from scratch. And in Valerian and the City of a Thousand Planets, Besson once again pushed the limits of imagination, demanding years of creative effort and hard work.

The new generation of artificial intelligence systems can now create entire visual worlds (whether it’s the future, the past, or even alien planets) on par with what legendary directors like James Cameron and Luc Besson have done. What used to require advanced post-production tools, extremely expensive camera equipment, and hundreds of people to produce is now being handled entirely by AI, which can design and visualize worlds just like the great creative minds of cinema. What we’re witnessing isn’t just a tech revolution. It’s the start of a creative collapse, unless the world acts.

There is no real regulation. No universally agreed-upon standard for how these AI systems train, what they use, or who owns the result.

Major Hollywood figures are pushing back against OpenAI and Google’s appeals to the U.S. government to allow their AI models to train on copyrighted works.

Film, television, and music figures, including Ron Howard, Cate Blanchett, and Paul McCartney, have signed a letter expressing alarm at the tech giants’ suggestions, made in recent submissions to a White House office, that they should be able to access publicly available intellectual property. “America’s global AI leadership must not come at the expense of our essential creative industries,” the letter states, adding that the arts and entertainment industry provides more than 2.3 million jobs and bolsters America’s democratic values abroad. “But AI companies are asking to undermine this economic and cultural strength by weakening copyright protections for the films, television series, artworks, writing, music, and voices used to train AI models at the core of multi-billion-dollar corporate valuations.”

But rising AI giants say they need access to everything. Scripts. Songs. Faces. Voices. Why?

Big Tech companies say they can’t compete with China under existing U.S. copyright laws and that they need unfettered access to art, from Tom and Jerry to Scarlett Johansson’s Lucy, Iron Man, and James Bond, to train their AI models, framing it as a matter of national security.

Google and OpenAI want the U.S. government to declare that training AI on copyrighted art, movies, and TV shows is “fair use,” arguing that, without that exception, they will lose the race for dominance to China.

AI Risk Is Everywhere

But at what cost? U.S. law currently states that every person owns the right to their likeness. But what is a likeness when a neural network can copy your face, your voice, your soul? On the other side of the pond, in the United Kingdom, lawmakers are considering a new “right of personality” to protect public figures from unauthorized use of their voice or image by artificial intelligence systems. Scarlett Johansson expressed outrage after OpenAI allegedly imitated her voice without permission, calling the act “shocking.” Similarly, actor Paul Skye Lehrman has initiated legal action after alleging his voice was used without consent by an AI firm.
Moreover, the 2024 SAG-AFTRA strike underscored a growing fear: AI isn’t just a tool. It’s a threat. Actors worry not about symbolic replacement but literal displacement. Deepfakes and AI-generated performers can now replicate micro-expressions and vocal nuances with uncanny precision, all without human involvement.

Writers also face unprecedented risk. The Writers Guild of America (WGA) has expressed concern that AI-generated scripts could dilute originality and raise serious questions about authorship and compensation. Even when AI is used as a production tool for tasks like rotoscoping, color correction, or object removal, questions of copyrightable authorship become increasingly murky.

This is not a fringe issue. Studios are investing heavily in AI to reduce production costs. Where a blockbuster might once have required a thousand people, it could soon be done by 50. AI slashes timelines and overhead.

Governments and Courts Scramble to Keep Up

The U.S. Copyright Office has attempted to clarify the boundaries. In 2024, it published an updated report confirming that AI-generated content may be eligible for copyright protection—but only if a human has made a substantial creative contribution. However, the guidance is still vague. While it affirms that selecting and arranging AI-generated material can qualify as authorship, the threshold of “sufficient creativity” remains undefined.
The rise of AI presents a double-edged sword: it offers extraordinary creative possibilities and operational efficiency, but it also introduces complex legal uncertainties. Industry stakeholders—artists, unions, studios, technologists, and legislators—must collectively forge a new balance between AI innovation and intellectual property rights.

Legal precedent is beginning to emerge. In Andersen v. Stability AI Ltd., artists alleged that AI platforms including Stability AI and Midjourney trained their systems on billions of images scraped from the internet, including copyrighted content, without permission or compensation. The court found the allegations sufficient to proceed, particularly concerning claims of direct infringement through use of compressed image copies.
Globally, regulators are beginning to respond. The European Union’s AI Act, passed in 2024, classifies certain AI applications as high-risk and emphasizes transparency, safety, and fundamental rights, with particular implications for high-impact sectors like the creative industries.

There is a Way

AI doesn’t have to be the villain. If developed responsibly, it could enhance digital security, assist human creators, and even strengthen intellectual property protections. But that requires firm boundaries like privacy safeguards, consent mechanisms, and fair compensation models.
It won’t take hundreds of tools to upend the creative economy, just a few truly original, never-before-seen AI actors.

Once those digital stars rise, the economic logic will be impossible to resist. Cheaper and unstoppable. Studios won’t just use AI. They’ll become AI.
