April 4, 2025

Entertainment Leaders Warn Against Weakening Copyright for AI and Other News


Entertainment Leaders Warn Against Weakening Copyright for AI

Hundreds of Hollywood A-list celebrities and entertainment industry professionals, including Paul McCartney, Ava DuVernay, Taika Waititi, Cate Blanchett, Natasha Lyonne, Alfonso Cuarón, Lilly Wachowski, Ben Stiller, Carrie Coon, and Lily Gladstone, have signed an open letter opposing the relaxation of AI-related copyright laws. The letter responds to recent proposals in which OpenAI and Google urged the government to loosen copyright protections for AI training purposes; the signatories argue that such changes would harm creative industries. Earlier this month, OpenAI and Google submitted proposals to the Office of Science and Technology Policy advocating for AI developers’ access to copyrighted materials for training, claiming it would benefit AI development.

SAG-AFTRA, representing 160,000 performers, successfully negotiated for actors’ rights regarding AI usage after a 118-day strike. The union secured agreements requiring consent and fair compensation for the creation and use of digital replicas in film and TV productions. The new contract ensures that actors maintain control over their digital likenesses and receive appropriate payment, even when AI-generated versions perform roles. In support of these efforts, California Governor Gavin Newsom enacted two laws in 2024 to protect actors from unauthorized AI replicas. One law mandates that labor contracts explicitly state any plans for AI-generated replicas. The other prohibits the commercial use of deceased performers’ digital replicas in various media without consent from their estates, covering TV shows, films, and video games.

Guardian

Showbiz Contracts: Adapting to the AI Revolution

As AI infiltrates the entertainment industry, new laws are emerging to regulate its use and to protect personal and biometric data used in AI training. These regulations include state privacy laws as well as legislation specifically governing AI in the industry. Entertainment professionals who regularly negotiate and draft agreements — performers, agents, studio executives, and attorneys — need to stay informed about these evolving laws and adapt their contract templates to ensure compliance and enforceability. Incorporating these requirements into agreements is essential for protecting the rights of all parties in an increasingly AI-influenced entertainment landscape.

The 2023 Screen Actors Guild agreement with film and TV studios introduced rules for “digital replicas” — AI-generated synthetic performances using actors’ images, voices, or likenesses — and required specific consent from SAG actors. In 2024, California and New York broadened these concepts to encompass all AI-created or significantly altered performances. California’s AB 2602, enacted in September 2024 and effective January 1, 2025, requires contractual consent and proper representation for performers before any use of their digital replica. The law extends protection beyond SAG members to all performers in the state, reflecting the growing impact of AI in entertainment.

Digital replicas, or deepfakes, are defined differently across states. California describes them as highly realistic, computer-generated electronic representations that are easily identifiable as an individual’s voice or visual likeness in a work they didn’t perform or one where their performance was fundamentally altered. New York’s definition focuses on digital simulations of a person’s voice or likeness that are so convincing a layperson couldn’t readily distinguish them from the authentic version. 

CBS News

AI-Generated Art Loses Copyright Battle in US Appeals Court

On Tuesday, a federal appeals court in Washington, D.C., ruled that AI-generated art created without human input cannot be copyrighted under U.S. law. The court affirmed the U.S. Copyright Office’s decision to deny copyright protection for an image created by Stephen Thaler’s AI system, “DABUS,” holding that only works with human authors qualify for copyright. The ruling is part of ongoing efforts by U.S. officials to address copyright issues raised by the growing generative AI field; the Copyright Office has also rejected copyright claims for Midjourney-generated images. Unlike Thaler, who claimed his “sentient” system created the image independently, some artists have argued for copyrights on AI-assisted images they helped create.

Yahoo

Spanish Regulators Target Unlabeled AI Content with Fines

Spain’s government has approved a bill implementing the EU’s AI Act, imposing hefty fines on companies that fail to label AI-generated content. The proposed law, pending parliamentary approval, deems improper labeling a “serious offence” punishable by fines of up to €35 million or 7% of global annual turnover. Digital Transformation Minister Oscar Lopez stressed AI’s potential to improve lives but warned of its capacity to spread misinformation and undermine democracy. The bill also bans subliminal techniques targeting vulnerable groups and prohibits AI-based classification of people using biometric data to determine benefits or assess crime risk, though authorities may still use real-time biometric surveillance in public spaces for national security. A new AI supervisory agency, AESIA, will enforce these regulations, with specific watchdogs overseeing cases in areas such as data privacy, crime, elections, credit ratings, insurance, and capital markets.

Euro News

Meta Faces Copyright Lawsuit from France’s Creative Industry

A group of French publishers and authors announced legal action against Meta. They allege the tech giant used their creative works without authorization to train its artificial intelligence systems. This lawsuit, filed in a French court, challenges Meta’s practices in developing its AI models, raising questions about copyright infringement and fair use for AI development. The case highlights growing tensions between content creators and tech companies over the use of copyrighted material in AI training. 

The National Publishing Union (SNE), Société des Gens de Lettres (SGDL), and National Union of Authors and Composers (SNAC) are pursuing the action on principle, arguing that AI market growth shouldn’t come at the expense of the cultural sector. They demand the removal of unauthorized data used for AI training, stronger protections for creators, and compensation for those whose work was used. The EU’s AI Act requires generative AI systems to comply with transparency and copyright standards, disclose AI-generated content, prevent illegal content creation, and publish summaries of the copyrighted data used for training. The complainants allege Meta has breached these obligations.

AP News

Deepfake Technology Market to Hit $13.9 Billion as Entertainment and Media Embrace AI

Since 2023, the deepfake AI market has experienced rapid growth, driven by adoption in media, entertainment, and cybersecurity. The social media, law enforcement, and digital marketing sectors are increasingly using deepfake technology to create customized, engaging content. Despite improvements in AI-based detection tools, deepfakes continue to pose challenges for verifying authenticity, raising concerns about identity fraud, political misinformation, and the spread of fake news. The market’s development is shaped by regulatory measures, ethical AI considerations, and the rise of deepfake-as-a-service platforms — factors that will determine whether the technology becomes a tool for innovation or remains a security threat.

In 2023, the deepfake AI market was dominated by the media and entertainment sector. Major studios and streaming platforms have increasingly adopted AI-generated content to enhance viewer engagement. The technology found applications in reviving historical figures, modifying actors’ performances, and providing dubbing in multiple languages. 

SNS Insider

Preserving Documentary Accuracy in the Age of Unregulated AI

“Some in the film world have met the arrival of generative AI tools with open arms. We and others see it as something deeply troubling on the horizon,” Rachel Antell, Stephanie Jenkins, and Jennifer Petrucelli, co-directors of the Archival Producers Alliance, wrote on March 1 in the Los Angeles Times. They argue that the increasing use of AI-generated content in documentaries has raised serious concerns among archival producers. That concern is shared across the industry: Variety reports the Motion Picture Academy is considering mandating disclosure of generative AI use for Oscar contenders. While such disclosure is important for feature films, it is critical for documentaries. The APA co-directors began noticing synthetic images and audio in historical documentaries they were working on in early 2023, and they warn that the lack of transparency standards is alarming — mixing real and artificial content could undermine the nonfiction genre and its crucial role in preserving collective history.

In February 2024, OpenAI introduced Sora, its new text-to-video platform, with a clip titled “Historical footage of California during the Gold Rush.” It resembled a classic Western with a happy ending, but it was entirely artificial. OpenAI presented this clip to demonstrate Sora’s ability to create videos from user prompts using AI that “understands and simulates reality.” However, the video is not reality but a mishmash of real and Hollywood-imagined imagery, reflecting the biases of the industry and archives. Like other generative AI programs, such as Runway and Luma Dream Machine, Sora sources content from the internet and digital materials. Consequently, these platforms merely recycle the limitations of online media, likely amplifying existing biases in the process.

LA Times

Ginger Liu is the founder of Hollywood’s Ginger Media & Entertainment, a researcher in artificial intelligence and visual arts media, and an entrepreneur, author, writer, artist, photographer, and filmmaker. Listen to the Podcast — The Digital Afterlife of Grief.
