Hollywood No Fakes Act, Authors Sue for AI Copyright Infringement, Deepfake Porn, and Other News
Generative AI and copyright are still causing major concern for Hollywood creatives and artists across industries. The proposed “No Fakes Act” aims to protect actors and singers from having their voices and likenesses used by entertainment giants without permission. A class action filed by authors accuses the likes of Meta and Bloomberg of scraping their work without permission. A report reveals that the majority of deepfake videos are pornographic and use people’s images without consent. You can see the common thread here.
US Senators Propose Hollywood No Fakes Act
U.S. senators have proposed a bill, backed by the entertainment industry, that would “protect the image, voice, and visual likeness of individuals.” The bill aims to prevent artificial intelligence from being used to replicate the likenesses and voices of actors, singers, and other artists without permission. The “No Fakes Act” (Nurture Originals, Foster Art, and Keep Entertainment Safe Act) was announced by four senators and is supported by Hollywood creatives who are increasingly concerned about their likenesses being stolen by AI. Both Democrats and Republicans have endorsed the act, which would ban companies from copying “the image, voice, and visual likeness of individuals” without the artist’s permission. Put another way, artists “shall have the right to authorize the use of the image, voice, or visual likeness of the individual in a digital replica.”
Google Will Defend Generative AI Users Against Copyright Claims
Google will back Workspace and Google Cloud users against intellectual property lawsuits related to generative AI use. The company follows similar steps taken by Microsoft, which announced its Copilot Copyright Commitment promising to assume the legal risks of copyright infringement related to AI-generated content. Google will provide indemnity for third-party copyright claims related to both training data and generated output. The company will not be responsible, however, for infringement involving copyrighted content that users upload themselves.
Disney’s Loki Poster in Generative AI Storm
Disney has been accused of using generative AI to create the promotional poster for the second season of Loki. The image has been linked to the stock image platform Shutterstock, which would break the site’s licensing rules for AI-generated content. Illustrator Katria Raden flagged the image on X/Twitter, claiming that the clock squiggles in the background look AI-generated and appear to have been pulled from the Shutterstock catalog. Other users said they had purchased the stock image, which was released in the past year. AI image checkers have also flagged the image as AI-generated. Shutterstock’s rules specify that AI-generated content cannot be licensed unless it is created with the platform’s own AI image generator, so that it can prove IP ownership of all content. Disney has since told The Verge that its artist did not use AI to create the image.
Authors Sue Big Tech Over AI Copyright Infringement
A proposed class-action copyright lawsuit has been filed by a group of writers who accuse Meta, Microsoft, and Bloomberg of using their work to train AI systems without permission. Authors Mike Huckabee and Lysa TerKeurst and writers Tsh Oxenreider and John Blase, among others, told the court that their books were used to train Meta’s Llama 2 large language model, developed in partnership with Microsoft, as well as BloombergGPT. The complaint centers on the Books3 dataset, which is alleged to contain thousands of pirated books. The lawsuit also accuses AI research group EleutherAI of copyright infringement for allegedly providing the Books3 dataset for use in the companies’ systems.
The Majority of Deepfake Videos Are Pornographic
According to the 2023 State of Deepfakes report, the number of deepfake videos online has increased by 550% since 2019, before generative AI video technology became easily accessible, and the majority are pornographic. Deepfakes are created using generative AI, which can recreate a hyper-realistic person and scenario in video, still images, or audio. The report found that deepfake pornography accounts for 98% of deepfake content online, that victims are often targeted in blackmail scams, and that 99% of victims are women. The report identified 95,820 deepfake videos, 85 deepfake porn channels, and 100 websites linked to the deepfake community. A free one-minute deepfake pornographic video can be created from a single image of the victim in less than half an hour.
Cybersecurity Industry Alarmed by AI Attacks
Cybersecurity leaders are worried about the increasing use of AI by cybercriminals, particularly for deepfakes. According to a recent Integrity360 survey, 68% of cybersecurity professionals were concerned about deepfake attacks on their organizations. Of the 250 CIOs surveyed, most recognized that AI cyberattacks were a threat to their businesses, but many had little understanding of AI’s impact on cybersecurity.
Google’s Search Generative Experience Can Create Images From Text Prompts
Google’s Search Generative Experience (SGE) can now produce images from text prompts. The tool is powered by Google’s Imagen AI models, and image generation is also available directly from Google Images. Google’s move follows Microsoft’s AI image creator in Bing Chat, which uses OpenAI’s DALL-E. Generated images will carry metadata labels and watermarks indicating they were created by AI. Users must be over 18 and cannot create images that depict photorealistic faces of recognizable individuals.
Ginger Liu is the founder of Hollywood’s Ginger Media & Entertainment and a Ph.D. researcher in artificial intelligence and visual arts media, specifically grief tech, digital afterlife, AI, death and mourning practices, AI and photography, biometrics, security, and policy. She is also an author, writer, artist, photographer, and filmmaker. Listen to the podcast, The Digital Afterlife of Grief.