Elvis, Edith Piaf, Maria Callas, and George Carlin are all making an AI comeback, and de-aging for Hollywood actors is set to be the norm after Indiana Jones.
For the last few years, I have been writing about grief and digital afterlife technology and the companies that create photorealistic digital twins, chatbots, synthetic voices, AI video legacies, and holograms. The entertainment industry has been a driver of innovation, creating digital personas of Tupac Shakur and ABBA. In just the last few weeks, Elvis, Edith Piaf, and Jimmy Stewart have been resurrected or de-aged for entertainment. Harrison Ford was de-aged by 40 years for his role as Indiana Jones in last year’s Indiana Jones and the Dial of Destiny.
Elvis is in the Building for a New AI Hologram Show
Elvis Presley is going back on the road as an AI hologram and will perform in London later this year. Layered Reality, the British company behind the project, is looking to piggyback on the success of ABBA Voyage. The show combines AI, holographic projection, augmented reality, and live theatre to recreate events from the late singer’s music and life.
Presley died in 1977 and has enjoyed numerous posthumous comebacks. He is still the best-selling music artist of all time with more than 500 million records sold. Layered Reality was given access to thousands of the singer’s photos and home videos from the Elvis Presley estate to create the singer’s likeness. Presley never performed in London during his lifetime and Layered Reality aims to showcase “…a joyous celebration of Elvis’s life; the man, the music, and his cultural legacy,” according to Layered Reality founder and chief executive Andrew McGuinness.
According to the website:
“The show peaks with a concert experience that will recreate the seismic impact of seeing Elvis live for a whole new generation of fans, blurring the lines between reality and fantasy.”
It is easy to see how this technology could one day be used to create AI hologram replicas of our own deceased, performing for and conversing with the bereaved.
De-Aging Indiana Jones to Feed Our Nostalgic Hollywood Fantasy
In June 2023, Harrison Ford returned to the cinema as Indiana Jones in Indiana Jones and the Dial of Destiny. What made the film unique was a 79-year-old Ford, de-aged for the 25-minute opening sequence. I must admit that I had reservations because I didn’t want one of the most iconic characters in cinema history to be failed by bad SFX. I was wrong. Sure, it wasn’t perfect, it wasn’t Indiana Jones and the Raiders of the Lost Ark, and there was little of Ford’s human charisma, but after 25 solid minutes, I bought into the fantasy of seeing our handsome hero fight off Nazis like it was 1981. Consumed by a wave of nostalgia, the if onlys turned into some kind of grief. What a shame, I thought, that this wasn’t a young Harrison Ford starring in another Indiana Jones movie. In this role, he truly was the definitive movie star hero.
In an interview with Total Film, the film’s director James Mangold described how Ford acted out scenes as if he were 35. For the de-aging process, Lucasfilm had hundreds of hours of footage of Ford from the original films to draw on. Instead of the weeks or months such effects usually take, Ford’s de-aged shots came back within days.
“We had hundreds of hours of footage of him in close-ups, in mediums, in wides, in every kind of lighting, night and day…I could shoot Harrison on a Monday as, you know, a 79-year-old playing a 35-year-old, and I could see dailies by Wednesday with his head already replaced,” says Mangold.
Mangold’s goal in the opening 25 minutes was to remind all of us what was so good about Harrison Ford’s 1981 Indiana Jones persona. The nostalgia hit hard.
“The goal was to give the audience a full-bodied taste of what they missed so much. Because then when the movie lands in 1969, they’re going to have to make an adjustment to what it is now, which is different from what it was,” says Mangold.
Back in April last year, Apple’s Machine Learning Research team posted its findings on FaceLit: Neural 3D Relightable Faces. FaceLit can generate a 3D portrait from a single photo, and the user can change the angle of illumination at any point. FaceLit aims to create realistic lighting for photorealistic faces, bringing Hollywood technology to our smartphones. Here’s what the research says:
“FaceLit is capable of generating a 3D face that can be rendered at various user-defined lighting conditions and views, learned purely from 2D images in the wild without any manual annotation. Unlike existing works that require careful capture setup or human labor, we rely on off-the-shelf pose and illumination estimators. With these estimates, we incorporate the Phong reflectance model in the neural volume rendering framework. Our model learns to generate shape and material properties of a face such that, when rendered according to the natural statistics of pose and illumination, produces photorealistic face images with multiview 3D and illumination consistency. Our method enables the photorealistic generation of faces with explicit illumination and view controls on multiple datasets.”
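The Phong reflectance model the researchers mention is a decades-old graphics formula that splits light into ambient, diffuse, and specular terms. The minimal sketch below, in plain NumPy with illustrative material values of my own rather than anything from Apple’s code, shows how shading at a single surface point changes as the light direction moves, which is essentially what a relightable face lets the user do.

```python
# A minimal sketch of the Phong reflectance model that FaceLit incorporates into
# neural volume rendering. Material values here are illustrative, not Apple's.
import numpy as np

def normalize(v):
    """Return the unit-length version of a vector."""
    return v / np.linalg.norm(v)

def phong_shade(normal, light_dir, view_dir,
                ambient=0.1, diffuse=0.7, specular=0.4, shininess=32.0):
    """Compute Phong shading at one surface point for given directions."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)

    # Diffuse term: how directly the light hits the surface.
    diff = max(np.dot(n, l), 0.0)

    # Specular term: reflect the light about the normal and compare with the view.
    r = 2.0 * np.dot(n, l) * n - l
    spec = max(np.dot(normalize(r), v), 0.0) ** shininess if diff > 0 else 0.0

    return ambient + diffuse * diff + specular * spec

# Example: sweep the light around a fixed face normal, as a user would when
# changing the angle of illumination on a relightable portrait.
normal = np.array([0.0, 0.0, 1.0])
view_dir = np.array([0.0, 0.0, 1.0])
for angle in (0, 30, 60):
    rad = np.radians(angle)
    light_dir = np.array([np.sin(rad), 0.0, np.cos(rad)])
    print(f"light at {angle:>2} degrees -> intensity {phong_shade(normal, light_dir, view_dir):.3f}")
```

FaceLit’s contribution is learning the shape and material inputs to this kind of equation purely from 2D photos, so the lighting knob comes for free once the face is generated.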
AI-Generated George Carlin Has Upset Comedy Fans and Family
While the new Indiana Jones film showed the possibilities of Hollywood, actors, and AI working in tandem, this next reincarnation not only falls flat but also highlights what the recent SAG-AFTRA strikes were fighting for when it comes to persona IP.
Beloved stand-up comedian George Carlin, who died 15 years ago, has been resurrected for an AI-generated one-hour special called “George Carlin: I’m Glad I’m Dead.” The jokes are contemporary, but the impersonation is a bit off. The AI comedy podcast and YouTube show Dudesy, created by Will Sasso and Chad Kultgen, is behind the special. To apologize in advance for the poor Carlin impersonation, and especially for legal reasons, Dudesy states that it is not copying the original human comedian but generating an impersonation of him.
“I listened to all of George Carlin’s material and did my best to imitate his voice, cadence, and attitude, as well as the subject matter I think would have interested him today. So think of it like Andy Kaufman impersonating Elvis or like Will Ferrell impersonating George W. Bush.”
George Carlin’s daughter, Kelly Carlin, took to Twitter (Fuck X) soon after the special was released last week. “No machine will ever replace his genius.” Carlin then directed her tweet to the children of late comedians, including Robin Williams and Joan Rivers: “We should talk. They’re coming for you next,” she wrote.
“Let’s let the artist’s work speak for itself. Humans are so afraid of the void that we can’t let what has fallen into it stay there. Here’s an idea, how about we give some actual living human comedians a listen to? But if you want to listen to the genuine George Carlin, he has 14 specials that you can find anywhere,” wrote Kelly Carlin.
AI-Generated News Anchors Aim to Present Truth with Fake Humans
Another negative use of AI personas comes from Los Angeles-based news station Channel 1, which is creating digital human broadcast anchors to present the news. The AI personas will launch later this year on free ad-supported streaming TV services, including Tubi, Pluto, and Crackle. Channel 1 founder Adam Mosam aims to reassure viewers that the station will “create a responsible use of the technology.” The irony of truth in journalism being presented by a fake anchor isn’t lost on anyone.
Edith Piaf’s Likeness Recreated by Artificial Intelligence for New Film
Warner Music Entertainment and production company Seriously Happy are developing an AI feature project about France’s most famous singer, Edith Piaf. The AI technology has been trained on hundreds of Piaf’s images and voice clips. According to Warner Music, artificial intelligence will enable Piaf’s “distinct voice and image to be revived to further enhance the authenticity and emotional impact of her story.” Describing AI-generated images and voice in terms of ‘authenticity’ is contradictory and confusing. To describe anything created by AI as authentic is ludicrous; the clue is in the term artificial intelligence. What is created by AI is artificial. What is authentic are the actual audio and visual recordings of one of the world’s most famous singers.
The recent SAG-AFTRA deal aims to protect an individual’s likeness and voice as intellectual property, shielding personal IP from recreation by AI for commercial purposes. Piaf’s estate has approved the biopic and the capture of her image and voice for AI manipulation and creation in the film. Piaf’s AI-generated voice will narrate the animated film, which will also include actual footage and recordings of her songs and is set in Paris and New York between the 1920s and the 1960s. The film will mix archival footage, TV and stage performances, interviews, and personal footage.
I shared the Piaf article with my network and the responses were a mixed bag, with many expressing anger. I too shared my skepticism about an estate wanting to cash in. The estate has tried to convince itself and Piaf’s fans that being recreated with AI to win new fans is what she would have wanted. Reassuringly, recordings of her original songs will be used in the film. One would hope that AI doesn’t dare touch “Non, je ne regrette rien” and “La Vie en rose.” Maybe the fans, myself included, are being a bit harsh because generative AI is a new technology. The wonderful biopic La Vie en Rose, with Oscar-winner Marion Cotillard, dramatized Piaf’s short life. It was fiction based on fact; a different technology recreating and embellishing the truth.
Maria Callas XR/AI Recreation Marks 100th Anniversary
Maria Callas’s likeness has been recreated with AI technology as part of the ongoing XReco European project. Vienna-based production company FFP also recreated the interior of the Budapest Opera House, where Hollywood actor Angelina Jolie recently filmed scenes for Maria, a new film about the opera singer’s life. Callas’s synthetic image was reconstructed from more than 50 internet images using generative AI, AI-based shape estimation, traditional view-based 3D modeling, and a trained Stable Diffusion model that can pose her from any angle. Callas was one of the biggest performers of the 20th century, and the project aims to push groundbreaking new technology while introducing her to a contemporary audience.
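The XReco/FFP pipeline itself has not been published, but to give a sense of the kind of tooling involved, here is a minimal, hypothetical sketch of querying a subject-specific Stable Diffusion checkpoint for different viewing angles using the open-source diffusers library. The checkpoint path and prompts are placeholders of my own, not assets from the project.

```python
# Hypothetical sketch: prompting a subject-specific Stable Diffusion checkpoint
# for several camera angles. "path/to/subject-checkpoint" is a placeholder for a
# model fine-tuned (e.g. DreamBooth-style) on reference photos of the subject.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/subject-checkpoint",
    torch_dtype=torch.float16,
).to("cuda")

# Ask for the same subject from different viewpoints.
angles = ["front view", "three-quarter view", "left profile"]
for angle in angles:
    prompt = f"a studio portrait of the subject, {angle}, soft lighting"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save(f"portrait_{angle.replace(' ', '_')}.png")
```

In practice a production pipeline like the one described above would combine this kind of generative step with the shape estimation and view-based 3D modeling the project mentions, rather than relying on prompts alone.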
Ginger Liu is the founder of Ginger Media & Entertainment and a Ph.D. researcher in artificial intelligence and visual arts media, specifically death tech, the digital afterlife, AI death and grief practices, AI photography, entertainment, security, and policy. She is an author, writer, artist photographer, and filmmaker. Listen to the podcast The Digital Afterlife of Grief.