November 20, 2024

AI Avatars, Death Bots, Grief, and the Creation of an AI Essence


I interviewed Barnabas Takacs, PhD, of FFP Productions about his latest project, Maria. The Maria Callas 100th Anniversary Challenge set out to fill a virtual theater with an audience of synthetic humans, or digital twins, for a virtual concert that coincided with the release of the Hollywood movie Maria. The creation of a photorealistic Maria Callas and her audience aimed to demonstrate how synthetic humans could be used to captivate audiences in “visually stunning, emotionally moving, inclusive ways.”

FFP Productions’ goal was to explore 3D production technologies and different motion capture tools to build synthetic humans that accurately and realistically replicate human movements, facial expressions, and voice, in order to create and publish more believable characters. What drew me to the project was the use of low-resolution images taken from the internet to create the likeness of the singer, and Takacs’s use of the term essence when describing the outcomes of the project.

The Maria demo showed how Stable Diffusion could “re-imagine” a person from any angle, style, and environment using text prompts. The pipeline translates images into words and transforms those words by “capturing their essence” to “create new meanings before turning them back to images again.” This produces photorealistic renderings of the face, which are imported back into the 3D film pipeline, where head shape is estimated and the extra information needed for virtual production is layered in, re-imagining Maria Callas for a 21st-century audience that engages with the past. Takacs predicts that the future of this technology is AI-generated, 3D-looking images that are imagined rather than rendered: diffusion-model content generated on demand that evolves in real time based on our desires.
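FFP’s exact toolchain is not public, but the image-to-text-to-image loop described above can be sketched with open-source stand-ins: a captioning model (BLIP) to translate the photograph into words, and Stable Diffusion to turn restyled words back into a new image. The file names and prompt phrasing below are hypothetical.

```python
# Minimal sketch of the image -> text -> image "re-imagining" loop,
# assuming BLIP for captioning and Stable Diffusion v1.5 for generation.
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: translate the low-resolution source photograph into words.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

source = Image.open("maria_lowres.jpg").convert("RGB")  # hypothetical input file
inputs = processor(source, return_tensors="pt").to(device)
caption = processor.decode(captioner.generate(**inputs)[0], skip_special_tokens=True)

# Stage 2: transform the words (restyle the "essence" in the prompt),
# then turn them back into a new photorealistic image from a new angle.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
prompt = f"{caption}, three-quarter profile, studio lighting, photorealistic portrait"
reimagined = pipe(prompt).images[0]
reimagined.save("maria_reimagined.png")
```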

We desire to bring our deceased loved ones back to life.

Fig 1. FFP Production still of Maria. 2023 (online)

FFP Productions also created photorealistic virtual humans, called metahumans, to represent future faces in the metaverse. Photographic headshots are rendered in real time under variable lighting conditions, and the sequence demonstrates racial diversity and different appearances: changes in skin color, hairstyle, and facial shape. The face recognition and search algorithm behind these photorealistic metahumans was designed to produce a visual prediction of future faces in the metaverse. This is echoed in a National Geographic article from the 1990s, which I found in my father’s autobiography notes and which showed the composite facial features of a future multi-racial America. My father is bi-racial English and Chinese, and I was born in the United States and raised in England.

Transferring the technology from both of these examples to my practice raised issues. Synthetic humans have development problems, such as inconsistent data integration and quality across different sources. Creating a synthetic human or digital twin for my practice was not only labor intensive, requiring engineering skills, but also costly. My takeaway from the project was the use of low-resolution images from different sources to create photorealistic versions of myself.

I therefore took my practice in a different direction. Instead of creating a synthetic human of my persona, which by definition seemed far removed from the technology used to create a visual story or legacy of my life, I created a fusion of technologies that reflected the historical use of still photography in identity representation. Photographic technology has captured moments in my life and situated my standing within the family and the wider world, so it made sense to use the images in my archive to create a legacy persona with AI technology. By fusing the passive still images from my family archive with AI-animated video and synthetic voice, could I create a story of my life for future generations that represented my true essence, a posthumous essence that could help or hinder the bereaved?

To create my digital afterlife, or posthumous essence, I used Deep Nostalgia, the D-ID AI platform licensed by the genealogy site MyHeritage, together with an AI conversational video platform and Eleven Labs’ synthetic voice software. The content framework was built from family snaps and Facebook images in my archive, staged studio self-portraits, and video stills. I authored the text that was fed into the voice software to create an AI version of my voice, which narrates fifteen video sequences. Each sequence is created by a deep learning process with two main stages: first, the face in a frontally posed photograph is recognized; second, a driver video is applied that animates the face.
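Deep Nostalgia’s model is proprietary, so the sketch below only illustrates the shape of this two-stage process with stand-ins: OpenCV’s bundled Haar cascade for the face recognition stage, and a hypothetical pretrained motion-transfer network for the driver-video stage (its transfer_motion interface is invented for illustration).

```python
# Sketch of a Deep Nostalgia-style two-stage pipeline (open-source stand-ins).
import cv2

def detect_face(photo_path: str):
    """Stage 1: locate a frontally posed face in a still photograph."""
    image = cv2.imread(photo_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("No frontal face found; the platform would reject this photo.")
    x, y, w, h = faces[0]
    return image[y : y + h, x : x + w]

def animate_face(face, driver_video_path: str, model):
    """Stage 2: re-enact the driver video's motion on the still face.

    `model` stands in for a pretrained motion-transfer network (e.g. a
    first-order-motion-style model); its interface here is hypothetical.
    """
    frames = []
    capture = cv2.VideoCapture(driver_video_path)
    while True:
        ok, driver_frame = capture.read()
        if not ok:
            break
        # Each output frame warps the source face to follow the driver's
        # facial movements in this frame.
        frames.append(model.transfer_motion(source=face, driver=driver_frame))
    capture.release()
    return frames
```

Note that the source photograph contributes only appearance; all motion comes from the driver video, which is why, as discussed below, the resulting expressions are not the ancestor’s own.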

Deep Nostalgia’s company website sets out to distinguish itself from deepfake applications, manipulation software designed “to deceive or mislead viewers by using bad faith actors for personal appropriation,” and invites users to upload only photographs of their deceased ancestors, declaring that Deep Nostalgia brings “ancestors back to life.” The platform makes further claims for its services around personal identity, historical provenance, and primary kinship: “experience your family as never before.” MyHeritage founder Gilad Japhet claims that Deep Nostalgia provides “new ways to allow people to connect to their family history.” I uploaded a selection of still photos from my archive with me as the solo subject of the images. These unique and personal images, taken together, should represent a snapshot of my identity. Instead, the AI software created a set of uniform animated expressions, based on a data source of prerecorded driver videos of twenty MyHeritage employees.

Deep Nostalgia constitutes a new “indexical generated hybrid,” combining two different indexicalities: the historical photograph, which indexically registered the faces of those in the past in the pre-photographic scene, and the video, which has indexically registered the facial expressions and movements of MyHeritage employees. The technology therefore diminishes individual difference, and perhaps essence.

In two of the video sequences, the synthetic voice fails to accurately replicate my own and reminds the viewer that, despite the remarkable capabilities of this new technology, AI cannot hide from the complexities of being human.

Fig 2. Video still of Digital Afterlife of Grief. 2023 (online)

Fig 3. Video still of Digital Afterlife of Grief. 2023 (online)

Fig 4. Video still of Digital Afterlife of Grief. 2023 (online)

Victorian postmortem photography created the illusion of the dead as sleeping, denying death. Deep Nostalgia denies death through the illusion of wakeful, animated people. Deep Nostalgia/my work presents a hybrid of photography and moving images that questions our cultural perception of the still photograph as death and movement as life. The photograph is the past frozen in time, “that-has-been.” Deep Nostalgia/my work sets stillness-as-death against motion-as-life: an animated stillness, an animated death. Animation derives from the Latin anima, meaning soul and life. Deep Nostalgia/my work is a performance animation that simulates action.

Deep Nostalgia/my work brings family photographs to life like magic; the platform even displays a wand icon while users wait for their photograph to be transformed. Photography in the 19th century was also thought to be a form of magic, a technology that blended science and the spiritual to create an optical illusion, as in the popular spirit photographs of the time. Barthes described photography as an “emanation of past reality: a magic, not an art.”

Deep Nostalgia/my work renders the impossible as real, giving animated photographs of the dead imaginative and emotional power, and feeding the desire to see our dead relatives return to the living. Deep Nostalgia/my work is a mash-up of chronological time, being neither a still photograph from the past nor a moving image of the present: an uncanny valley of epic proportions.

The uncanny is frightening because it is unknown, yet familiar. Deep Nostalgia is the presence of the restless dead in motion. Does grieving return to the present when death returns to the present tense?

The driver videos create movement in still photos, producing a sense of presence through movement. While the still photo represents a specific ancestor, the final animation does not, since the facial movements attached by the driver video originated with MyHeritage employees. The process creates universalism from a tiny data set, overriding familial individualism and essence, and standardizing identity. Big tech creates a universal, generic identity: generic facial expressions, animations as generalizations, ideal types.

Deep Nostalgia and other AI tools show how algorithmic culture is being produced and how photo-based AI software is used as an existential medium. Watching our ancestors come technologically alive gives the bereaved a self-reference for their own existence and a reflection of presence and limited existence. When life and death are mediated online, there is a digital visual desire to overcome death, the limit situation of human mortality. This is part of a wider cultural phenomenon of big tech intervening in and controlling temporality and infinite human existence.

The algorithmic culture of AI-driven technology “offers a concrete visual performance of how deeply seated human longings can be addressed and expressed through systems of computational imaging.” Algorithmic technologies articulate existential human desires and cultural forms of human existence, and further research is needed to investigate how death is constructed, negotiated, and managed by and through media technology in the 21st century, a technology that automates and supplants human emotional response and imagination.

If the limit situation is the death boundary of existence, how is AI technology devised to represent and overcome death? Jaspers described death as a “limit situation,” a consciousness of death that threatens life. Digital media has shaped the material and symbolic world we inhabit, creating new digital limitations, or digital limit situations. Photography-based AI software like Deep Nostalgia is an existential medium: watching our ancestors technologically come back to life provides a self-reference to our existence through relation and our limit situation.

With digital networks and AI, there is a shift in identity once a person has died, both for the deceased and for those left behind to mourn. Introducing the deceased back into the grieving process means that digital identity is often co-constructed.

The shift is in who controls the grieving process. Whereas Facebook group pages are created by a human user, such as a family member or friend who posts memories in the form of images, videos, and text, AI death tech companies direct humans toward a formula of identical posthumous identity making, a one-size-fits-all built from small data sets and standardized questions, thereby controlling posthumous digital identity.

My practice aims to create my essence using death tech company software, photorealistic applications, grief bots, and synthetic audio, to ascertain whether a posthumous essence accurately reflects the living after they are deceased and to discover the implications for grieving and the bereaved.

Ginger Liu is the founder of Ginger Media & Entertainment and a writer/researcher in artificial intelligence and visual arts media, specifically Hollywood, death tech, the digital afterlife, AI death and grief practices, AI photography, entertainment, security, and policy. She is an author, artist, photographer, and filmmaker. Listen to the podcast The Digital Afterlife of Grief.

