December 21, 2024
Photo by Steve Johnson on Unsplash

When my mother was in the final stages of dementia, I rarely took photographs or videos of her because I didn’t want to document the evidence of her living with dementia. Documenting her decline was something I struggled with as an artist-photographer who was used to taking and sharing images every day.

This mindset changed after a hospital examination concluded that my mother had anywhere between two weeks and a few months left to live. From that moment in the hospital, I photographed my mother smiling back at me, oblivious to her prognosis. The thought of losing her forever made me obsessed with capturing her essence.

But what is essence? Words like soul, aura, air, and core are familiar terms for describing that unique quality a loved one has. For my mother, it was her laugh and the cheeky, mischievous look she gave. Dementia never robbed her of this essence, and one could almost be fooled into thinking she wasn’t suffering from this debilitating illness at all.

The desperation I felt in grief, and this need to capture my mother’s essence and create a continuing bond, led me to my Ph.D. research into the digital afterlife and how technology, from Victorian portrait photography to AI grief tech, communicates identity and helps us remember our deceased loved ones.

My mother died in 2018, having struggled with dementia for a number of years. In the early stages, I videoed her telling one of her many amazing life stories. How could future AI legacy technology be used to capture a life story before dementia takes those precious memories away?

Before photography, only the rich could afford a portrait painter who could grasp the uniqueness of an individual. By the time photography was affordable to the masses in the late 19th century and throughout the 20th century, family snaps filled photo albums, often including images of deceased relatives. These albums were vessels of remembrance, identity, and storytelling. Death tech companies like StoryFile represent the recent shift from photography to AI as a tool for remembering the deceased.

Mark Zuckerberg’s metaverse platform could change how we communicate with our deceased loved ones. In an interview with podcaster Lex Fridman about the company’s VR platform, Zuckerberg predicted that the digital afterlife is the future of virtual reality. Meta’s new technology can scan users’ faces to build 3D virtual models.

Zuckerberg acknowledged that there is demand for creating a virtual version of a dead person with AI and VR technology. “If someone has lost a loved one and is grieving, there may be ways in which being able to interact or relive certain memories could be helpful,” said Zuckerberg. He also acknowledged that such communication could become unhealthy.

Zuckerberg’s ideas are not new, nor are they his own; a number of death tech platforms are already competing for a piece of the pie. StoryFile created interactive holograms of Holocaust survivors and aims to provide this technology as a service to customers. I wrote about StoryFile last year, and Zuckerberg has echoed these ideas on his own platform: Meta was “focused on building the future of human connection,” where people could communicate with hologram replicas and built-in AI bots of their deceased loved ones.

Death and the digital afterlife could prove to be a lucrative venture for Zuckerberg: it is predicted that by 2100 Facebook will hold more profiles of the dead than of the living, an incredible 4.9 billion. Providing a fee-based service that lets the living communicate with these dead would be a savvy business decision.

Part of my Ph.D. research aims to address Zuckerberg’s concern. AI death technology is relatively new, and it may be some time before research concludes whether digital afterlife communication helps or hinders grief. Social media platforms like Facebook have featured in numerous studies of prolonged grief and how we maintain continuing bonds. But Facebook is a platform for communication, whereas AI, specifically generative AI, can replicate a moving, talking human identity, so the response in grief will be different.

Research on the ethics of deathbots suggests that there are pros and cons to using AI griefbots to support the grieving process. I would add that how we deal with grief is different for everyone and depends on how the deceased died, our relationship with them, and our own coping mechanisms. Makers of generative AI tools for creatives describe the technology as a collaborative tool, and the term is well suited to how AI bots might be used by people with mental health concerns, including grief.

How we grieve and communicate is not our only mental health concern. Researchers from MIT and Arizona State University conducted a study examining how a person’s prior beliefs about an AI agent, such as a chatbot, can affect how they interact with it.

Researchers told participants that the mental health chatbot they were about to use was either manipulative, empathetic, or neutral. This prior framing influenced how users interacted with the chatbot, even though it was the same chatbot in every condition. Interestingly, most users rated the supposedly caring chatbot more highly than the manipulative one.
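To make that design concrete, here is a minimal, hypothetical sketch in Python of the setup as I understand it: one shared chatbot, with only the priming statement changing between groups. The condition names, priming text, and canned reply are my own invention for illustration, not the study’s actual materials.

```python
# Hypothetical sketch of the priming setup described above (not the study's code).
# The chatbot backend is identical in every condition; only the statement shown
# to the participant before the conversation changes.

from __future__ import annotations

import random

PRIMING_STATEMENTS = {
    "caring": "This chatbot was trained to care about your wellbeing.",
    "manipulative": "This chatbot was trained to manipulate you into buying a service.",
    "neutral": "This chatbot is a plain dialogue system with no motives.",
}


def chatbot_reply(message: str) -> str:
    """One shared chatbot, used unchanged across all three priming conditions."""
    return f"I hear you. Tell me more about that: {message}"


def run_session(condition: str, messages: list[str]) -> list[tuple[str, str]]:
    """Show the priming statement for this condition, then run the same chatbot."""
    transcript = [("PRIME", PRIMING_STATEMENTS[condition])]
    for message in messages:
        transcript.append(("USER", message))
        transcript.append(("BOT", chatbot_reply(message)))
    return transcript


if __name__ == "__main__":
    condition = random.choice(list(PRIMING_STATEMENTS))  # random assignment to a group
    for speaker, text in run_session(condition, ["I have been feeling low lately."]):
        print(f"{speaker}: {text}")
```

The point of the sketch is simply that nothing downstream of the priming statement differs between groups, which is what makes the difference in user behaviour so striking.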

DeepMind co-founder Mustafa Suleyman was asked about AI’s role in mental health support in a Guardian interview:

“I think that what we haven’t really come to grips with is the impact of … family. Because no matter how rich or poor you are, or which ethnic background you come from, or what your gender is, a kind and supportive family is a huge turbo charge…And I think we’re at a moment with the development of AI where we have ways to provide support, encouragement, affirmation, coaching, and advice. We’ve basically taken emotional intelligence and distilled it. And I think that is going to unlock the creativity of millions and millions of people for whom that wasn’t available.”

Suleyman’s new book, The Coming Wave, shares strategies for containing AI. Governments in the US, UK, EU, and elsewhere have had to write and rewrite guidance for AI administration and practice on the one hand, and regulation and security on the other. How health services use AI in mental health practice, and how tech companies safeguard their services to prevent self-harm, is still an ongoing conversation.

Mindbank Ai creates a digital twin from the information users feed into the platform through a chat interface and learning algorithms. The digital twin learns by asking questions, apparently offers insights into the user’s personality, and, according to the website, can live forever through data. I’m at an age where, if I don’t know my own personality by now, it’s because I’m in denial. And what of the glaringly obvious problem with promising to live forever in the metaverse? What if users stop paying, or die and stop paying? What if the company goes bust? I am not trying to single out Mindbank Ai; most death tech competitors, like StoryFile, are SaaS services that charge a fee.
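For readers wondering what a digital twin built from your answers might look like under the hood, here is a deliberately crude, hypothetical sketch. It is my own simplification for illustration, not Mindbank Ai’s actual system, which the company describes only in broad strokes: the twin asks questions, stores the answers as data, and later replays the stored answer closest to whatever it is asked.

```python
# Hypothetical sketch of a question-driven digital twin (my own simplification,
# not Mindbank Ai's actual system). The twin asks questions, stores the answers
# as data, and later replays the stored answer closest to a new prompt.

from __future__ import annotations

from difflib import SequenceMatcher


class DigitalTwin:
    def __init__(self) -> None:
        self.memory: dict[str, str] = {}  # question -> the user's own answer

    def interview(self, question: str, answer: str) -> None:
        """The twin 'learns' by asking a question and storing what the user says."""
        self.memory[question] = answer

    def respond(self, prompt: str) -> str:
        """Answer a new prompt with the stored answer to the most similar question."""
        if not self.memory:
            return "I do not know enough about you yet."
        best = max(self.memory, key=lambda q: SequenceMatcher(None, q, prompt).ratio())
        return self.memory[best]


twin = DigitalTwin()
twin.interview("What do you value most?", "Family, honesty, and a good laugh.")
twin.interview("How do you handle setbacks?", "I make tea, complain, then get on with it.")
print(twin.respond("What matters most to you?"))
```

Seen this way, the promise of living forever comes down to where that stored data sits and who keeps paying to host it, which is exactly the problem raised above.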

I haven’t used the platform yet, but I will post my review in the coming weeks. I do have questions about the purpose of it all. Mindbank Ai aims to “solve humanity’s personal development and ageing challenge by creating a personal digital twin of your knowledge.” This ageing ‘challenge’ is rather curious: how does it relate to creating a copy of personal knowledge? I am researching essence, and I’m curious how a person’s knowledge can help to convey it.

There are so many issues here regarding ethics and data security, as well as the deceased’s right to dignity. The latter is one of the reasons I decided to use my own image and voice to create an AI replica for my research and forthcoming documentary, The Digital Afterlife of Grief. It is also why I find this statement from Justin Harrison, founder of the death tech company You, Only Virtual, astonishing:

“You absolutely don’t need consent from someone who’s dead…My mom could’ve hated the idea but this is what I wanted and I’m alive.”

To be fair to Harrison, I’m sure he’s talking about individual families who want to share AI replicas of their deceased loved ones with each other in private. But what if this data ends up on the internet and is copied and used for something you have no control over? It doesn’t matter to those who are dead, but it will cause a lot of heartache for the family members left behind.

Ginger Liu is the founder of Hollywood’s Ginger Media & Entertainment, a Ph.D. Researcher in artificial intelligence and visual arts media — specifically grief tech, digital afterlife, AI, death and mourning practices, AI and photography, biometrics, security, and policy, and an author, writer, artist photographer, and filmmaker. Listen to the Podcast — The Digital Afterlife of Grief.
