January 2, 2025
Photo by Fons Heijnsbroek, abstract art on Unsplash

The development of AI technology raises significant ethical concerns about grief and mourning. Creating AI avatars of the deceased without their permission is a legal and moral issue that some death tech companies have addressed in their terms and conditions.

Beyond current limitations, the trajectory of AI suggests that avatars will only become more realistic and more personalized over time. Although this sounds like a positive step in preserving the memory of our dead loved ones, prolonged grief could be a debilitating outcome if the dead are recreated in exact detail.

Other ethical considerations are at play. Who owns the avatar after the individual’s death, and how secure is their data? What happens if an avatar shared among family members ends up on the internet, circulated by strangers? Would you want your parent’s likeness repurposed as new content by a third party? Yet the same thing could and does happen to our tangible objects after we die. Markets are full of old, discarded photographs of long-lost families.

There is no legal protection against these photographs being used by third parties. Unknown family memorabilia sold on market stalls is much the same as digital and AI images lost on the internet; only the technology that creates, produces, and shares the images is different. But while the family is alive, the potential psychological impact on the bereaved of interacting with AI personas or replicas of their dead loved ones raises important ethical questions with few answers, because the technology is so new.

Mark Zuckerberg’s metaverse platform could change how we communicate with our deceased loved ones. In an interview with podcaster Lex Fridman about the company’s VR platform, Zuckerberg predicted that the digital afterlife is the future of virtual reality. Meta’s new technology can scan users’ faces to build 3D virtual models.

Zuckerberg acknowledged that there is demand for creating a virtual version of a dead person with AI and VR technology. “If someone has lost a loved one and is grieving, there may be ways in which being able to interact or relive certain memories could be helpful,” said Zuckerberg. He also acknowledged that such communication could become unhealthy.

Zuckerberg’s ideas are not new, nor are they his own; several death tech platforms are competing for a piece of the pie. Storyfile created interactive holograms of Holocaust survivors and aims to offer this technology as a service to customers. I wrote about Storyfile last year, and Zuckerberg echoed these ideas when he said Meta was “focused on building the future of human connection,” where people could communicate with hologram replicas and AI bots of their deceased loved ones.

Death and the digital afterlife could prove a lucrative venture for Zuckerberg: Facebook is predicted to have more profiles of the dead than of the living by 2100, an incredible 4.9 billion. Providing a fee-based service that lets the living communicate with these dead is a savvy business decision.

AI death technology is relatively new, and it may be some time before research concludes whether digital afterlife communication helps or hinders grief. Social media platforms like Facebook have been included in numerous studies about prolonged grief and how we maintain continuing bonds. But Facebook is a platform for communication, whereas generative AI can replicate a moving, talking human likeness, so the response to grief will be different.

Research on the ethics of death bots suggests that there are pros and cons to using AI grief bots to support the grieving process. I would add that how we deal with grief is different for everyone and depends on how the deceased died, our relationship with them, and our coping mechanisms. Generative AI tools for creatives are often described as collaborative, and that term is well suited to how AI bots might be used by people with mental health concerns, grief included.

Facilitating how we grieve and communicate is not our only mental health concern. Researchers at MIT and Arizona State University conducted a study examining a person’s prior beliefs about an AI agent, such as a chatbot, and how those beliefs affect the way we interact with AI.

The researchers primed participants to believe a mental health chatbot was either manipulative, empathetic, or neutral. These primed beliefs influenced how users interacted with the chatbot, even though it was the same chatbot in every case. Interestingly, most users gave the caring chatbot higher marks than the manipulative one.

Mustafa Suleyman, co-founder of DeepMind, was asked about AI’s role in mental health support in a Guardian interview:

“I think that what we haven’t really come to grips with is the impact of … family. Because no matter how rich or poor you are, or which ethnic background you come from, or what your gender is, a kind and supportive family is a huge turbo charge…And I think we’re at a moment with the development of AI where we have ways to provide support, encouragement, affirmation, coaching, and advice. We’ve basically taken emotional intelligence and distilled it. And I think that is going to unlock the creativity of millions and millions of people for whom that wasn’t available.”

Suleyman’s book, The Coming Wave, shares strategies for containing AI. Policymakers in the US, UK, EU, and elsewhere have had to write and rewrite guidance for AI administration and practice on the one hand and regulation and security on the other. How health services use AI in mental health practice, and how tech companies safeguard their services to prevent self-harm, are still ongoing conversations.

Mindbank Ai creates a digital twin from the information users feed into the platform through a chat interface and learning algorithms. The digital twin learns by asking questions and sharing insights into the user’s personality and, according to the website, the twin can live forever through data. I’m at the age that if I don’t know my personality by now, it’s because I’m in denial. And what of the glaringly obvious problem with promising to live forever in the metaverse? What if users stop paying, or die and their payments stop? What if the company goes bust? I am not trying to single out Mindbank Ai; most death tech competitors, Storyfile included, are SaaS services that charge a fee.

The evolution of technology has shaped the way we grieve, from the early days of photography, which brought images of our loved ones into our homes, to Facebook memorial pages that serve as spaces for remembrance. Incorporating AI into the grieving process seems like a natural progression. Adding a layer of interaction can be comforting, especially during the initial stages of loss. For those who missed the chance to say goodbye, grief bots can offer closure.

As we jump into an AI-driven future, the stories, memories, and voices of the past continue to resonate within the algorithms of our digital world. Whatever technology we use, ultimately the essence of human connection and the significance of genuine memories remain central to human agency.

Ginger Liu is the founder of Hollywood’s Ginger Media & Entertainment, a researcher in artificial intelligence and visual arts media, and an entrepreneur, author, writer, artist, photographer, and filmmaker. Listen to the podcast — The Digital Afterlife of Grief.
