April 24, 2025

Generative AI Therapy Could Aid Depression, Global AI Companion Market Expected to Reach $521B by 2033

Photo by Denis Bayer on Unsplash

Can AI truly capture the essence of a loved one, or is it simply a commercial product that exploits grief? The financial potential is substantial, as tech companies increasingly turn death into a market category. Critics warn that grief could become a commodity, with some features marketed as premium services, creating new forms of emotional and financial dependency.

This sector offers services that let users upload messages, voice recordings, and videos to create interactive avatars of the deceased. The industry’s roots go back nearly a decade, but the rapid development of generative AI has fueled its recent growth and raised deeper ethical, psychological, and philosophical concerns. Some argue that digital “ghosts” may help people grieve, while others worry these technologies blur the line between memory and simulation and raise questions about consent and ownership. As digital afterlife services become more common, society faces new questions about how we remember the dead and what it means to maintain relationships with digital replicas of lost loved ones.

Artificial intelligence is opening up new territory in grief tech, but Cambridge researchers warn that AI chatbots designed to mimic the dead — so-called “deadbots” or “griefbots” — could do real psychological damage if left unchecked. These bots, fueled by a loved one’s digital footprint, can hold eerily convincing text or voice conversations that risk blurring the line between comfort and haunting.

A new study from Cambridge’s Leverhulme Centre for the Future of Intelligence, published in Philosophy & Technology, sketches out a future where digital afterlife services are everywhere — and potentially high risk. The researchers lay out scenarios where deadbots could be weaponized for advertising, or even distress children by insisting a deceased parent is still alive. In the wrong hands, these bots could spam survivors with relentless notifications or reminders, making it feel like you’re being digitally stalked by the dead.

The psychological fallout isn’t just theoretical. Experts say repeated interactions with deadbots can trap mourners in a feedback loop, prolonging grief and turning closure into an emotional mirage. The study calls for urgent design protocols: clear opt-outs, age restrictions, and transparency so users always know they’re talking to an AI, not a ghost. As generative AI gets cheaper and more accessible, the ethical minefield only grows, raising tough questions about consent, dignity, and who controls our digital afterlives.

Cambridge University

Generative AI Therapy Could Aid Depression

The first clinical trial of a generative AI therapy bot just dropped, and the results are hard to ignore: Therabot, built by a Dartmouth team of psychiatric researchers and psychologists, went head-to-head with human therapists — and held its own. In a study published March 27 in NEJM AI, participants with depression who chatted with Therabot saw their symptoms drop by 51%.

Just as striking, participants reported forming therapeutic bonds with Therabot that rivaled those with human professionals. That’s a big deal in a space crowded with AI therapy apps promising always-on, judgment-free mental health support at a fraction of the cost of a human. But don’t cue the robot therapist revolution just yet. The researchers caution that, while the data is promising, AI therapy isn’t ready to replace your shrink. Still, with engagement levels this high and outcomes this strong, the line between human and machine care just got a little blurrier.

NPR

Senators Want Answers From AI Companion Apps After Child Safety Concerns

A letter from Senators Alex Padilla and Peter Welch demanded that artificial intelligence companies disclose their safety protocols. It follows a series of lawsuits from families — including a Florida mother whose 14-year-old son died by suicide — accusing Character.AI’s chatbots of harming children. In the letter, sent Wednesday to Character Technologies (maker of Character.AI), Chai Research Corp., and Luka, Inc. (maker of Replika), the senators requested details on safety measures and how the companies train their AI models.

Platforms like Character.AI, Chai, and Replika allow users to create or interact with custom chatbots that can take on a wide range of personas. In December, two families filed lawsuits against Character.AI, alleging the platform exposed their children to sexual content and encouraged self-harm and violence. Character.AI responded by implementing new trust and safety measures, including a pop-up directing users to the National Suicide Prevention Lifeline when self-harm or suicide is mentioned. The company has also hired a head of trust and safety and increased its content moderation staff.

LA Times

Global AI Companion Market Expected to Reach $521B by 2033

The global AI companion market is expected to reach $521 billion by 2033, growing at an annual rate of around 36.6%. More people are turning to digital companions — like chatbots and virtual assistants — for conversation, emotional support, and help with daily tasks. North America holds a significant market share, supported by early adoption and a robust technological infrastructure, while the Asia Pacific region is experiencing rapid growth due to a large, tech-savvy consumer base.

KBV Research

Chinese Mourners Use AI Avatars to Chat With the Dead

As the annual Qingming Festival approached — a time when millions in China honor their ancestors — Zhang Ming found himself speaking with his late grandfather. But this reunion didn’t happen at a gravesite. Instead, Zhang, a resident of Tianjin, launched an app called Lingyu, uploading old photos, voice clips, and bits of family stories. The result: a digital avatar powered by generative AI, capable of chatting in his grandfather’s regional dialect, even appearing on video.

The death tech industry in China is booming, with platforms selling everything from basic voice clones to full-on video calls with AI versions of the departed. Lingyu’s founder, Gao Wei, says the app uses cutting-edge tech to simulate emotionally intelligent conversations. In just two months, nearly 10,000 users have signed up, with hundreds opting for the paid Digital Life service.

Looking at the broader landscape, the virtual human industry in China, which includes digital afterlife services, is projected to reach 270 billion yuan (about $37 billion) by 2030. The core market size for virtual digital humans is expected to hit 48 billion yuan (about $6.6 billion) in 2025.

China Daily

Ginger Liu is the founder of Hollywood’s Ginger Media & Entertainment, a researcher in artificial intelligence and visual arts media, and an entrepreneur, author, writer, artist, photographer, and filmmaker. Listen to the Podcast — The Digital Afterlife of Grief.
