Death Isn’t the End: AI Brings Lost Voices Back to Life

Summary: A new paper explores how generative AI is transforming the way we interact with the dead, from virtual reality reunions to lifelike digital avatars. These “generative ghosts” can remember, plan, and even evolve, offering real-time conversations that go far beyond pre-recorded memorials.

While the technology holds promise for comfort, creativity, and historical preservation, it also raises ethical questions about grief, consent, and digital identity after death. Researchers are urging society to begin serious discussions about how to responsibly shape this emerging frontier.

Key Facts:

  • Generative Ghosts: AI avatars can now simulate interactive conversations with the deceased, adapting and responding in real time.
  • Real Use Cases: Examples range from grieving parents reuniting in VR to posthumous courtroom testimony and celebrity song releases.
  • Ethical Challenges: The rise of AI afterlives raises questions about consent, grief processing, and ownership of digital legacies.

Source: University of Colorado

In 2019, a grieving mother named Jang Ji-sung donned a virtual reality headset and was instantly transported to a grassy field, where she spent 10 minutes playing with an AI version of her daughter, Nayeon, who had died three years earlier of a rare blood disease.

The tearful reunion, viewed more than 36 million times on YouTube, offered a striking preview of how technology might someday transform the way we interact with the dead.

Thanks to the advent of generative AI technologies like ChatGPT, and the emergence of AI “agents” created to act independently on behalf of their creators, that someday is here, according to new CU Boulder research. And the possibilities are even wilder than many imagined.

“We anticipate that within our lifetimes it may become common practice for people to create custom AI agents to interact with loved ones and the broader world after their death,” writes Jed Brubaker, professor of Information Science, in a new paper titled “Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives.”

Brubaker has spent much of his career at the intersection of death and technology. His research inspired Facebook’s Legacy Contact, a feature that enables platform users to assign someone to manage their account after they die. In November, he launched the nation’s first Digital Legacy Clinic, which helps people get their digital affairs in order.

For his latest paper, co-authored with Google DeepMind researcher Meredith Ringel Morris, he set out to inventory what’s been done and what’s coming in the nascent “AI afterlives” space. Meanwhile, in his lab on campus, Brubaker and his students have begun beta testing their own “AI ghosts” and conducting experiments to test how people feel about them.

“Today, you might interact with a Facebook Memorial Page for grandpa after he dies,” he says. “But what would it feel like to actually sit down with grandpa by the fire and have a conversation with him?”

That day may not be far off.

From text-based grief bots to resurrected celebrities

As Brubaker notes, tech-savvy futurists have been dabbling with AI afterlives for years. 

After Velvet Underground frontman Lou Reed died in 2013, his partner Laurie Anderson worked with machine learning experts to create a text-based chatbot (trained with Reed’s writings, songs and interviews) that she could converse with. She still uses it frequently.

“I am totally, 100% addicted to this,” Anderson recently told The Guardian. 

In 2023, the surviving members of The Beatles used AI to release a new song, “Now and Then,” featuring the late John Lennon’s voice singing along with his bandmates.

Just last month, the family of a man shot dead in a road rage incident used AI to create a lifelike avatar of him. During an emotional video played in the courtroom, the avatar forgave his killer.

Meanwhile, numerous startups now help the living create posthumous digital versions of themselves: After a lengthy 3D video and interview session, Re;memory will create a “highly realistic AI avatar” to leave behind for family members. HereAfter, an AI app, invites people to record audio stories that the “virtual you” can share after your death.

To some, this all sounds exceedingly creepy.

But Brubaker points out that photographs were once believed to steal a person’s soul, and online memorials, widely viewed as creepy a decade ago, are now everywhere.

“Over time, what’s creepy often becomes commonplace,” he says.

The rise of generative ghosts

Brubaker is most intrigued by what’s coming next: what he and his co-author term “generative ghosts.”

Powered by large language models that can understand and generate human language, along with features that enable them to remember, plan, and exhibit other complex behaviors, these agents can do far more than regurgitate old stories fed to them by the once-living.

For instance, they could converse with their creator’s children about events that occurred after the creator’s death, write a new song or poem (from which the family could potentially earn royalties), or even help the children manage the estate.

Right now, most generative ghosts are rudimentary and text-based. But ultimately, we could get very close to that candid chat with grandpa by the fire, Brubaker says.

“You could go interact with this super high-fidelity, interactive memorial, and instead of them just reading you some pre-scripted words, you could have an authentic conversation.” 

Promise and peril

Brubaker also imagines a day when generative ghosts could be used therapeutically for someone struggling with prolonged grief over a lost loved one.

This was, in fact, the impetus for Jang Ji-sung’s heart-wrenching reunion with her deceased daughter. (After three years of battling mental health issues, she worked with a South Korean TV network to create a 3D version of Nayeon she could bid a final farewell to.)
Generative ghosts could also be used in historical exhibits.

“The last generation of Holocaust survivors will not be with us for much longer, so museums are trying to think of rich, interactive ways to keep their stories alive,” says Brubaker.

Along with such promise, of course, comes peril.

How long should someone interact with an AI ghost before it becomes unhealthy? What role should these ghosts play, if any, in the courtroom? What happens when one is created accidentally, as when someone builds an AI “agent” to perform other tasks and then unexpectedly dies? And how can anyone be sure that no one will make a ghost of them against their will?

And when and how should a generative ghost die?

Brubaker doesn’t have the answers. But he hopes his research will get tech companies and policymakers thinking.

“What’s possible and what will actually happen are two different things as we move forward in this AI world,” he says. “When it comes to AI afterlives, we hope to see things move forward in the most ethical, thoughtful and sensitive way possible.”

About this grief and AI research news

Author: Lisa Marshall
Source: University of Colorado
Contact: Lisa Marshall – University of Colorado
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives” by Jed Brubaker et al. CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems


Abstract

Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives

As AI systems quickly improve in both breadth and depth of performance, they lend themselves to creating increasingly powerful and realistic agents, including the possibility of agents modeled on specific people.

We anticipate that within our lifetimes it may become common practice for people to create custom AI agents to interact with loved ones and/or the broader world after death; indeed, the past year has seen a boom in startups purporting to offer such services.

We call these generative ghosts since such agents will be capable of generating novel content rather than merely parroting content produced by their creator while living.

In this paper, we reflect on the history of technologies for AI afterlives, including current early attempts by individual enthusiasts and startup companies to create generative ghosts.

We then introduce a novel design space detailing potential implementations of generative ghosts.

We use this analytic framework to ground a discussion of the practical and ethical implications of various approaches to designing generative ghosts, including potential positive and negative impacts on individuals and society.

Based on these considerations, we lay out a research agenda for the AI and HCI research communities to better understand the risk/benefit landscape of this novel technology to ultimately empower people who wish to create and interact with AI afterlives to do so in a beneficial manner.