The Ghost in the Machine: When AI Mimics the Dead


7 October 2025

Paul Francis


Artificial intelligence is increasingly being used to recreate the voices, personalities and memories of people who have died. Known as griefbots or deadbots, these digital simulations are part of a growing industry exploring what many call the “digital afterlife”.

Researchers, ethicists and psychologists are now asking whether these technologies help people heal or risk turning grief into a new form of dependency.


[Image: Futuristic robot with blue neon lights and headphones in a neon-lit city street at night.]

What Are Griefbots?

Griefbots are AI systems trained on the digital footprints of deceased people. They use archived data such as text messages, emails, social media posts, and recordings to generate responses that sound like the individual.


The underlying models are large language models (LLMs), such as GPT-style architectures, which predict text patterns to simulate conversation. Some companies also add voice cloning and photo or video avatars to enhance realism.


Key Components

  • Data Collection: Messages, posts, audio and video are compiled as “seed data”.

  • Model Training: AI is fine-tuned to reproduce the subject’s tone, phrasing and emotional patterns.

  • Memory Layer: The system can recall previous conversations to simulate continuity.

  • Output: Interaction occurs through chat, speech or, increasingly, virtual avatars.

Unlike a human, the AI does not truly remember: it produces statistically likely sentences that merely feel authentic.
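The four components above can be sketched as a minimal loop: seed data and past turns are assembled into a prompt, a model generates a reply, and the exchange is stored so later prompts can "recall" it. This is an illustrative sketch only — `generate` is a stand-in for a real LLM call, and the seed messages are invented:

```python
# Minimal griefbot pipeline sketch (hypothetical; no real model or platform).
seed_data = [
    "Text message: 'See you at the planetarium!'",
    "Email excerpt: 'Dry humour is the only kind worth having.'",
]

memory = []  # memory layer: past turns, replayed each time to simulate continuity


def build_prompt(user_message: str) -> str:
    """Assemble seed data, conversation memory, and the new message."""
    parts = ["You are simulating a specific person. Style examples:"]
    parts.extend(seed_data)
    parts.extend(memory)
    parts.append(f"User: {user_message}")
    return "\n".join(parts)


def generate(prompt: str) -> str:
    """Placeholder for a large language model call; returns a canned reply."""
    return "A reply written in the subject's voice."


def chat(user_message: str) -> str:
    prompt = build_prompt(user_message)
    reply = generate(prompt)
    # Memory layer: store the exchange so later prompts appear to remember it.
    memory.append(f"User: {user_message}")
    memory.append(f"Bot: {reply}")
    return reply
```

Note that "memory" here is just text replayed into every prompt — consistent with the point above that the model does not truly remember anything.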


[Image: A glowing blue robot and people in a cosy, warmly lit living room.]

The Real Case: The Jessica Simulation

One of the most widely reported examples is the case of Joshua Barbeau, a Canadian man who used an online tool called Project December to recreate his late fiancée, Jessica Pereira.


Barbeau uploaded Jessica’s old text messages and personality descriptions into the system. The chatbot generated responses that closely matched her language and humour. The experiment brought moments of comfort, but also confusion and emotional dissonance.


The story, published by the San Francisco Chronicle, became one of the first detailed accounts of a real person using AI to simulate the dead. It sparked international discussion about digital resurrection and the ethics of “talking to” lost loved ones.


Why Are People Using AI to Reconnect with the Dead?

Psychologists and grief researchers point to several motivations behind the use of griefbots:

  • Closure: People seek the chance to say what they never could.

  • Companionship: Some find comfort in familiar words or voice tones.

  • Curiosity: Others are drawn to test how far technology can replicate personality.

  • Legacy Creation: A growing number of people now train AI replicas of themselves for relatives to interact with after death.


In the UK, interest in digital legacy services has risen sharply since the pandemic. Companies such as HereAfter AI and StoryFile market themselves to families who want to preserve stories, voices and advice for future generations.


[Image: Robotic skull with glowing eyes emerging from mossy ground in a moonlit graveyard.]

Ethical and Psychological Risks

Experts warn that AI resurrection carries emotional and social consequences that are not yet fully understood.


Main Concerns

  1. Distortion of Memory: AI reconstructions may invent or misrepresent facts, reshaping how the deceased is remembered.

  2. Prolonged Grief: Continuous digital communication can delay acceptance or amplify loss.

  3. Consent and Privacy: The dead cannot give permission for data use, raising questions of ownership and dignity.

  4. Commercial Exploitation: Some griefbot platforms charge subscriptions or advertise paid “premium” sessions, effectively monetising mourning.

  5. Unwanted Contact: Cambridge researchers have warned that unregulated bots might send messages unexpectedly, leading to “unwanted hauntings”.

  6. Cultural and Religious Boundaries: Beliefs about death, remembrance and the afterlife differ globally. In some cultures, simulating a dead person’s voice or face would be taboo.


The University of Cambridge’s Leverhulme Centre for the Future of Intelligence has called for clear regulation on AI memorials, including data consent, access rights and time-limited operation of griefbots.


The Technology Behind AI Resurrection

The most common platforms rely on large language models combined with personalised prompting. Developers use context blocks that describe the deceased’s traits (“You are Jessica, a 23-year-old artist who loves astronomy and dry humour”).
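A context block of this kind can be rendered from structured persona data. The field names below are purely illustrative — they are not any specific platform's API:

```python
# Hypothetical persona "context block" of the kind described above.
persona = {
    "name": "Jessica",
    "age": 23,
    "traits": ["artist", "loves astronomy", "dry humour"],
}


def context_block(p: dict) -> str:
    """Render the persona as a system-style instruction for an LLM."""
    traits = ", ".join(p["traits"])
    return f"You are {p['name']}, a {p['age']}-year-old: {traits}."
```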


Recent advances include:

  • Neural voice cloning that can reproduce vocal tone from a few seconds of audio.

  • Facial animation models used for interactive video memorials.

  • Memory graphs that store biographical details to maintain conversation continuity.

  • Emotional analytics that adjust the bot’s tone based on the user’s sentiment.
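A "memory graph" like the one mentioned above can be pictured as biographical facts stored as subject-relation-object triples and looked up to keep the conversation consistent. This toy version is purely illustrative:

```python
# Toy memory graph: biographical facts as (subject, relation) -> object.
memory_graph = {
    ("Jessica", "occupation"): "artist",
    ("Jessica", "interest"): "astronomy",
}


def recall(subject, relation):
    """Look up a stored fact, or None if the graph holds nothing for it."""
    return memory_graph.get((subject, relation))
```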


AI companies are also exploring virtual reality integration, allowing users to enter simulated environments to “meet” digital avatars of loved ones.


Regulation and Calls for Oversight

There is currently no dedicated UK or international law governing posthumous AI likenesses. Legal experts say personality and likeness rights usually expire upon death, leaving families or companies to decide how data is used.


The Information Commissioner’s Office (ICO) has indicated that UK data protection rules apply only to the living. However, digital legacies often contain sensitive information about the deceased and their relatives, creating grey areas.


Ethicists have proposed several safeguards:

  • Require explicit consent before or during life for data use in posthumous AI systems.

  • Implement “digital retirement” processes to deactivate griefbots after set periods.

  • Provide transparency statements identifying the AI’s nature at the start of every interaction.

  • Restrict access for minors and vulnerable users.
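Two of these safeguards — the transparency statement and "digital retirement" — are simple enough to sketch in code. This is an assumed design, not any real platform's implementation, and the one-year retirement period is an invented example value:

```python
# Sketch of two proposed safeguards: a transparency statement at the start of
# every session, and deactivation ("digital retirement") after a set period.
from datetime import date, timedelta

RETIREMENT_PERIOD = timedelta(days=365)  # illustrative time limit


def start_session(activated_on: date, today: date) -> str:
    """Return the opening message for a session, or a retirement notice."""
    if today - activated_on > RETIREMENT_PERIOD:
        return "This griefbot has been retired and is no longer available."
    # Transparency statement identifying the AI before interaction begins.
    return "Notice: you are talking to an AI simulation, not a real person."
```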


The Wider “DeathTech” Industry

The use of AI in mourning forms part of the broader DeathTech sector, which includes:

  • Online memorial websites and digital headstones.

  • AI-assisted funeral planning and obituary writing.

  • Virtual reality memorials and livestreamed funerals.

  • Interactive archives allowing descendants to “interview” ancestors.


Analysts estimate that the digital memorialisation industry could exceed £2 billion globally by 2030, with North America, the UK and South Korea leading adoption.


Future Outlook

AI grief technology is likely to expand alongside mainstream adoption of generative models. Future iterations may combine speech, gesture and holographic rendering to produce “living archives”.


Experts suggest society will need new ethical and legal frameworks to define identity, consent and closure in a world where death may no longer mark the end of conversation.

The question remains: will these tools help the living remember — or make it harder to let go?
