Artemis II Returns From the Moon as Old Conspiracies Find New Life Online


9 April 2026

Paul Francis


A Mission in Motion, Not Preparation


Artemis II is no longer a promise or a plan. It is a live, unfolding mission.


Having successfully travelled beyond low Earth orbit and looped around the Moon, the crew are now on their return journey to Earth. In doing so, they have already secured their place in history as the first humans in more than half a century to venture into deep space. The mission itself has been widely followed, not just through official NASA channels but across social media, where images, clips and astronaut updates have circulated in near real time.


Among the most striking moments so far have been the views of Earth from lunar distance. These are not abstract renderings or archival references. They are current, high-resolution visuals captured by a crew physically present in deep space. For many, it has been a powerful reminder of both scale and perspective, reinforcing the reality of human spaceflight beyond Earth orbit.


Yet as these images spread, something else has travelled with them.


[Image: Earthrise over the Moon's horizon, Earth partially lit against the blackness of space.]

The Return of a Familiar Narrative

Alongside the excitement and global attention, Flat Earth narratives have begun to reappear with renewed visibility. As with previous milestones in space exploration, the mission has acted as a catalyst rather than a cause.


Footage from Artemis II, particularly anything showing Earth as a curved, distant sphere, has been picked apart across various platforms. Claims of digital manipulation, lens distortion and staged environments have resurfaced, often attached to short clips or isolated frames removed from their original context.


This does not indicate that the movement has grown in numbers. It does, however, mark a clear increase in visibility. The scale of Artemis II has pulled these conversations back into mainstream timelines, where they sit alongside genuine public interest and scientific engagement.


Real-Time Content, Real-Time Reaction

What distinguishes Artemis II from earlier missions is the immediacy of its coverage. This is not a mission filtered through delayed broadcasts or carefully edited highlights. It is being experienced as it happens.


That immediacy has a double edge. On one hand, it allows for unprecedented access and transparency. On the other, it provides a constant stream of material that can be reinterpreted, clipped and redistributed without context.


A reflection in a window, a momentary visual artefact in a video feed, or even the way lighting behaves inside the spacecraft can quickly be reframed as suspicious. Once those clips are detached from their technical explanations, they take on a life of their own within certain online communities.


The speed at which this happens is key. Reaction no longer follows the event. It unfolds alongside it.


Scepticism in the Age of Algorithms

Flat Earth content does not exist in isolation. It is sustained by a broader culture of scepticism towards institutions, particularly those associated with government and large-scale scientific endeavour.


NASA, as both a symbol of authority and a source of complex, hard-to-verify information, naturally becomes a focal point. Artemis II, with its deep space trajectory and high visibility, fits neatly into that framework.


Social media platforms then amplify the effect. Content that challenges, contradicts or provokes tends to perform well, regardless of its factual basis. As a result, posts questioning the mission often gain traction not because they are persuasive, but because they are engaging.


This creates a distorted sense of scale. What is, in reality, a fringe viewpoint can appear far more prominent than it actually is.


The Broader Public Perspective

Outside of these pockets of scepticism, the response to Artemis II has been largely one of fascination and admiration. The mission has reignited interest in human spaceflight, particularly among audiences who have never experienced a live crewed journey beyond Earth orbit.


There is also a noticeable difference in tone compared to previous eras. The Apollo missions were moments of collective attention, where a single narrative dominated public consciousness. Artemis II exists in a far more fragmented environment, where multiple conversations unfold simultaneously.


In that landscape, it is entirely possible for celebration, curiosity and conspiracy to coexist without directly intersecting.


A Reflection of the Modern Media Landscape

The re-emergence of Flat Earth narratives during Artemis II is not an anomaly. It is part of a broader pattern that defines how major events are now experienced.


Every significant moment generates its own parallel discourse. One is grounded in reality, driven by science, engineering and exploration. The other is shaped by interpretation, scepticism and the mechanics of online engagement.


Artemis II, currently making its way back to Earth, sits at the centre of both.

The mission itself is a clear demonstration of human capability and technological progress. The conversation around it, however, reveals something different. It highlights how information is processed, challenged and reshaped in real time.


In that sense, Artemis II is not just a journey through space. It is a case study in how modern audiences navigate truth, trust and visibility in an increasingly complex digital world.


The Ghost in the Machine: When AI Mimics the Dead

Oct 7, 2025

Paul Francis

Artificial intelligence is increasingly being used to recreate the voices, personalities and memories of people who have died. Known as griefbots or deadbots, these digital simulations are part of a growing industry exploring what many call the “digital afterlife”.

Researchers, ethicists and psychologists are now asking whether these technologies help people heal or risk turning grief into a new form of dependency.


[Image: Futuristic robot with blue neon lights in a neon-lit city street at night.]

What Are Griefbots?

Griefbots are AI systems trained on the digital footprints of deceased people. They use archived data such as text messages, emails, social media posts, and recordings to generate responses that sound like the individual.


The underlying models are based on large language systems, such as GPT-style architectures, which predict text patterns and simulate conversation. Some companies also add voice cloning and photo or video avatars to enhance realism.


Key Components

  • Data Collection: Messages, posts, audio and video are compiled as “seed data”.

  • Model Training: AI is fine-tuned to reproduce the subject’s tone, phrasing and emotional patterns.

  • Memory Layer: The system can recall previous conversations to simulate continuity.

  • Output: Interaction occurs through chat, speech or, increasingly, virtual avatars.

Unlike human memory, the AI does not truly remember. It produces statistically likely sentences that feel authentic.
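The four components above can be sketched as a single loop: persona seed data and a memory layer are folded into a prompt, a model produces an output, and the exchange is stored to simulate continuity. The sketch below is hypothetical and vendor-neutral; the class, field names, and stubbed model call are invented for illustration, not taken from any real griefbot product.

```python
from dataclasses import dataclass, field

@dataclass
class Griefbot:
    """Minimal sketch of the pipeline above (all names hypothetical)."""
    seed_data: list          # archived messages used as "seed data"
    memory: list = field(default_factory=list)  # memory layer: prior turns

    def build_prompt(self, user_message: str) -> str:
        # Combine persona seed data and past exchanges into one context.
        persona = "\n".join(self.seed_data)
        history = "\n".join(f"User: {u}\nBot: {b}" for u, b in self.memory)
        return f"{persona}\n{history}\nUser: {user_message}\nBot:"

    def respond(self, user_message: str) -> str:
        prompt = self.build_prompt(user_message)
        # A real system would send `prompt` to a language model here;
        # this stub returns a placeholder so the sketch stays self-contained.
        reply = "(model output would appear here)"
        self.memory.append((user_message, reply))  # simulated continuity
        return reply
```

Note that the "memory" here is just stored text replayed into the next prompt, which is why, as the article says, the system recalls without truly remembering.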


[Image: A glowing blue robot and people in a cozy living room, warm lighting.]

The Real Case: The Jessica Simulation

One of the most widely reported examples is the case of Joshua Barbeau, a Canadian man who used an online tool called Project December to recreate his late fiancée, Jessica Pereira.


Barbeau uploaded Jessica’s old text messages and personality descriptions into the system. The chatbot generated responses that closely matched her language and humour. The experiment brought moments of comfort, but also confusion and emotional dissonance.


The story, published by the San Francisco Chronicle, became one of the first detailed accounts of a real person using AI to simulate the dead. It sparked international discussion about digital resurrection and the ethics of “talking to” lost loved ones.


Why Are People Using AI to Reconnect with the Dead?

Psychologists and grief researchers point to several motivations behind the use of griefbots:

  • Closure: People seek the chance to say what they never could.

  • Companionship: Some find comfort in familiar words or voice tones.

  • Curiosity: Others are drawn to test how far technology can replicate personality.

  • Legacy Creation: A growing number of people now train AI replicas of themselves for relatives to interact with after death.


In the UK, interest in digital legacy services has risen sharply since the pandemic. Companies such as HereAfter AI and StoryFile market themselves to families who want to preserve stories, voices and advice for future generations.


[Image: Robotic skull with glowing eyes emerging from mossy ground in a moonlit graveyard.]

Ethical and Psychological Risks

Experts warn that AI resurrection carries emotional and social consequences that are not yet fully understood.


Main Concerns

  1. Distortion of Memory: AI reconstructions may invent or misrepresent facts, reshaping how the deceased is remembered.

  2. Prolonged Grief: Continuous digital communication can delay acceptance or amplify loss.

  3. Consent and Privacy: The dead cannot give permission for data use, raising questions of ownership and dignity.

  4. Commercial Exploitation: Some griefbot platforms charge subscriptions or advertise paid “premium” sessions, effectively monetising mourning.

  5. Unwanted Contact: Cambridge researchers have warned that unregulated bots might send messages unexpectedly, leading to “unwanted hauntings”.

  6. Cultural and Religious Boundaries: Beliefs about death, remembrance and the afterlife differ globally. In some cultures, simulating a dead person’s voice or face would be taboo.


The University of Cambridge’s Leverhulme Centre for the Future of Intelligence has called for clear regulation on AI memorials, including data consent, access rights and time-limited operation of griefbots.


The Technology Behind AI Resurrection

The most common platforms rely on large language models combined with personalised prompting. Developers use context blocks that describe the deceased’s traits (“You are Jessica, a 23-year-old artist who loves astronomy and dry humour”).
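A context block of this kind is typically assembled from structured trait data before being passed to the model. The snippet below is an illustrative sketch only; the trait fields and helper function are invented, and the message structure is a generic chat format rather than any specific vendor's API.

```python
# Hypothetical trait record for the persona (fields invented for illustration).
traits = {
    "name": "Jessica",
    "age": 23,
    "occupation": "artist",
    "interests": ["astronomy", "dry humour"],
}

def make_context_block(t: dict) -> str:
    """Turn structured traits into a persona prompt like the one quoted above."""
    interests = " and ".join(t["interests"])
    return (f"You are {t['name']}, a {t['age']}-year-old {t['occupation']} "
            f"who loves {interests}.")

# Generic chat-style message list; a real platform would send this to its model.
messages = [
    {"role": "system", "content": make_context_block(traits)},
    {"role": "user", "content": "What are you looking at tonight?"},
]
```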


Recent advances include:

  • Neural voice cloning that can reproduce vocal tone from a few seconds of audio.

  • Facial animation models used for interactive video memorials.

  • Memory graphs that store biographical details to maintain conversation continuity.

  • Emotional analytics that adjust the bot’s tone based on the user’s sentiment.


AI companies are also exploring virtual reality integration, allowing users to enter simulated environments to “meet” digital avatars of loved ones.


Regulation and Calls for Oversight

There is currently no dedicated UK or international law governing posthumous AI likenesses. Legal experts say personality and likeness rights usually expire upon death, leaving families or companies to decide how data is used.


The Information Commissioner’s Office (ICO) has indicated that UK data protection rules apply only to the living. However, digital legacies often contain sensitive information about the deceased and their relatives, creating grey areas.


Ethicists have proposed several safeguards:

  • Require explicit consent before or during life for data use in posthumous AI systems.

  • Implement “digital retirement” processes to deactivate griefbots after set periods.

  • Provide transparency statements identifying the AI’s nature at the start of every interaction.

  • Restrict access for minors and vulnerable users.
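Three of these proposed safeguards (transparency statements, "digital retirement", and access restrictions) lend themselves to simple policy checks. The sketch below is a hypothetical illustration of how a platform might gate each session; the function names, wording, and 365-day retirement period are assumptions, not requirements from any actual regulation.

```python
from datetime import date, timedelta

# Transparency statement shown at the start of every interaction (wording invented).
DISCLOSURE = "Notice: you are talking to an AI simulation, not a person."

def session_allowed(activated: date, today: date,
                    retirement_days: int = 365) -> bool:
    """'Digital retirement': refuse sessions after a set operating period."""
    return today - activated <= timedelta(days=retirement_days)

def start_session(activated: date, today: date, user_is_minor: bool) -> str:
    # Restrict access for minors, enforce retirement, and always disclose.
    if user_is_minor:
        return "Access restricted."
    if not session_allowed(activated, today):
        return "This memorial has been retired."
    return DISCLOSURE
```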


The Wider “DeathTech” Industry

The use of AI in mourning forms part of the broader DeathTech sector, which includes:

  • Online memorial websites and digital headstones.

  • AI-assisted funeral planning and obituary writing.

  • Virtual reality memorials and livestreamed funerals.

  • Interactive archives allowing descendants to “interview” ancestors.


Analysts estimate that the digital memorialisation industry could exceed £2 billion globally by 2030, with North America, the UK and South Korea leading adoption.


Future Outlook

AI grief technology is likely to expand alongside mainstream adoption of generative models. Future iterations may combine speech, gesture and holographic rendering to produce “living archives”.


Experts suggest society will need new ethical and legal frameworks to define identity, consent and closure in a world where death may no longer mark the end of conversation.

The question remains: will these tools help the living remember — or make it harder to let go?
