Artemis II Returns From the Moon as Old Conspiracies Find New Life Online

9 April 2026

Paul Francis


A Mission in Motion, Not Preparation


Artemis II is no longer a promise or a plan. It is a live, unfolding mission.


Having successfully travelled beyond low Earth orbit and looped around the Moon, the crew are now on their return journey to Earth. In doing so, they have already secured their place in history as the first humans in more than half a century to venture into deep space. The mission itself has been widely followed, not just through official NASA channels but across social media, where images, clips and astronaut updates have circulated in near real time.


Among the most striking moments so far have been the views of Earth from lunar distance. These are not abstract renderings or archival references. They are current, high-resolution visuals captured by a crew physically present in deep space. For many, it has been a powerful reminder of both scale and perspective, reinforcing the reality of human spaceflight beyond Earth orbit.


Yet as these images spread, something else has travelled with them.


[Image: Earthrise over the Moon's horizon, showing Earth partially lit against the blackness of space above the grey, textured lunar surface.]

The Return of a Familiar Narrative

Alongside the excitement and global attention, Flat Earth narratives have begun to reappear with renewed visibility. As with previous milestones in space exploration, the mission has acted as a catalyst rather than a cause.


Footage from Artemis II, particularly anything showing Earth as a curved, distant sphere, has been picked apart across various platforms. Claims of digital manipulation, lens distortion and staged environments have resurfaced, often attached to short clips or isolated frames removed from their original context.


This is not evidence of a growing movement in terms of numbers. It is, however, a clear increase in visibility. The scale of Artemis II has pulled these conversations back into mainstream timelines, where they sit alongside genuine public interest and scientific engagement.


Real-Time Content, Real-Time Reaction

What distinguishes Artemis II from earlier missions is the immediacy of its coverage. This is not a mission filtered through delayed broadcasts or carefully edited highlights. It is being experienced as it happens.


That immediacy has a double edge. On one hand, it allows for unprecedented access and transparency. On the other, it provides a constant stream of material that can be reinterpreted, clipped and redistributed without context.


A reflection in a window, a momentary visual artefact in a video feed, or even the way lighting behaves inside the spacecraft can quickly be reframed as suspicious. Once those clips are detached from their technical explanations, they take on a life of their own within certain online communities.


The speed at which this happens is key. Reaction no longer follows the event. It unfolds alongside it.


Scepticism in the Age of Algorithms

Flat Earth content does not exist in isolation. It is sustained by a broader culture of scepticism towards institutions, particularly those associated with government and large-scale scientific endeavour.


NASA, as both a symbol of authority and a source of complex, hard-to-verify information, naturally becomes a focal point. Artemis II, with its deep space trajectory and high visibility, fits neatly into that framework.


Social media platforms then amplify the effect. Content that challenges, contradicts or provokes tends to perform well, regardless of its factual basis. As a result, posts questioning the mission often gain traction not because they are persuasive, but because they are engaging.


This creates a distorted sense of scale. What is, in reality, a fringe viewpoint can appear far more prominent than it actually is.


The Broader Public Perspective

Outside of these pockets of scepticism, the response to Artemis II has been largely one of fascination and admiration. The mission has reignited interest in human spaceflight, particularly among audiences who have never experienced a live crewed journey beyond Earth orbit.


There is also a noticeable difference in tone compared to previous eras. The Apollo missions were moments of collective attention, where a single narrative dominated public consciousness. Artemis II exists in a far more fragmented environment, where multiple conversations unfold simultaneously.


In that landscape, it is entirely possible for celebration, curiosity and conspiracy to coexist without directly intersecting.


A Reflection of the Modern Media Landscape

The re-emergence of Flat Earth narratives during Artemis II is not an anomaly. It is part of a broader pattern that defines how major events are now experienced.


Every significant moment generates its own parallel discourse. One is grounded in reality, driven by science, engineering and exploration. The other is shaped by interpretation, scepticism and the mechanics of online engagement.


Artemis II, currently making its way back to Earth, sits at the centre of both.

The mission itself is a clear demonstration of human capability and technological progress. The conversation around it, however, reveals something different. It highlights how information is processed, challenged and reshaped in real time.


In that sense, Artemis II is not just a journey through space. It is a case study in how modern audiences navigate truth, trust and visibility in an increasingly complex digital world.


Navigating AI Uncertainties: Examining Fears Surrounding ChatGPT

  • Writer: Diane Hall
  • Sep 25, 2023
  • 5 min read

[Image: AI brain graphic concept]

Me to ChatGPT: Please write an 800-word article on the fears some people have around ChatGPT and what its incorporation into content creation may mean


ChatGPT: Navigating the Uncertainties: Understanding the Fears Surrounding ChatGPT and Its Impact on Content Creation


Introduction

In the realm of artificial intelligence, few innovations have captured the public's imagination quite like ChatGPT. Developed by OpenAI, ChatGPT is a powerful language model capable of generating human-like text responses to a wide range of prompts and queries. While this technology represents a remarkable leap forward in the field of AI, it has also sparked fears and concerns among some people. In this article, we will explore the apprehensions surrounding ChatGPT and what its incorporation into content creation may mean for the future.


Fear of Misinformation

One of the primary fears surrounding ChatGPT is its potential to generate and spread misinformation. As ChatGPT can produce text that sounds convincingly human, there is a risk that it may be used to craft false narratives, fake news, or misleading content. This fear is not unfounded, as we have already witnessed instances where AI-generated text has been used to fabricate stories or manipulate public opinion.

To mitigate this risk, it is essential for developers and users of ChatGPT to exercise responsibility and ethical restraint. Fact-checking and verifying information generated by AI tools should become the norm. Additionally, ongoing research and improvements in AI should focus on reducing the model's propensity to produce misleading or false information.


[Image: AI robot hand touching and working on a keyboard]

Job Displacement Concerns

The incorporation of ChatGPT and similar AI technologies into content creation also raises concerns about job displacement. Many writers, journalists, and content creators worry that the widespread adoption of AI-generated content may lead to a decline in demand for human writers, ultimately resulting in job losses.


While it is true that AI can automate certain aspects of content creation, such as generating product descriptions or news summaries, it is unlikely to replace the creative and nuanced work of human writers entirely. Instead, AI can complement human creativity by handling repetitive tasks, freeing up writers to focus on higher-level, creative aspects of their work. Content creators should view AI as a tool to enhance their productivity rather than as a threat to their livelihoods.


Bias and Ethical Concerns

Another prevalent fear associated with AI models like ChatGPT is the perpetuation of bias and ethical concerns. AI models learn from the data they are trained on, and if this data contains biases or discriminatory patterns, the model may reproduce these biases in its responses. This has the potential to reinforce stereotypes, discrimination, and inequality in content generated by ChatGPT.


To address this issue, developers must prioritize bias mitigation and ethical considerations in the training and deployment of AI models. Robust testing, ongoing monitoring, and transparency in the development process can help ensure that AI-generated content is free from bias and adheres to ethical guidelines. Furthermore, user education and awareness can play a crucial role in recognizing and addressing bias when it arises.

Loss of Human Touch


Another fear some people have is that AI-generated content lacks the human touch and authenticity that is essential in various forms of communication. Whether it's a heartfelt letter, a piece of art, or a story, the human element brings depth, emotion, and connection to these creations. There is a concern that relying too heavily on AI-generated content may lead to a loss of this essential human touch in our interactions.


While AI can mimic human language and generate text, it cannot replicate the genuine emotions, experiences, and unique perspectives that make human creations special. To preserve the human touch, it is crucial to strike a balance between AI automation and human creativity. AI should be seen as a tool that enhances efficiency and creativity rather than a complete replacement for the human element.


[Image: AI logo on a baby blue background, surrounded by circuitry]

Privacy and Data Security

Privacy and data security are significant concerns when it comes to AI-powered technologies like ChatGPT. Users worry that their personal data and conversations may be harvested and exploited for various purposes, including targeted advertising or surveillance. The fear of AI models mining private conversations and sensitive information is a valid one.

To alleviate these concerns, AI developers and organizations must prioritize user privacy and data protection. Implementing robust data encryption, ensuring clear user consent, and adhering to strict data handling practices can help build trust in AI technologies. Regulatory frameworks and guidelines must also evolve to keep pace with the advancements in AI and safeguard user data.


Conclusion

While ChatGPT and similar AI models hold great promise for content creation and communication, it is essential to acknowledge and address the legitimate fears and concerns that surround their use. Misinformation, job displacement, bias, loss of the human touch, and privacy issues are valid worries that require careful consideration and proactive measures.


The responsible development and deployment of AI technologies like ChatGPT are critical to ensure that they benefit society without causing harm. By prioritizing transparency, ethical guidelines, and user education, we can harness the power of AI while mitigating its potential risks. Ultimately, it is our collective responsibility to navigate the uncertainties and shape a future where AI augments human creativity and communication, rather than replacing it.


Me: Now, that’s not a bad piece of content. I don’t like the way some phrases are worded, but they’re easy enough to change as I read through and even in its raw state, it does the job.

According to a plagiarism checker, however, this generated response has ripped off 27% of its wording. To be fair, I think that’s the highest ever result I’ve seen using this tool; most articles I ask ChatGPT to create fall below the threshold for acceptable/natural plagiarism (15%). Regardless, this is easy to rectify; the tool highlights the potentially plagiarised sections for me, and it wouldn’t take me longer than a couple of minutes to reword those sentences to reduce this percentage.


The time it took me to generate the response was less than a minute. Even with my commitment to a good edit and tackling that higher-than-normal plagiarism score, a completely acceptable and easily readable article took me around 15 minutes to produce.

Before ChatGPT, a similar article would have taken me around an hour to write, then a further half an hour to tweak, and that may not have included the time taken to come up with and research my article's angle.


Given that the response above is in its raw form, you may think it reads as robotic and dry. Editing it into a readable piece really doesn't take long, and the work mainly concerns the beginning and ending of the article.


The upshot is that my productivity has more than doubled since ChatGPT came along (tripled, maybe).


I really can’t understand why some people haven’t used it, as it has so much potential. Providing a first draft like that above is only a small part of what it can do.

