Have We Become Too Reliant on AI?
- Paul Francis
- Jun 13
The ongoing unrest in Los Angeles has escalated, with President Donald Trump deploying the National Guard and Marines in an attempt to clamp down on protests. This move has drawn criticism, particularly after images surfaced showing Guardsmen sleeping on cold floors in public buildings—images that quickly sparked outrage. But this article isn’t really about that. Well, not directly.
What’s more concerning is what happened next.
As these images began circulating online, a troubling trend emerged. People started questioning their authenticity, not based on verified information or investigative journalism, but on what artificial intelligence told them. Accusations of “fake news”, “AI-generated images”, or “doctored photos” spread rapidly. Rather than consulting reputable sources, many turned to AI tools to determine what was real.
And they trusted the answers without hesitation.
These AI models, often perceived as neutral, trustworthy, and authoritative, told users that although the images were real, they weren’t recent. According to the models, the photos dated back to 2021 and were taken overseas. The implication? They had nothing to do with the situation unfolding in Los Angeles.
People believed it. Anyone suggesting otherwise was dismissed as misinformed or biased. The idea that these images were being used to fuel an anti-Trump agenda gained traction, all because an algorithm said so.
But there’s one major flaw: the AI was wrong.
These images didn’t exist online before June 2025. They aren’t from 2021. They weren’t taken abroad. They are, in fact, current and accurate, just as the original reports stated. But because AI tools misidentified them, many dismissed the truth. This isn’t just a harmless mistake; it’s a serious issue.
We are placing too much trust in machines that cannot offer certainty. These tools don't consult real-time data or apply fact-checking methods; they generate responses by predicting what text is statistically likely, based on patterns in the data they've been trained on. And when those outputs are flawed, people can be dangerously misled.
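To see why a statistically likely answer is not the same as a verified one, consider a deliberately tiny sketch in Python. The prompt and probabilities here are invented purely for illustration (no real model works on a hand-written table like this), but the principle holds at any scale: the output is sampled from what is likely given past data, and no step checks it against reality.

```python
import random

# Toy illustration only: a language model assigns probabilities to
# possible continuations based on patterns in its training data, then
# samples one. Nothing in this process verifies whether the resulting
# claim is true. The numbers below are made up for this example.
next_word_probs = {
    "the photos were taken in": {"2021": 0.6, "2025": 0.3, "1999": 0.1},
}

def sample_next_word(prompt: str) -> str:
    """Pick a continuation weighted by learned probability, not by truth."""
    dist = next_word_probs[prompt]
    words = list(dist.keys())
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "the photos were taken in"
print(prompt, sample_next_word(prompt))
# Most runs print "... 2021": the statistically likely completion,
# which in this scenario happens to be the wrong one.
```

The point of the sketch is that "likely" is baked into the mechanism and "true" is not: if outdated or mislabelled examples dominate the training data, the confident-sounding answer will reflect them.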
So what happens when more and more people begin to trust AI over journalists, subject matter experts, or even their own eyes?
We risk entering a reality where truth is no longer defined by facts, but by algorithms—where something can be deemed false not because it lacks evidence, but because a machine didn’t recognise it. If we reach that point, how do we challenge power? How do we uphold accountability? How do we know what’s real?
AI is a remarkable tool. But it is just that—a tool. And when tools are treated as infallible, the consequences can be far-reaching. If we blindly trust AI to define our reality, we may find ourselves living in a world where facts are optional, and truth becomes whatever the machine decides it is.