

More Mascots in Advertising

  • Writer: Paul Francis
  • May 29, 2024
  • 2 min read

Recently, I wrote an article about some forgotten mascots, as well as a very bizarre one (depending on which country you’re from). That article can be found here.


Today, I want to highlight some characters you might remember, which were created by one company only to be remembered for another, as well as some that were once dropped, but which were revived almost instantly.


Flash in the pan


Flat Eric


Originally a puppet created by Quentin Dupieux (Mr. Oizo), called Stéphane, Flat Eric was redesigned by Janet Knechtel while she worked for Jim Henson’s Creature Shop. He was most famous for a short series of adverts for Levi’s jeans in 1999. The puppet is still used by his original creator, Mr. Oizo.



Bouncing between brands


Monkey


Sometimes called the PG Tips Monkey, this slightly cowardly little scamp originally appeared alongside ‘Al’ (played by Johnny Vegas) in adverts for the now-defunct ITV Digital. Because Monkey was owned by advertising agency Mother, and not by ITV Digital, he was free to appear in other programmes and media, playing a part in 2001’s Comic Relief and the 2002 BRIT Awards. It wasn’t until 2007 that Monkey appeared in adverts for PG Tips and was rebranded as ‘the PG Tips Monkey’.



Lose us, lose profit


Tetley Tea Folk


The Tetley Tea Folk were originally created by Canadian copywriter John McGill Lewis, with help from Peter Rigby and Wyatt Cattaneo Studios, back in 1973. Tetley’s fictional tea team grew substantially over the years, with several characters added over time. Tetley put the Tea Folk on many of their products and associated merchandise, some of which became highly collectable, selling for as much as £200 on second-hand markets.


They coined three catchphrases during their run, with ‘That’s Better, That’s Tetley’ being one of the most famous.


They were retired by Tetley in 2001 as the brand chased a younger market with a more modern advertising campaign. By July 2002, however, the company’s sales had slumped by 14%, which Tetley attributed to the axing of the Tea Folk. As a result, they made a comeback in 2010 and have appeared in the brand’s TV advertising on and off ever since.


Mascots can make a brand come alive; however, as shown above, they can sometimes become bigger than the brand they’re representing.

Have We Become Too Reliant on AI?


13 June 2025

Paul Francis


The ongoing unrest in Los Angeles has escalated, with President Donald Trump deploying the National Guard and Marines in an attempt to clamp down on protests. This move has drawn criticism, particularly after images surfaced showing Guardsmen sleeping on cold floors in public buildings—images that quickly sparked outrage. But this article isn’t really about that. Well, not directly.


What’s more concerning is what happened next.

As these images began circulating online, a troubling trend emerged. People started questioning their authenticity, not on the basis of verified information or investigative journalism, but on what artificial intelligence told them. Accusations of “fake news”, “AI-generated images”, and “doctored photos” spread rapidly. Rather than consulting reputable sources, many turned to AI tools to determine what was real.


And they trusted the answers without hesitation.


These AI models, often perceived as neutral, trustworthy, and authoritative, told users that although the images were real, they weren’t recent. According to the models, the photos dated back to 2021 and were taken overseas. The implication? They had nothing to do with the situation unfolding in Los Angeles.


People believed it. Anyone suggesting otherwise was dismissed as misinformed or biased. The idea that these images were being used to fuel an anti-Trump agenda gained traction, all because an algorithm said so.


But there’s one major flaw: the AI was wrong.


These images didn’t exist online before June 2025. They aren’t from 2021. They weren’t taken abroad. They are, in fact, current and accurate, just as the original reports stated. But because AI tools misidentified them, many dismissed the truth. This isn’t just a harmless mistake; it’s a serious issue.

We are placing too much trust in machines that cannot offer certainty. These tools don’t rely on real-time data or fact-checking methods; they generate responses based on probabilities and patterns in the data they’ve been trained on. And when those outputs are flawed, people can be dangerously misled.


So what happens when more and more people begin to trust AI over journalists, subject matter experts, or even their own eyes?


We risk entering a reality where truth is no longer defined by facts, but by algorithms—where something can be deemed false not because it lacks evidence, but because a machine didn’t recognise it. If we reach that point, how do we challenge power? How do we uphold accountability? How do we know what’s real?


AI is a remarkable tool. But it is just that—a tool. And when tools are treated as infallible, the consequences can be far-reaching. If we blindly trust AI to define our reality, we may find ourselves living in a world where facts are optional, and truth becomes whatever the machine decides it is.
