The New Age of Digital Danger: Why Cybersecurity Fears Are Rising Across the UK


3 December 2025

Paul Francis


Cybercrime in the UK has entered a new phase. Once dominated by obvious phishing emails and fake phone calls, online fraud has evolved into a sophisticated ecosystem powered by artificial intelligence, deepfake video, cloned voices and social media adverts that look almost identical to legitimate campaigns. The result is a surge in public concern, with recent research showing that British consumers feel more vulnerable to digital threats today than at any point in the last decade.


[Image: A person wearing headphones works at a computer in a dark room, with code displayed on two monitors.]

A new survey by Mastercard reveals that nearly three quarters of UK respondents are now more worried about cybersecurity than they were two years ago. This growing anxiety reflects a shift in the digital environment, where fraudsters are no longer amateurs sending poorly written emails, but coordinated groups using commercial-grade technology and advertising platforms to target victims at unprecedented scale.


This article looks at why concerns are rising, who is being targeted, and how AI, fake adverts and social media platforms have become central to modern scams.


The Surge in Cybersecurity Fear

The 2025 Mastercard study paints a clear picture of a public increasingly anxious about online safety. According to their findings:

  • 74 percent of UK respondents feel more concerned about cybersecurity today than two years ago.

  • More than half of Millennials and Gen Z have discussed cybersecurity with friends or family recently, suggesting a sharp rise in everyday awareness.

  • Many participants believe AI will make it harder to distinguish genuine online content from fraudulent material.


This rise in concern is not misplaced. Cybercriminals now use tools that can generate realistic imagery, video and audio at scale, helping scams spread faster and become more convincing. As the technology becomes cheaper and easier to use, the number of attacks grows.


AI and Deepfake Scams Enter the Mainstream

Over the last two years, the UK has seen a wave of high-profile cases that highlight how AI is transforming online crime.


The Arup Deepfake Fraud

In early 2024, engineering and design firm Arup suffered a loss of more than twenty million pounds after an employee was tricked by an AI-generated video call impersonating company leadership. The scammers used deepfake technology to mimic real executives, convincing staff to authorise a major transfer.


This case became a global warning that deepfake scams are no longer theoretical. They can deceive trained professionals inside major organisations.


Deepfake Celebrity Adverts

Fraudsters are now using AI-generated adverts featuring well known public figures to promote fake investment schemes. In the UK, Martin Lewis was again used without permission in a deepfake crypto scam. Dozens of people believed the video was genuine and lost money.


These adverts often appear on social platforms, where they look polished enough to pass as legitimate marketing campaigns.


Voice Cloning Scams

Surveys show that one in four UK consumers has now received a scam call that appears to use AI-generated or cloned voices. These calls often claim to be from banks, government bodies or service providers. The realism of synthetic voices makes them far more convincing than traditional scam calls.


These developments explain why public anxiety is rising. The threat has become harder to detect using traditional “trust your instincts” advice.


Why Millennials Are Becoming Prime Targets

Historically, older adults were considered the most vulnerable to online fraud. In 2025, the trend has shifted. Fraudsters increasingly target Millennials and younger adults because:

  • they spend more time on social platforms where scam adverts run

  • they trust online shopping and digital adverts more readily

  • they often respond more quickly to promotional content

  • impersonation scams can exploit their familiarity with video-first platforms like Instagram, TikTok and Snapchat


Mastercard’s research also suggests that younger adults talk more frequently about cybersecurity because they feel more exposed to digital risk.


Social Media Platforms and Their Role in Scam Adverts

Few factors have alarmed cybersecurity experts more than recent revelations about Meta, the parent company of Facebook and Instagram.


A 2025 Reuters investigation revealed:

  • Meta’s internal estimates suggested it earned around 10 percent of its 2024 revenue, roughly sixteen billion US dollars, from fraudulent or banned-goods adverts.

  • Users across Meta’s platforms were exposed to as many as 15 billion higher-risk scam adverts every day, according to leaked documents.

  • Regulators in the United States are now calling for formal investigations into how these adverts spread so widely.


These findings do not mean Meta actively encourages scams, but they highlight a fundamental challenge: the more advert revenue a platform earns from fraudulent activity, the harder it becomes to eliminate it without impacting profit.


For UK consumers, this means a significant number of fraudulent adverts are being delivered directly through feeds and Stories on social apps that most people use daily.


The UK Landscape: Why the Fear Is Justified

Cybercrime in Britain has grown sharply in the past two years. The increase is fuelled by several converging trends:

  • AI tools that generate realistic human voices, faces and videos

  • cheap access to software designed to spoof legitimate websites (a short sketch after this list shows how a lookalike address might be flagged)

  • social platforms overloaded with unregulated third-party adverts

  • wider use of online shopping where ghost stores can appear overnight

  • criminals using mass automation to target thousands of people at once
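
Spoofed sites succeed because the fake address usually differs from the genuine one by only a character or two. As a rough illustration of that point, the short Python sketch below compares the domain in a link against a small list of trusted domains and flags near-misses. It is a minimal sketch only: the trusted list, the 0.85 similarity threshold and the example URLs are illustrative assumptions, not a real security tool.

    # A minimal sketch, assuming Python 3, of how a lookalike-domain check might work.
    # The trusted list, the 0.85 threshold and the example URLs are illustrative assumptions.
    from difflib import SequenceMatcher
    from urllib.parse import urlparse

    TRUSTED_DOMAINS = ["mastercard.com", "gov.uk", "amazon.co.uk"]

    def check_link(url):
        """Classify a URL as trusted, a suspicious lookalike, or unknown."""
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        for trusted in TRUSTED_DOMAINS:
            # Exact match or a genuine subdomain of a trusted site.
            if domain == trusted or domain.endswith("." + trusted):
                return domain + ": matches trusted domain " + trusted
            # Near-miss: spelt almost the same, but not actually that domain.
            if SequenceMatcher(None, domain, trusted).ratio() > 0.85:
                return domain + ": WARNING, closely resembles " + trusted
        return domain + ": not on the trusted list, verify before entering details"

    print(check_link("https://www.rnastercard.com/verify"))    # "rn" mimics "m"
    print(check_link("https://www.mastercard.com/en-gb.html"))

Even this crude check shows why lookalike addresses work so well: at a glance, a domain such as rnastercard.com is almost indistinguishable from the real one.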


UK regulators have issued repeated warnings about Christmas shopping scams, investment fraud, fake celebrity endorsements and misleading adverts. Consumers who believe they are digitally literate can still fall victim because the scams look almost identical to genuine content.


Why This Matters for Everyday Users

The rise of AI-enabled fraud directly affects British consumers in three ways:


1. Scams are more believable

A deepfake video, an AI-generated image, or a cloned voice gives scammers the power to impersonate anyone from a family member to a public figure.


2. Scams are more widespread

Automation lets scammers target thousands of people simultaneously across platforms, emails and messaging apps.


3. Scams are more profitable

With billions of adverts circulating on social media, fraudulent campaigns can run for days before being removed, generating significant revenue for criminals.


The average person may not even realise they have been targeted, because exposure is now part of normal online browsing.


The rapid rise of AI in everyday technology is reshaping the cybersecurity threat landscape in the UK. Deepfake video calls, fake celebrity adverts, ghost stores and voice cloning are no longer unusual. They are now part of the toolkit used by modern fraudsters.


The Mastercard survey shows that public anxiety is rising, and the evidence suggests that this concern is justified. If scammers can reach millions of users through adverts on major platforms, and if AI tools can replicate human behaviour with high accuracy, then consumers need stronger protections and better awareness.


The challenge ahead is significant. As AI continues to improve, the boundary between real and fake content will blur even further. What matters now is understanding the risk and building the skills, safeguards and regulations necessary to counter it.


Raja Jackson: Wrestling Dreams Derailed After Assault Allegations

27 August

Paul Francis

The son of MMA legend Quinton “Rampage” Jackson has found himself at the centre of a storm after an independent wrestling match in Los Angeles turned violent. Raja Jackson, a trainee wrestler, has been accused of assaulting an opponent after a scripted move went wrong, leaving fans, promoters and even his own father facing difficult questions about his future.


[Image: Raja Jackson pins his opponent in the ring as the referee watches, with the crowd in the background.]

What Happened in the Ring

During a recent Knokx Pro Wrestling event in California, Raja was booked in a standard exhibition match against local performer Stuart “Syko Stu” Smith. What began as a routine bout allegedly turned dangerous when Raja delivered repeated blows to his downed opponent, continuing well after the scripted finish. Eyewitnesses described it as a chilling moment where the staged performance gave way to something far more real.


Smith was reportedly left bloodied and unconscious, requiring medical treatment. A GoFundMe page has since been launched to cover his hospital costs. The incident was severe enough that Knokx Pro immediately suspended Raja and confirmed he would no longer appear in their shows. The Los Angeles Police Department has also confirmed an investigation into possible assault charges.


Who is Raja Jackson?

Raja, in his early twenties, is the eldest son of Quinton “Rampage” Jackson, one of the UFC’s most colourful champions during the 2000s. While his father became famous in the Octagon for his power slams and knockout punches, Raja pursued a different path, entering the world of professional wrestling rather than mixed martial arts.


Training at Knokx Pro Wrestling Academy, which is closely tied to WWE Hall of Famer Rikishi and the Anoa’i wrestling family, Raja was seen as a young talent with potential. Until this incident, he had no public history of violence or criminal behaviour. Within the wrestling community, however, some described him as brash and eager to prove himself.


Rampage’s Remark About Bail Money

Attention has also turned to comments Rampage Jackson made in an interview several years ago. Speaking candidly about his children, Rampage joked that he had saved money for two of his sons to go to college, while setting aside money for bail for his third. The remark was made in a light-hearted tone at the time, but fans have since speculated whether he was referring to Raja and whether that comment reflected a deeper concern about his temperament.


While it may have been nothing more than a joke, the resurfacing of that quote has added fuel to debates over whether Raja had shown warning signs of volatility before stepping into the ring.


[Video: Retired pro wrestler Stevie Richards breaks down what has happened]

Why This Crossed the Line in Wrestling

Professional wrestling is unique in that it blurs the lines between performance and sport. Matches are choreographed, and opponents work together to create the illusion of combat without causing real harm. This cooperative aspect is considered sacred in the industry.

When a wrestler breaks from the script and intentionally hurts their opponent, it is known as a “shoot.” A scripted, staged performance is referred to as a “work.” While works are the foundation of the business, shoots are seen as unprofessional and dangerous, violating the trust between performers.


What happened in Raja’s match is being widely regarded as a shoot, and one that placed his opponent’s health in jeopardy. For that reason, industry insiders have been quick to condemn his actions, stressing that pro wrestling has no place for unsanctioned violence.


The Legal Implications

From a legal perspective, Raja’s situation is serious. While athletes consent to physical contact within the rules of their sport, the law draws the line at excessive or intentional harm beyond what is reasonably expected. Courts have repeatedly held that consent does not cover actions “outside the ordinary scope of play.”


If police determine that Raja’s extra strikes constituted assault, he could face charges ranging from misdemeanour assault to felony assault, depending on the injuries sustained by Smith. Beyond criminal charges, Raja could also be sued in civil court for medical costs, damages and loss of income.


What Happens Next?

Knokx Pro Wrestling has made it clear that Raja will not return to their shows, and larger promotions like WWE or AEW are unlikely to take a chance on him while legal questions hang over his head. What was meant to be the beginning of his career could, in fact, become the end of it.


For now, all eyes are on the LAPD investigation and whether formal charges will be brought. If the case proceeds, it could be a defining moment not only for Raja Jackson but for the reputation of independent wrestling promotions, which must reassure fans and performers that safety remains a priority.
