The New Age of Digital Danger: Why Cybersecurity Fears Are Rising Across the UK
- Paul Francis

Cybercrime in the UK has entered a new phase. Once dominated by obvious phishing emails and fake phone calls, online fraud has evolved into a sophisticated ecosystem powered by artificial intelligence, deepfake video, cloned voices and social media adverts that look almost identical to legitimate campaigns. The result is a surge in public concern, with recent research showing that British consumers feel more vulnerable to digital threats today than at any point in the last decade.

A new survey by Mastercard reveals that nearly three quarters of UK respondents are now more worried about cybersecurity than they were two years ago. This growing anxiety reflects a shift in the digital environment, where fraudsters are no longer amateurs sending poorly written emails, but coordinated groups using commercial-grade technology and advertising platforms to target victims at unprecedented scale.
This article looks at why concerns are rising, who is being targeted, and how AI, fake adverts and social media platforms have become central to modern scams.
The Surge in Cybersecurity Fear
The 2025 Mastercard study paints a clear picture of a public increasingly anxious about online safety. According to their findings:
- 74 percent of UK respondents feel more concerned about cybersecurity today than two years ago.
- More than half of Millennials and Gen Z have discussed cybersecurity with friends or family recently, suggesting a sharp rise in everyday awareness.
- Many participants believe AI will make it harder to distinguish genuine online content from fraudulent material.
This rise in concern is not misplaced. Cybercriminals now use tools that can generate realistic imagery, video and audio at scale, helping scams spread faster and become more convincing. As the technology becomes cheaper and easier to use, the number of attacks grows.
AI and Deepfake Scams Enter the Mainstream
In the past two years, the UK has seen a wave of high-profile cases that highlight how AI is transforming online crime.
The Arup Deepfake Fraud
In early 2024, the engineering and design firm Arup lost more than twenty million pounds after an employee was tricked by an AI-generated video call impersonating company leadership. The scammers used deepfake technology to mimic real executives, convincing staff to authorise a major transfer.
This case became a global warning that deepfake scams are no longer theoretical. They can deceive trained professionals inside major organisations.
Deepfake Celebrity Adverts
Fraudsters are now using AI-generated adverts featuring well-known public figures to promote fake investment schemes. In the UK, Martin Lewis's likeness was again used without permission in a deepfake crypto scam. Dozens of people believed the video was genuine and lost money.
These adverts often appear on social platforms, where they look polished enough to pass as legitimate marketing campaigns.
Voice Cloning Scams
Surveys show that one in four UK consumers has now received a scam call that appears to use AI-generated or cloned voices. These calls often claim to be from banks, government bodies or service providers. The realism of synthetic voices makes them far more convincing than traditional scam calls.
These developments explain why public anxiety is rising. The threat has become harder to detect using traditional “trust your instincts” advice.
Why Millennials Are Becoming Prime Targets
Historically, older adults were considered the most vulnerable to online fraud. In 2025, the trend has shifted. Fraudsters increasingly target Millennials and younger adults because:
- they spend more time on social platforms where scam adverts run
- they trust online shopping and digital adverts more readily
- they often respond more quickly to promotional content
- impersonation scams can exploit their familiarity with video-first platforms such as Instagram, TikTok and Snapchat
Mastercard’s research also suggests that younger adults talk more frequently about cybersecurity because they feel more exposed to digital risk.
Social Media Platforms and Their Role in Scam Adverts
Few factors have alarmed cybersecurity experts more than recent revelations about Meta, the parent company of Facebook and Instagram.
A 2025 Reuters investigation revealed:
- Meta’s internal estimates suggested it earned around 10 percent of its 2024 revenue, roughly sixteen billion US dollars, from fraudulent or banned-goods adverts.
- Users across Meta’s platforms were exposed to as many as 15 billion higher-risk scam adverts every day, according to leaked documents.
- Regulators in the United States are now calling for formal investigations into how these adverts spread so widely.
These findings do not mean Meta actively encourages scams, but they highlight a fundamental challenge: the more advert revenue a platform earns from fraudulent activity, the harder it becomes to eliminate it without impacting profit.
For UK consumers, this means a significant number of fraudulent adverts are being delivered directly through feeds and Stories on social apps that most people use daily.
The UK Landscape: Why the Fear Is Justified
Cybercrime in Britain has grown sharply in the past two years. The increase is fuelled by several converging trends:
- AI tools that generate realistic human voices, faces and videos
- cheap access to software designed to spoof legitimate websites
- social platforms overloaded with unregulated third-party adverts
- wider use of online shopping, where ghost stores can appear overnight
- criminals using mass automation to target thousands of people at once
UK regulators have issued repeated warnings about Christmas shopping scams, investment fraud, fake celebrity endorsements and misleading adverts. Consumers who believe they are digitally literate can still fall victim because the scams look almost identical to genuine content.
Why This Matters for Everyday Users
The rise of AI-enabled fraud directly affects British consumers in three ways:
1. Scams are more believable
A deepfake video, an AI-generated image, or a cloned voice gives scammers the power to impersonate anyone from a family member to a public figure.
2. Scams are more widespread
Automation lets scammers target thousands of people simultaneously across platforms, emails and messaging apps.
3. Scams are more profitable
With billions of adverts circulating on social media, fraudulent campaigns can run for days before being removed, generating significant revenue for criminals.
The average person may not even realise they have been targeted, because exposure is now part of normal online browsing.
The rapid rise of AI in everyday technology is reshaping the cybersecurity threat landscape in the UK. Deepfake video calls, fake celebrity adverts, ghost stores and voice cloning are no longer unusual. They are now part of the toolkit used by modern fraudsters.
The Mastercard survey shows that public anxiety is rising, and the evidence suggests that this concern is justified. If scammers can reach millions of users through adverts on major platforms, and if AI tools can replicate human behaviour with high accuracy, then consumers need stronger protections and better awareness.
The challenge ahead is significant. As AI continues to improve, the boundary between real and fake content will blur even further. What matters now is understanding the risk and building the skills, safeguards and regulations necessary to counter it.