The UK’s new deepfake laws: what is now illegal, what it means in practice, and what could come next
- Paul Francis
- Feb 17
- 5 min read
Deepfakes have moved from a niche tech trick to something people can create on a phone in minutes. The UK is now tightening the law to deal with the most harmful uses, especially sexually explicit deepfakes made without consent. The headline is simple: the UK is moving from “it’s illegal to share” to “it’s illegal to make” in key scenarios.

What the law already covered (before the newest changes)
Before the current push, UK law already targeted intimate image abuse. Amendments made by the Online Safety Act 2023 updated the Sexual Offences Act 2003 to criminalise sharing, or threatening to share, an “intimate photograph or film” without consent, and that includes content that “appears to show” someone, which is where sexually explicit deepfakes fit.
So if someone made a sexually explicit deepfake and posted it, sent it, or threatened to leak it, there has already been a clear criminal route for prosecution.
What the UK is adding: making sexually explicit deepfakes illegal to create
The big gap that campaigners and MPs kept pointing to was this: sharing could be an offence, but creating a sexually explicit deepfake was not always directly captured.
The government has tabled changes to criminalise the intentional creation of sexually explicit deepfakes, with the offence turning on intent and the absence of consent. In plain English: if you generate a sexually explicit deepfake of a real adult without their consent, you are moving into criminal territory even if you never publish it.
The government has also publicly stated that creators of sexually explicit deepfakes could face prosecution, and referenced sentences of up to two years as part of the package being pursued through forthcoming legislation.
The “caught out” part: how ordinary people can stumble into an offence
A lot of people hear “deepfake law” and think it only applies to hardcore offenders. The reality is that the new direction of travel raises risk for a wider group, because creation itself becomes the focus.
Common ways people could get caught out:
Using “nudify” or face-swap apps on someone you know: If the output is sexually explicit and the person did not consent, “it was a joke” is not a magic shield. The government has explicitly called out nudification-style tools in its crackdown messaging.
Making it privately and never posting it: The whole point of the new creation offence is to cover scenarios where the harm occurs even if the image is never uploaded.
Commissioning or requesting someone else to generate it: People often think liability stops with “the creator”. In practice, investigators look at who asked, who paid, who supplied images, and who directed the result. The policy intent is to clamp down on the behaviour end to end, not just the final upload.
Assuming “public photo” means “public permission”: A selfie on Instagram is not consent for someone else to turn it into explicit material. The consent standard is central to both the sharing offence and the proposed creation offence.
Keeping it “semi-private” in group chats: Sharing to even a small group can still be sharing. If it spreads further, your risk rises fast because investigators can follow the distribution trail.
How enforcement can happen in the real world: digital forensics on phones and laptops, app logs, payment trails, cloud backups, chat exports, and platform reports. And because platforms now have stronger duties under the Online Safety Act, takedowns and reports happen faster, which also creates evidence trails sooner.
How this could start affecting AI art and creators
Most people making AI art are not trying to abuse anyone, but the line gets blurry when the work uses real faces, real bodies, or outputs that look exactly like a real person.
Here is a practical way to think about it:
Lower risk AI art use
Fully fictional characters or clearly stylised outputs that do not map onto a real person
Licensed models, model releases, or explicit written consent
Editorial or educational demonstrations that use synthetic, non-identifiable faces
Higher risk AI art use
Photorealistic outputs that use a real person’s likeness, especially if sexualised
“Make my ex nude” style prompting, even if you never post it
“Parody” claims where the output is still explicit and identifiable
Even if a creator thinks they are making “art”, the law is increasingly focused on consent and harm, not the label on the output. The government’s stated intention is specifically about sexually explicit deepfakes without consent.
Good creator hygiene going forward (simple and realistic):
If it is a real person, get explicit consent, in writing if possible
Avoid sexualised likeness work entirely unless you are working with a consenting adult model under a clear agreement
Keep prompt records and consent records for commercial work
Consider watermarking or clearly labelling AI-generated content where appropriate (this is not a legal shield, but it helps reduce deception risk; see the sketch after this list)
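For creators who want to put the record-keeping points above into practice, here is a minimal sketch in Python. It assumes the Pillow imaging library; the file names, metadata keys, and log fields are illustrative choices, not any legal or industry standard, and embedded metadata is easy to strip, so treat this as hygiene rather than protection.

```python
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_and_log(image_path: str, out_path: str, prompt: str, consent_ref: str) -> None:
    """Embed an AI-generated label in PNG metadata and append a consent/prompt record."""
    img = Image.open(image_path)

    # Write plain-text labels into the PNG's metadata chunks.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("consent_reference", consent_ref)  # e.g. an ID for a signed release
    img.save(out_path, pnginfo=meta)

    # Append a simple record of what was generated and under what consent.
    record = {
        "output": out_path,
        "prompt": prompt,
        "consent_reference": consent_ref,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    with open("generation_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

# Hypothetical usage: a commissioned portrait made with the subject's written consent.
label_and_log(
    "raw_output.png",
    "portrait_labelled.png",
    prompt="stylised portrait from consented reference photos",
    consent_ref="release-2025-014",
)
```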
What the Online Safety Act really means
The Online Safety Act is less about banning everything and more about forcing platforms to do risk management properly.
Two rollout dates matter:
17 March 2025: platforms have a legal duty to protect users from illegal content, aligned to Ofcom’s first codes of practice.
25 July 2025: platforms have a legal duty to protect children, including using “highly effective” age assurance for porn and other harmful content.
Ofcom is the regulator, and the enforcement toolkit is serious: fines of up to £18 million or 10% of qualifying worldwide revenue (whichever is greater) and, in extreme cases, court orders restricting access to a service in the UK.
What else is in the pipeline (and why people are watching closely)
The deepfake changes are not happening in isolation. The UK is also signalling broader moves under the Online Safety regime and related bills.
1) Bringing AI chatbots explicitly into scope
Following the Grok scandal, the government is moving to make sure AI chatbots are explicitly covered by Online Safety duties, so chatbot providers can be held accountable if they fail to prevent illegal harms.
2) Bigger child safety restrictions, including under-16 access debates
There is active discussion and consultation around restricting under-16 access to certain services and features, and even around how VPN workarounds would be handled.
3) Stronger measures around self-harm content and safety-by-design
Parliamentary and regulatory pressure is pushing toward more proactive obligations, not just reacting after harm spreads. Ofcom’s codes and regulatory documents are already setting the direction of travel.
Why this is the right move, with caveats
I think criminalising non-consensual sexually explicit deepfakes is a good and necessary step. It targets real harm, closes an obvious loophole, and gives victims better protection.
At the same time, I am wary about what could be restricted next, especially if regulation expands in ways that accidentally sweep up legitimate creative work, commentary, satire, or benign AI art. The key will be whether future changes stay tightly focused on consent, harm, and clear illegal conduct, rather than drifting into broad controls on speech or creativity.