Written 9th February 2026 by Ruth Peters
Last week, the UK brought into force a new law that makes it a criminal offence to create “deepfake” intimate images without consent. This legislation closes a gap in existing law by targeting those who produce or even commission sexually explicit fake images of someone without their consent, not just those who distribute them. As a result, individuals and even companies involved in generating non-consensual deepfake content can now be prosecuted.
This blog post explains the background of deepfake technology, the motivations behind the new law, its key provisions and Olliers’ perspective on its implications for individuals and businesses. We also provide practical legal insight, highlighting potential concerns and steps to take if you are accused under this legislation.
What are “deepfakes”?
The term deepfake refers to synthetic media, typically video or audio but also still images, created or altered using advanced artificial intelligence (AI). These AI tools can swap faces in a video, clone voices, or generate hyper-realistic images that appear to show someone doing or saying things they never actually did. In short, deepfakes can make fiction look convincingly like fact.
Originally, deepfake technology had some benign or creative uses (for example, in film special effects or satire). However, it has gained notoriety as a tool for malicious deception and abuse. In recent years we have seen high-profile examples: from forged videos of celebrities (like a convincing fake of actor Tom Cruise on TikTok) to a fabricated clip of Ukraine’s president seemingly yielding in the war. More alarmingly, deepfakes have been used in criminal scams, such as impersonating CEOs to defraud companies, and in political misinformation campaigns to manipulate public opinion.
One of the most harmful applications of deepfake technology has been the creation of non-consensual pornographic or sexually explicit images and videos. In these cases, a person’s face (often a woman’s) is digitally pasted into explicit content without their consent, creating a fake sex video or photo that can be indistinguishable from real imagery. Such deepfake abuse is not a prank; it is widely recognised as a form of sexual violation and harassment. Unfortunately, this abuse has been on the rise: studies show an explosion in deepfake content online, with an estimated 8 million deepfakes shared in 2025, up from just 500,000 in 2023, and the vast majority believed to be pornographic and targeted at women.
Why was new deepfake legislation introduced?
Prior to this law, the UK already had offences covering some aspects of image-based abuse, often referred to as “revenge porn” laws.
Since 2015, it has been illegal to share private sexual photos or videos of someone without consent, and in 2023 the Online Safety Act explicitly extended this to sharing or threatening to share deepfake intimate images as well. However, creating a fake explicit image itself (as opposed to distributing it) was not a criminal offence when the person depicted was an adult.
This meant a malicious individual could digitally forge a nude image of someone without consent and, provided they kept it to themselves, commit no offence under the existing law. This was widely seen as a loophole, especially as deepfake technology became easier to misuse.
Public pressure and campaigning played a huge role in driving reform. Victims, activists and women’s rights organisations had long argued that the law must catch up with technology to protect people (disproportionately women) from this form of abuse. The government acknowledged that sexually explicit images made without consent “constitute a fundamental violation of women’s autonomy and dignity”. In its “Plan for Change” addressing violence against women and girls, a priority was set to clamp down on emerging threats like deepfakes.
There have also been headline-grabbing incidents. Early this year, it emerged that users of Grok, the AI chatbot and image generator on Elon Musk’s X (formerly Twitter), were producing nudified images of women and even child abuse images using the tool. This scandal prompted the UK’s media regulator, Ofcom, to open an investigation into X for potentially facilitating illegal content. The uproar around such incidents, implicating big tech platforms and revealing how easily these tools could be abused, added urgency to implementing the deepfake legislation.
Key Provisions of the New Deepfake Legislation
The new deepfake law came into force on 6th February 2026 as an amendment to existing legislation. Section 138 of the Data (Use and Access) Act 2025 amended the Sexual Offences Act 2003 to create new offences relating to the creation of deepfake intimate images without the consent of the person depicted.
Here are the core provisions of this law and what they entail:
Creating a Sexual Deepfake without Consent is a Criminal Offence
It is now illegal to intentionally create a “purported sexual image” of someone without their consent if it appears to show them nude or engaged in a sexual act. This offence covers not just wholly fabricated images but any digitally altered images (so conventional Photoshop jobs are caught alongside AI-generated deepfakes). Crucially, the offence requires that the image was created intentionally, that the person depicted did not consent, and that the creator had no reasonable belief that they consented.
This new offence carries a potentially unlimited fine upon conviction.
“Commissioning” Deepfakes is Also Illegal
The offence isn’t limited to the hands-on creator. The legislation also criminalises requesting or causing someone else to create a non-consensual explicit deepfake. In other words, if Person A asks or pays Person B to make a fake sexual image of Person C, both A (who instigated it) and B (who did the editing) could be guilty of a criminal offence under the law.
This ensures those who procure deepfakes (for example, paying a website to produce a fake pornographic video of an ex-partner) cannot escape liability by saying “I didn’t do it myself.”
Sharing or Threatening to Share Intimate Images (Real or Fake) Remains Illegal
Importantly, the pre-existing laws against sharing intimate images without consent remain in force. Under the Sexual Offences Act 2003, it’s a crime to share private sexual photos or films of someone without consent, or to threaten to share them.
Sharing carries a maximum sentence of two years’ imprisonment (and/or a fine) in most cases. If a deepfake image is created and later shared, the culprit may face two charges, one for the creation and one for the distribution, potentially leading to a heavier overall sentence. Even if the deepfake is never shared publicly, threatening to share it (to blackmail or torment the victim) is itself an offence.
Does the new deepfake legislation deal with child sexual abuse material?
It’s worth noting that the law focuses on adults because creating or sharing sexual images of minors (under 18), real or fake, was already unequivocally illegal. UK child pornography laws cover “pseudo-photographs” of children, which includes AI-generated child sexual abuse material, and indeed there have already been prosecutions where offenders who made AI child-abuse images received very lengthy prison sentences.
So, the new deepfake offence mainly addresses the gap for adult victims who previously had no specific recourse if an image of them was faked.
Role of the Online Safety Act & Ofcom
To reinforce the impact of the new law, the government is leveraging the Online Safety Act 2023 regulatory regime. It has moved to designate the creation of non-consensual intimate images (deepfakes included) as a “priority offence” under the Online Safety Act. What this means is that online platforms, such as social media, websites, or any service hosting user content, have a legal duty to proactively prevent and swiftly remove any content that amounts to this offence. Platforms will need to detect and block deepfake pornographic content just as they must with, say, terrorist material or child abuse images.
Ofcom, as the regulator, can take enforcement action if companies fail to do enough. Essentially, the law isn’t just chasing individual culprits; it’s also pressuring tech companies to clean up and avoid facilitating deepfake abuse. In the recent X (Twitter) case, we see this in action: Ofcom launched an investigation after reports of deepfake nudes on the platform, and the government made clear that both individuals and platforms would be held responsible for such content going forward.
Deepfake Detection and Industry Collaboration
Because deepfakes can be hard to detect, enforcement is as much a technical challenge as a legal one. The UK is taking steps to boost detection capabilities, and the government has convened a coalition of tech companies (including major players like Microsoft) and experts to develop a “deepfake detection evaluation framework”.
This framework will set standards and test the latest detection tools against real-world scenarios, for example, to identify AI-generated porn or financial scams. The aim is to pinpoint gaps where current detection tech falls short and push industry improvements. By establishing consistent benchmarks, platforms can be held to account for using effective detection methods.
Banning Deepfake Creation Tools
Another enforcement-oriented step in the pipeline is banning the creation tools themselves. “Nudification” apps or services, which automatically strip clothing from images, are a particular target. The government signalled it will make it a criminal offence to develop or provide these tools. Once that’s in effect, even companies outside the typical social media sphere (for instance, websites offering deepfake generation) could face prosecution if their software is used for abuse.
This approach goes after the supply side of the problem, not just the end-user: AI tools designed to generate sexual images of real people will be outlawed. Enforcement would likely involve taking down such sites and penalising their operators, reducing the availability of easy-to-use deepfake generators to the public.
Global and Cross-Border Issues
Deepfake content online is a global problem. UK law applies to acts committed in the UK (or by UK persons), but what if a perpetrator is overseas? The law’s deterrent effect may not reach them directly, though platforms can still remove content globally. International cooperation, and analogous laws in other jurisdictions, will be key. Notably, the UK’s efforts are among the first of their kind, and other countries are also starting to crack down on deepfake pornography. Future enforcement may therefore involve cross-border investigations as well.
As this law is brand new, test cases are only just emerging. There have yet to be landmark prosecutions purely for creating a deepfake of an adult, since that conduct has only just become illegal. However, we can look at adjacent cases and controversies for insight into how the issue is unfolding:
AI-Generated Child Abuse Cases
Even before the new law, there have been serious cases involving AI-generated imagery, but these dealt with child sexual abuse material (CSAM).
For instance, in late 2024, a man in Bolton was convicted of using AI to create indecent images of children; he received an 18-year prison sentence. In another case, a man in Sussex found with thousands of AI-made child abuse images was sentenced to several years in jail. These cases attracted heavy sentences under existing child pornography laws, emphasising that the justice system treats AI-made images of minors as seriously as “real” abuse images.
While these aren’t about adult deepfakes, they underscore the courts’ intolerance for AI-facilitated sexual crimes, and they likely paved the way for recognising harm in adult deepfakes too.
The Grok/X Controversy
Elon Musk’s platform X (Twitter) became a flashpoint for the deepfake debate in late 2025 when reports emerged of users abusing a new AI image tool called Grok. According to Ofcom and the Internet Watch Foundation, offenders openly bragged on dark web forums about using Grok to generate nude images of women and even child sexual imagery. The fact that a mainstream platform’s tool could be misused to that extent was alarming.
Ofcom’s investigation into X coincided with ministers pushing the deepfake law into force, a clear example of regulatory and legal action converging. Musk, for his part, sparked controversy by responding that critics “just want to suppress free speech”. This raises a debate: some in the tech sphere worry that anti-deepfake laws might overreach and stifle expression or innovation.
However, the UK government’s stance is that creating sexual images of someone without consent is not a free speech issue, but a form of abuse. The public reaction largely sided with protecting victims; many officials condemned X’s slow response and even suggested boycotts of the platform. This controversy highlights how platform responsibility and freedom of expression arguments will continue to be a balancing act. But given the specific, intimate nature of the content targeted by this law, any “free speech” defence is likely to find little sympathy in UK courts.
Olliers Solicitors
If you are under investigation for a deepfake-related offence (or fear you could be), seek legal advice immediately. Having a solicitor who understands both the technology and the law (like those at Olliers) is crucial. Each case will turn on its specifics, so ensure you get tailored advice. Our team is experienced in emerging digital offences and can guide you through police interviews or court proceedings, ensuring your side is heard.
If you have specific concerns or need advice regarding this new legislation as someone accused, Olliers’ specialist solicitors are here to help. Feel free to contact us for a confidential discussion of your situation and the best way forward under the new deepfake regime. The legal landscape may be evolving, but our commitment to protecting our clients’ rights remains constant.
We understand that facing allegations of a sexual offence can be overwhelming and deeply distressing. At Olliers, we understand the emotional toll and the importance of having someone in your corner who will listen, support and guide you through every step of the legal process. Our specialist team offers confidential, compassionate and expert advice tailored to your circumstances. Contact us today by completing the form below, telephoning 0161 8341515 or by emailing info@olliers.com. We’re here to help, without judgement and with the utmost discretion.
Manchester
Head Office
- 0161 8341515
- info@olliers.com
- Fourth Floor, 44 Peter Street, Manchester, M2 5GP
About the Author
Ruth leads the business development team at Olliers across all areas of specialism. Ruth was the Manchester Legal Awards 2021 Solicitor of the Year.
She has been with the firm for more than 20 years and has an enviable level of experience across the entire spectrum of criminal defence.
