AI‑Generated Indecent Images: How the Law Is Changing and What It Means for You

Written 7th May 2026 by Ruth Peters

The law around indecent images is evolving fast to keep up with artificial intelligence. Today’s AI tools can create incredibly realistic sexual images and videos, including of children, from scratch or by manipulating innocent photos. These “synthetic” indecent images raise urgent questions: is it illegal to make or possess them if no real child was physically harmed? What about AI “deepfakes” that put someone’s face into a sexual image? The short answer is yes – UK law already criminalises these images, and new offences are being added to close any perceived loopholes.  

For anyone worried they might have done something wrong, or professionals tracking this area, this blog explains the current law and upcoming changes on AI-generated indecent images, with a defence perspective. We’ll break down what counts as an indecent image of a child (including AI-generated “pseudo-photographs”), how deepfake sexual images are treated, and the penalties one could face. We’ll also highlight key legal updates, including the Online Safety Act 2023 and new offences introduced in 2026, so you understand where the law stands now. 

If you or someone you know is under investigation involving AI-generated images, it’s vital to know your rights and the law’s scope. UK authorities have made it clear that AI or not, indecent images are treated as seriously as “real” ones. Ignorance or the hope that “no real victim” means no crime could lead to severe consequences, including prison and sex offender registration.  

AI and Indecent Images: A Quick Timeline of UK Law Evolution 

2026 – Data (Use and Access) Act (amendment)

In February 2026, a new offence of creating sexually explicit deepfake images without consent was introduced. It is now illegal to intentionally create a “purported sexual image” of someone without their consent, if it appears to show them nude or engaged in a sexual act, under section 66E of the Sexual Offences Act 2003. This offence covers not just wholly fabricated images but any digitally altered images. It is punishable by an unlimited fine on conviction.

2023 – Online Safety Act 2023

Overhauled image abuse offences. It amended the Sexual Offences Act 2003 to explicitly cover deepfakes and other intimate images "that appear to show" someone without consent. New offences for non-consensual intimate images and threats to share them were introduced (max 2 years' imprisonment).

2019 – Voyeurism (Offences) Act 2019

Criminalised "upskirting" (taking images under someone's clothing without consent). Again, not directly about AI, but part of broadening image abuse laws as technology poses new risks.

2015 – Criminal Justice and Courts Act 2015

Made it an offence to share private sexual photographs or videos without consent (the original "revenge porn" law). Though focused on real images of adults, it signalled growing legal attention to image-based abuse and would later inform deepfake laws.

2009 – Coroners and Justice Act 2009

Created an offence of possessing "prohibited images of children". This targets non-photographic indecent images (like cartoons or CGI) that don't look like actual photos – further proof that any depiction of child sexual abuse, no matter how created, is illegal.

1988 – Criminal Justice Act 1988

Introduced a specific offence of possessing an indecent photograph (or pseudo-photograph) of a child. This closed a gap, ensuring even just possession (with no distribution) is criminal.

1978 – Protection of Children Act 1978

Made it illegal to take, make, or distribute indecent photographs of children under 18. This law (as later amended) also covers "pseudo-photographs" – images that look like real photos of children (e.g. computer-generated).

UK law has consistently moved to catch up with technology – from covering computer-generated child images decades ago to tackling today’s AI deepfakes. The pattern is clear: if technology enables sexual exploitation or abuse imagery, the law adapts to criminalise it. So, if you’re wondering whether AI-generated indecent images are a legal grey area, they are not – they fall under existing criminal offences, and recent law reforms are making enforcement even tougher.  

What Are “AI-Generated” Indecent and Intimate Images? 

AI-generated indecent or intimate images are sexual or pornographic images created or altered by artificial intelligence.  

They typically fall into two broad categories: 

  • AI-created child abuse images (AI-generated CSAM): These are images of children (or individuals who appear to be children) in sexual contexts, produced by AI. Some are made from scratch by image-generation models following user prompts, meaning no real child directly appears in the image. Others are made by manipulating real photos of children, like taking an innocent family photo and using AI to create a sexualised fake from it. 
  • AI “deepfakes” (non-consensual intimate images of real people, often adults): These involve using AI (often deep learning algorithms) to swap or synthesise a person’s likeness into sexual content. For instance, someone might take a photo of a colleague or ex-partner and use AI to generate a pornographic image or video featuring that person’s face. The same technique can be used on celebrities or anyone else. Deepfakes can also be entirely fictitious, merging features from various sources to create a realistic-looking person in a sexual scenario – but often the term refers to when a real person’s identity is faked into the content.  

Why are these images a concern?  

Historically, indecent images of children (often called IIOC or CSAM) involved real victims. AI changes the dynamic: someone can produce abusive images “on demand”, perhaps thinking “no actual child was harmed, so maybe it’s legal”. Likewise, a malicious person can violate someone’s privacy and dignity by making a sexual deepfake without ever meeting them.  

This new capability has led to a surge in such content: the Internet Watch Foundation (IWF), which works to identify and remove child abuse images, saw a 380% rise in AI-generated abuse reports from 2023 to 2024, and more than 20,000 AI-made child abuse images posted on a dark web forum in one month. The content can be extremely graphic, with a noted increase in Category A (the most severe) AI images. 

Crucially, the law does not excuse AI-made images just because of how they were created. As the Crown Prosecution Service puts it:  

“Technology exists to alter photographs to appear AI-generated… The law applies equally to photographs and pseudo-photographs, regardless of their creation method”.  

In other words, if the image itself is illegal (e.g. an indecent image of a child), it doesn’t matter if it’s AI or a camera – it’s still an illegal image. 

Is it illegal to create or possess AI-generated indecent images? 

Yes. In the UK, AI-generated indecent images are illegal to create, share, or possess. The Protection of Children Act 1978 (PCA 1978) and related laws make this clear. Under Section 1 of PCA 1978, it is an offence to “take, make, distribute, or possess with intent to distribute any indecent photograph or pseudo-photograph of a child”. Here, “pseudo-photograph” explicitly means any image (however created) that looks like a photograph of a child. In plain terms, an AI-generated realistic image of a child being abused is treated as a “pseudo-photo” and is thus as criminal as a real photo.   

Additionally, Section 160 of the Criminal Justice Act 1988 makes it a separate offence to possess an indecent photograph or pseudo-photograph of a child.   

It doesn’t matter that “no child was physically abused in making the image.” UK law and courts operate on the principle that such images themselves represent a form of abuse and exploitation of children. They fuel demand for child sexual content and can lead to real children being targeted. Offenders might also use real children’s likenesses (like social media photos) to create these images, which effectively victimises those children in a new way. So, from a policy perspective, AI images are not a “victimless crime” and the law reflects that by criminalising them just the same as real images.  

What about AI “cartoon” or obvious CGI images of children?  

Even if an image isn’t photorealistic (say, a crudely drawn but indecent cartoon of a child), the law covers that too. Section 62 of the Coroners and Justice Act 2009 (in force since 2010) created the offence of possessing a “prohibited image of a child”, which includes non-photographic indecent images (like certain cartoons or drawings).  

Essentially, either an image is realistic enough to count as a pseudo-photograph, or if not, it might fall under prohibited image law – either way, possessing or creating it is illegal. 

What about deepfakes of adults?  

If you create a sexual deepfake of an adult without their consent, until recently it wasn’t directly a crime simply to create it (provided it wasn’t distributed). However, as of 2026, a new offence introduced via the Data (Use and Access) Act criminalises making explicit deepfakes without consent.

It is now illegal to intentionally create a “purported sexual image” of someone without their consent, if it appears to show them nude or engaged in a sexual act under section 66E of the Sexual Offences Act 2003. This offence covers not just wholly fabricated images but any digitally altered images (so conventional photoshop jobs are included alongside AI-generated deepfakes).    

Before that change, the law only kicked in if you shared or threatened to share such images (under the “revenge porn”/intimate image abuse laws). Now, even just making a deepfake of someone without their consent, without intending to share, is a criminal offence. This aligns with how child pseudo-images have long been outlawed, reflecting that even the creation is harmful.  

If an indecent image depicts a child (real or AI-generated), or if it’s a sexual image of an adult created without consent, it’s within the scope of UK criminal law. Law enforcement and prosecutors regard AI-generated child sexual abuse material (CSAM) as equally serious as traditional CSAM, and recent reforms ensure adult deepfake abuses are also directly punishable. 

No loopholes for AI: it's already illegal

UK law clearly covers AI-generated indecent images – they are treated as "pseudo-photographs" of children under the Protection of Children Act 1978. So even if no real child was used, creating or possessing such images is a crime with penalties up to 10 years in prison.

Deepfakes of adults – now also criminal to create

New UK legislation (2026) means it's no longer only sharing a fake intimate image that's illegal – just making a sexual deepfake of someone without consent will itself be a criminal offence. This reflects the law's shift to punish the act of creation, not just distribution.

How the Law Handles AI-Generated Indecent Images vs Real Images 

From a defence perspective, a crucial question is: Are AI-generated images treated the same as “real” indecent images of children, even at the sentencing stage?  

Yes – largely they are treated the same, because the law doesn’t distinguish the harm by the image’s origin. The content and context (what’s depicted, how widely shared, the offender’s intent) tend to influence the outcome, not whether the image was computer-generated. 

Child deepfake images 

In court, AI-made images (pseudo-photos) are classified into the same Category A/B/C scale as actual child photographs, based on the severity of the content. The Sentencing Council guidelines apply equally to pseudo images as to real images. For example, an AI image depicting a child in penetrative sexual activity is Category A (most serious), just like a real photo would be. A person convicted of making or distributing such images faces up to 10 years’ imprisonment (the statutory maximum under PCA 1978), and likely a multi-year sentence depending on volume and categories (often those dealing in AI images also have numerous images). In fact, recent cases confirm courts’ hard line:  

In R v Jaycock (2024), a man created indecent images by AI face-swapping children’s faces onto adult pornography. The Court of Appeal held these were correctly treated as “production” offences (i.e. making new images) with a high sentence range. He ultimately received 3 years 10 months in prison for a small number of AI images, illustrating that even a few generated images can yield years of custody. The judges noted such deliberate digital creation is a serious crime, exploiting children’s likenesses for sexual gratification.  

Another case in 2024 at Bolton Crown Court saw a defendant use AI to turn everyday pictures of real children into child abuse images, which he then sold to other offenders. He was caught and handed an 18-year sentence (plus 6 more on extended licence), reflecting the very high seriousness when such material is produced and distributed at scale.  

These outcomes underscore: from the law’s view, AI images of child abuse are child abuse images. A police spokesperson put it bluntly after a 2025 sentencing: “This depraved act involving the use of AI is treated just as severely as using actual photographs”.  

Adult deepfake images 

Historically, if you made a deepfake of an adult but didn’t share it, there was no specific offence, though other legislation like harassment or obscene publications might have been considered. But as soon as you sent or posted it, it became illegal under image abuse laws (the 2015 “revenge porn” law and now updated offences in 2023).  

With the new deepfake creation offence, creating an explicit fake of someone without consent, even if it stays on your device, can lead to a prosecution. That offence carries an unlimited fine on conviction. Also, in any case with a named victim (like a deepfake of an actual person), expect restraining orders or other protective orders in addition to criminal punishment.  

Will I go to prison for AI generated indecent images? 

The sentencing guidelines for indecent images of children are well-established (with factors like the number of images, categories, distribution, etc., determining length of sentence). There is no separate guideline for “AI images” – they are encompassed by the existing framework. If anything, defence lawyers may argue that because no real child was directly harmed, a sentence could lean towards the lower end of the range. However, courts have so far not shown much leniency on that basis. The presence of AI might be seen as an aggravating factor if it indicates advanced planning or potential distribution at scale.  

In sum, the law’s approach is to equate AI indecent images with real ones, from criminal liability to sentencing. For someone accused, that means you should not expect any “technicality” to lessen the seriousness: an AI Category A image will be treated as a Category A offence. 

Can the Police Detect AI-Generated Images? 

Yes, and they are actively trying to identify and stop AI-generated indecent images. The truth is, many offenders are caught not because the police spot an image is AI, but because of digital traces or tips (like an IP address, an undercover operation, or a referral from the IWF). Once an investigation starts, forensic analysis of devices often reveals the use of AI tools or the presence of certain image files, even if the images look “real” to the naked eye.  

That said, AI can indeed make detection harder in some ways: 

  • Police officers have admitted that some AI images are so lifelike, it’s difficult to visually tell them apart from real photos. If images are discovered on a suspect’s device, forensic analysts may use additional clues (like metadata, known hashes of real images, or evidence of AI software usage) to determine their origin. However, as far as the criminal charge goes, it doesn’t particularly matter if they confirm the image is AI or not – it’s illegal either way, and often multiple images (some real, some AI) are found together.  
  • The Internet Watch Foundation (IWF) and other organisations are now specifically tracking AI-generated CSAM online. They have started flagging content that appears synthetic and working with tech companies. The IWF’s recent data show that they have categorised hundreds of AI images (and can often recognise patterns). This means that if you share AI abuse material online, there is a good chance it will be detected and traced back via known websites or networks.  
  • Generative AI tools themselves leave some fingerprints. Large image-generation models might have subtle artifacts or known quirks, and researchers are developing detection algorithms to spot an AI-generated image. The UK government, under the Online Safety framework, is pushing platforms to use such technologies and expedite removal of both real and deepfake images.  

So, while an AI image might fool the human eye, police and experts have many ways to uncover the crime. One scenario is that someone could be arrested for unrelated online activity (say, a message or tip-off), leading to device seizures that reveal AI child abuse images. Devices likely contain evidence of the generation process (like installed AI software, prompt history, cached files), making it even clearer. In summary: you should assume AI images are discoverable and prosecutable – there’s no “safe hiding” behind technology. 

For those worried about inadvertently stumbling on such content: If you ever encounter what looks like an AI indecent image (for example, forwarded in a group chat), do not forward it or save it. The best action is to delete it and consider reporting to authorities (IWF has a hotline). Possession itself is an offence – but authorities focus on those deliberately generating or collecting such material rather than incidental recipients. If in doubt, seek legal advice immediately on how to protect yourself. 

What’s Changing? 

We’ve touched on new laws above, but let’s clearly outline recent and upcoming legal changes specifically relevant to AI-generated sexual images: 

  • Online Safety Act 2023 – This landmark Act (which received Royal Assent in October 2023) did many things, but the crucial part was creating new intimate image abuse offences and updating existing law. It inserted new sections 66B, 66C, 66D into the Sexual Offences Act 2003, covering sharing intimate images without consent. These offences explicitly include images that “appear to show” a person in an intimate situation. That phrase was chosen to ensure that deepfake images (which appear to show someone naked or in a sexual act) are covered, even if they are fake. The Act also criminalised threatening to share intimate images (closing a gap in the old law).  
  • New offence of creating deepfakes (2026) – As noted, by early 2025 the government announced a standalone offence to criminalise intentionally making a sexual deepfake without the depicted person’s consent. This targets situations where someone might create a humiliating nude image of someone just to harass or gratify themselves, without necessarily disseminating it. This was introduced in February 2026. It means if you privately create a deepfake of Person X, you could be charged even if you never hit “send”. And if you do share it, then you could be charged with both the creation offence and the sharing offence from the Online Safety Act.  
  • Focus on AI tools – The offence isn’t limited to the hands-on creator. The new 2026 legislation also criminalises requesting or causing someone else to create a non-consensual explicit deepfake, under section 66F of the Sexual Offences Act 2003. In other words, if Person A asks or pays Person B to make a fake sexual image of Person C, both A (who instigated it) and B (who did the editing) could be guilty of a criminal offence. This ensures those who procure deepfakes (for example, paying a website to produce a fake pornographic video of an ex-partner) cannot escape liability by saying “I didn’t do it myself.” It also signals the clear direction of travel: not just punishing images, but also those enabling their creation.  
  • Online Safety regulatory changes – Under the Online Safety Act, platforms and tech companies now have legal duties to swiftly remove indecent images (including deepfakes). For instance, from late 2024, sharing intimate images was made a “priority offence” that platforms must address. This regulatory environment means it may become harder for such content to circulate openly online (forcing it further underground, where law enforcement focus increases).  

What does this mean for individuals?  

Essentially, the net is tightening. By 2026, nearly every corner of “image-based sexual abuse” – whether real or AI, child or adult – is explicitly illegal. Defence solicitors must stay up to date because cases may involve brand-new laws or charges. If you are investigated, you need a legal team who understands these developments and can navigate the intricacies (for example, challenging evidence of who created an image, or whether the prosecution can prove intent if the law requires it).  

If You’re Under Investigation for AI-Generated Images 

Facing a criminal investigation for indecent images or deepfake offences is incredibly daunting. If AI-generated images are involved, the case can be complex, but a strong defence is possible. Here’s what you need to know and do: 

  1. Seek Specialist Legal Advice Immediately: As soon as you suspect you’re under investigation (police contact, device seizure, etc.), contact a specialist solicitor experienced in indecent images and cyber offences. These cases involve technical evidence (forensic computer analysis, possibly AI tools usage logs) and rapidly evolving law, so a specialist is essential. Do not attempt to “explain it away” to police without a lawyer present. Even seemingly innocuous statements (like “I was just experimenting with an AI model”) can seriously harm your defence later. 
  2. Understand the Scope of Potential Charges: Based on current law, you could face multiple charges: for instance, a charge of “making indecent images” if any AI child images are found, and, if adult deepfakes of specific people are found, charges under the intimate image abuse offences for sharing (if applicable) or under the new creation offence. Each has different elements (for example, the deepfake creation offence requires proof that you lacked the person’s consent and acted for sexual gratification or to cause distress). A seasoned defence lawyer will parse these details and may argue that some images don’t meet a particular offence definition.  
  3. Preserve Evidence and Context: Suspects in AI image cases sometimes also have legal but contextually relevant materials – for example, AI “nudifying” apps, image editing software, or benign images that were used as bases. Don’t delete or destroy anything after the fact (that can be a separate offence and make you look worse). Instead, let your solicitor know of anything that could help explain or contextualise the allegations. For example, if an AI model inadvertently produced something you didn’t intend, or if you were engaged in research, these could be important points to raise (though they may not excuse possession, they could influence charging decisions). 
  4. Expert Involvement: With complex computer evidence, defence teams often engage independent computer forensic experts who can review how images were created or stored, and AI experts if necessary. They might check whether image detection technology could mistakenly flag something (especially important if you genuinely didn’t know something was on your device). In deepfake cases, experts may be needed to discuss face-mapping techniques or authenticity, especially if there is a dispute over whether an image is a deepfake or genuine. This can raise reasonable doubt or affect the prosecution’s narrative. 
  5. Pre-Charge Representations: At the pre-charge stage, a proactive defence solicitor can make representations to the police or CPS. For example, your lawyer could highlight any lack of evidence of distribution, argue that images found might not meet the legal definition of “indecent” (in borderline cases), or argue that the case lacks a reasonable prospect of conviction. In other cases the evidence may be strong, but your legal team could argue that prosecution is not in the public interest and that an out-of-court disposal such as a caution may be suitable. The aim is to influence the charging decision, possibly persuading the prosecution to proceed with lesser charges or none at all if the case is weak. 
  6. Prepare for Ancillary Consequences: If charged, besides the core penalties, Sexual Offences Prevention Orders or Sexual Harm Prevention Orders often accompany indecent image convictions (restricting internet use, etc.), and Sex Offenders’ Register notification is typically mandatory.  

Ultimately, every case is unique. A robust defence will challenge the prosecution to prove every element, from proving the images are indecent by legal standards, to proving you made or possessed them knowingly. The evolving nature of AI may offer novel defence angles – perhaps questioning whether you had the intent to create illegal images if an AI unexpectedly generated them. A top defence team stays at the cutting edge of such arguments. 

Is it illegal to create AI-generated indecent images in the UK?

Yes. Under UK law, creating (or “making”) indecent images of children is illegal, even if those images are generated by AI. The Protection of Children Act 1978 covers any indecent photograph or “pseudo-photograph” of a child. A pseudo-photograph includes computer-generated images that look like a photo – which covers AI creations. In short, producing an AI child sexual abuse image is treated the same as producing a real one. 

Can you be charged for AI-generated CSAM?

Absolutely. CSAM means Child Sexual Abuse Material, and UK authorities will charge you for it regardless of how it was made. If you create, download or possess AI-generated child abuse images, you can be charged with offences like “making indecent images of children” or possession of indecent images. And if you share or distribute them, that’s an additional offence with very serious penalties (often custodial sentences).

How does the law treat AI deepfake indecent images?

Non-consensual sexual deepfakes (e.g. putting someone’s face into pornography) are now firmly within the law’s reach. Sharing or threatening to share a deepfake is illegal under the Online Safety Act 2023, which updated the Sexual Offences Act 2003 to cover images that “appear to show” a person in an intimate state without consent. Also, a new law in 2026 makes it an offence just to create a sexually explicit deepfake without consent (even if not shared). So the law treats deepfakes as a form of intimate image abuse – criminal if done without the depicted person’s permission.

Are AI-generated images classified the same as real images for sentencing?

Yes. For child indecent images, AI-generated pictures are generally treated the same as real ones during sentencing. They are categorised by severity (Category A, B, C) and sentenced according to guidelines based on those categories, number of images, etc. Courts do not usually give a “discount” just because an image is AI – they consider the content and the offender’s actions, which means AI Category A images attract sentences comparable to real ones. For adult deepfakes, penalties are evolving, but offences like sharing deepfakes currently carry up to 2 years in prison, and the new creation offence carries an unlimited fine.

What is the Online Safety Act’s position on AI indecent images?

The Online Safety Act 2023 mainly tackled non-consensual intimate images (often involving adults). It ensures that deepfake intimate images are covered by law – specifically by expanding offences to include images that “appear” to show someone in a sexual context. It also obliges social media platforms to swiftly remove such content. For child AI images, the Act didn’t directly change child abuse image laws (those were already strong), but it complements them by strengthening the overall online enforcement environment.

Can police detect AI-generated indecent images?

Yes, and they are constantly improving methods to do so. While AI images can look very realistic (even fooling the eye), forensic tools and proactive monitoring (like by the IWF) are catching many AI-generated images. If illegal images are on your devices, standard digital forensics will find them just as easily as any other file. Also, AI creation often leaves tell-tale traces (like software or prompts), so investigations can uncover the source. In short, don’t assume an AI-generated image is “invisible” to police – it isn’t.

What are the penalties for creating AI-generated indecent images?

If we’re talking about AI-generated indecent images of children, the penalties are the same as for any indecent images of children. Making or distributing such images (pseudo-photos) can lead to up to 10 years’ imprisonment on indictment. Possessing them carries up to 5 years (under s.160 CJA 1988). Sentences depend on quantity and category (for example, producing a large volume of Category A AI images could draw many years in prison). For creating a sexual deepfake of an adult, the new offence carries an unlimited fine, and if you also shared it, up to 2 years in prison for the sharing offence. Additionally, any conviction will likely require sex offender registration, and often court orders restricting internet use.

Does the Protection of Children Act cover AI images?

Yes. The Protection of Children Act 1978 (UK) explicitly covers “pseudo-photographs” of children. A pseudo-photograph is basically an image that looks like a photo of a child but isn’t (e.g. generated by AI or Photoshop). The law says such images shall be treated as showing a child even if they’re artificial. So all the offences in that Act (like making or distributing indecent images) apply fully to AI-created images of children.

What should I do if I’m accused of creating AI indecent images?

Seek specialist legal advice immediately. Do not attempt to handle it alone or discuss the case with anyone except your lawyer. A solicitor experienced in indecent images and cybercrime defence can guide you on cooperating with police safely (e.g. during interviews), ensure your rights are protected, and start building a defence or mitigation. Early legal intervention is crucial: a lawyer might engage with investigators pre-charge, possibly influencing whether charges are brought at all. They’ll review the evidence (including forensic computer data) and consider any defences (like lack of intent or knowledge). Remember: being proactive with a skilled defence team often makes a significant difference in outcomes.

How are deepfakes different from other AI-generated images in law?

In legal terms, the main difference comes down to context and consent. AI-generated images of children (CSAM) are illegal outright, no matter what. Deepfakes of adults centre on consent – if you make or share a sexual deepfake of someone without their consent, it’s illegal (thanks to new and existing laws). If you had the person’s consent (for example, a consensual deepfake in a safe role-play scenario, or a parody they agreed to), the offence wouldn’t apply. Another difference: deepfakes often implicate a specific victim (the person whose likeness is used), so offences focus on protecting that person’s dignity. AI-generated child images are more abstract in that the “victim” is broader (children in general and society). But in both cases, the law treats the end result – an indecent image – as criminal. The big picture is not about the tech, but about protecting people from sexual exploitation.

Olliers Solicitors – specialist AI indecent image and intimate image solicitors  

The emergence of AI in the indecent image arena is a major challenge for the law, but as we’ve seen, UK law is rapidly adapting with a clear message: AI is no sanctuary for illegal content. For individuals, the takeaway is simple: don’t assume “no real victim” means no crime – it does not. All the traditional offences (and penalties) around indecent images apply to AI creations too, and new offences are expanding the net further.  

If you’re reading this out of concern over something you have done or been accused of, remember that specialist help is available. Olliers’ indecent images defence team has extensive experience with these sensitive cases, including those involving cutting-edge technology. We approach every case confidentially, non-judgmentally, and proactively – whether it’s engaging early with the authorities or robustly defending you in court. 

If you are facing an investigation involving AI-generated images (child or adult), early, expert legal advice is essential. Our team can provide clarity on the law, guide you on your options, and fight to protect your rights and your future. 

For a confidential consultation, call our Indecent Images & Sexual Offences team on 0161 834 1515 (Manchester) or 020 3883 6790 (London), complete the web enquiry form below or email info@olliers.com. 
