AI Enabled Crime

Written 27th August 2025 by James Claughton

Artificial Intelligence (AI) is transforming everyday life, from how we work and communicate to how we shop and consume media. However, alongside its many benefits, AI is also being used in increasingly sophisticated ways to commit crime. As a result, more people are finding themselves caught up in allegations involving AI technologies, often in complex circumstances. This article explores the various types of AI-enabled offences which have emerged in recent years and explains how Olliers Solicitors can support individuals facing allegations in this rapidly evolving area of law. 

What Is AI-Enabled Crime? 

AI-enabled crime refers to offences that are facilitated by artificial intelligence. This includes scams involving deepfake videos and cloned voices, convincing phishing attacks, and automated hacking tools, amongst others. These crimes take advantage of AI’s ability to learn patterns, generate highly realistic content, and interact with people in a way that appears genuine, tricking them into providing sensitive information. AI-enabled crime can be much harder to detect than traditional cybercrime, and as the technology develops, so do the risks it poses. 

Types of AI-Enabled Crime 

Deepfakes and Non-Consensual Explicit Content 

AI can create highly realistic fake videos and images, known as deepfakes. Deepfakes can be used to impersonate people, spread false information, and blackmail victims. They can also be used to create sexually explicit material by digitally altering real photos or videos without consent. The Crime and Policing Bill (yet to receive Royal Assent) aims to criminalise the creation and distribution of such content, with sentences of up to two years in prison. These proposals build on the Online Safety Act 2023, which already makes it illegal to share intimate images without consent. 

Voice Cloning and Financial Fraud

AI can replicate someone’s voice from just a few seconds of audio. This technology has been used in scams where fraudsters impersonate company executives or family members to request bank transfers, often convincingly enough to fool victims. 

There are no specific UK laws targeting voice cloning, but these offences are typically prosecuted under the Fraud Act 2006 and the Computer Misuse Act 1990. 

Identity Theft

AI can generate fake digital identities which appear highly authentic and can bypass security checks. These profiles are used to open bank accounts, apply for credit, and commit financial fraud. Reports indicate a 500% increase in high-risk fake identities in the UK since 2020. 

AI Phishing

AI can create phishing emails that copy a company’s corporate branding and even its tone, often impersonating senior staff. These are frequently highly convincing, with both junior and senior members of staff falling victim to such phishing attempts. The UK has seen a sharp rise in these attacks, with a 60% global increase reported in 2024 alone.  

Ransomware and Automated Cyberattacks

Criminal groups are using AI to automate cyberattacks, identify system vulnerabilities, and deploy ransomware. These attacks can have a major impact on organisations, stealing sensitive data and demanding large ransoms to stop the attack. The National Cyber Security Centre (NCSC) warns that AI will “almost certainly” increase the frequency and impact of cyber intrusions. 

Romance and Employment Scams

AI-generated profiles are used to deceive victims into fake romantic relationships or job offers. These scams can result in financial loss, emotional harm, and identity theft. 

Misinformation and Online Harassment

Generative AI is being used to mass-produce fake news, conspiracy theories and abusive content. These are sometimes targeted at individuals or groups to incite harassment or manipulate public opinion. The UK government has issued guidance for electoral candidates on how to respond. 

False Emergency Calls and SIM Swap Fraud

AI-generated voices have been used to make false emergency calls. Criminals also use SIM swap fraud to hijack mobile numbers, bypass multi-factor authentication, and gain access to sensitive accounts. 

How Olliers Can Help 

AI-enabled crime is a rapidly developing and complex area of law. As AI continues to advance, so too do the ways it can be used to commit crime. The UK government is taking steps to legislate in this area, but such efforts are still at an early stage. Given that this is an evolving area, and one in which investigating and prosecuting authorities are on occasion likely to still be learning as they go along, it is critical to obtain legal advice and assistance as soon as possible. At Olliers Solicitors, we understand the legal and technological challenges these cases present and are well equipped to assist those who may face investigation or prosecution for alleged AI-enabled crime. 

If you would like to discuss how Olliers can proactively assist you in relation to a criminal allegation, please contact our new enquiry team either by email at info@olliers.com, by telephone on 020 3883 6790 (London) or 0161 834 1515 (Manchester), or by completing the form below and our new enquiry team will contact you.

James Claughton

Solicitor
