Regulating AI in the UK

Written 14th August 2025 by James Claughton

Artificial intelligence is no longer just a futuristic concept. It’s part of our daily lives. From detecting fraud to unlocking our phones with a glance, AI is everywhere. But as it becomes more embedded in society, the need to regulate it both ethically and legally becomes more urgent.

In 2025, the EU introduced specific AI legislation. The UK, however, is taking a different route by adopting a more flexible, sector-by-sector approach.

AI in Relation to Criminal Law

Criminal law is built on two key ideas: actus reus, the guilty act, and mens rea, the guilty mind. Both assume a human is behind the crime. However, it is unclear what happens when an AI system causes harm without direct human involvement.

The UK has started to address this. The Economic Crime and Corporate Transparency Act 2023 introduced a new offence called 'failure to prevent fraud', which is due to come into force soon. This means companies will be held responsible for not stopping fraud, even if it is carried out by or through AI. It is a step toward holding organisations accountable in an AI-driven world.

Different Approaches Taken by the EU and UK

The EU’s AI Act, which entered into force in August 2024 with key obligations applying from August 2025, is the first legislation of its kind. It classifies AI systems by risk. Prohibited systems include practices such as social scoring and certain forms of biometric surveillance. High-risk AI, used in areas like healthcare, policing, and hiring, is subject to strict rules. Lower-risk AI faces fewer requirements but must still meet basic standards. The EU’s approach is cautious, prioritising public safety and ethics over rapid innovation, and it provides for substantial financial penalties for serious breaches.

The UK, on the other hand, is taking a lighter-touch approach while enacting new provisions relevant to AI. The Data (Use and Access) Act 2025, passed in June and due to come into force imminently, focuses on how data is shared and protected. It sets the stage for AI regulation but does not directly address AI-specific risks. A broader AI Bill is in development, but it is still in progress. The UK is relying more on flexibility and industry self-regulation, though critics worry this could leave dangerous gaps, especially for high-risk AI.

UK Laws in Force or Due to Come into Force, and Their Impact on AI

Although there is no AI-specific statute, several UK acts already in force, or due to come into force soon, shape how AI is used. The Online Safety Act 2023, whose duties began taking effect in March 2025, regulates AI in content moderation, targeting misinformation, deepfakes, and harmful content. The Data (Use and Access) Act 2025, which received Royal Assent in June 2025 and is being implemented in stages, sets rules for data sharing and digital identity and updates privacy protections. The 'failure to prevent fraud' offence under the Economic Crime and Corporate Transparency Act 2023 comes into force on 1 September 2025 and will hold companies accountable for AI-enabled fraud. Together, these laws form a patchwork of regulation, but without a unified framework, significant gaps remain.

The JUSTICE Framework

The legal reform group JUSTICE has proposed a framework for using AI in the criminal justice system, built on three principles. First, AI should improve access to justice, not just efficiency. Second, risks such as bias, lack of transparency, and weak accountability must be addressed. Third, AI decisions should be explainable and open to review. This framework has already influenced the 2025 Sentencing Review, which is pushing for fairness in AI-assisted sentencing.

The UK AI Bill

The Artificial Intelligence (Regulation) Bill was reintroduced in March 2025. It aims to create an AI Authority, define ethical principles, require risk assessments for high-risk AI, and promote transparency and public engagement. However, it is a Private Member’s Bill, which means it does not have government backing. Without stronger political support, it may not pass, potentially leaving the UK behind in managing AI’s ethical and societal impacts.

Risks of AI

As AI evolves, so do the risks. Some of the biggest concerns include AI-powered financial crime, such as personalised scams and automated money laundering. There is also the threat of weaponised autonomous vehicles, like self-driving drones or cars used in attacks. Furthermore, synthetic identities (AI-generated personas) could be used for crimes such as fraud, manipulation, or espionage.

These challenges raise tough questions: should AI systems have legal rights or responsibilities? Can developers be held liable for unintended harm? And how can we ensure fairness in AI-driven justice?

The future

AI is reshaping the legal landscape with different approaches being taken. The EU is taking a cautious, safety-first approach, whereas the UK is prioritising innovation. However, without a clear, unified legal framework, the UK could fall short, especially when it comes to high-risk AI.

The path forward requires collaboration between lawyers, technologists, policymakers, and the public. The UK has a real opportunity to lead by creating a regulatory model that is ethical, forward-looking, and built on public trust.

If you would like to discuss how Olliers can proactively assist you in relation to a criminal allegation, please contact our new enquiry team by email at info@olliers.com, by telephone on 020 3883 6790 (London) or 0161 834 1515 (Manchester), or by completing the form below, and our new enquiry team will contact you.
