Voice Recognition Fraud Offences

Written 1st August 2025 by Ruth Peters

As technology continues to evolve, so do the methods used by fraudsters. One of the more concerning methods to emerge in recent years is voice recognition fraud – a sophisticated and potentially devastating tool for scammers.

At Olliers Solicitors we are increasingly aware of the need to advise clients facing allegations involving digital fraud.  

What is Voice Recognition Fraud?  

Voice recognition fraud refers to scams where a person’s voice is used (or mimicked) to gain access to personal, financial or secure information.   

This can occur in a number of ways:  

  • Voice Cloning: Using AI-driven technology, scammers can now replicate a person’s voice using short samples from phone calls, voicemails or even social media videos. These clones can be used to impersonate someone in calls to banks, employers or family members. 
  • Trigger Phrase Theft: Some banks and service providers use voice biometrics (e.g. “My voice is my password”). Scammers may attempt to capture these voice prints through phishing calls or by tricking victims into saying certain words that can later be pieced together to pass security checks.  
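Loosely speaking, voice biometric systems reduce a sample of speech to a numeric "voiceprint" and compare it against an enrolled template, accepting the caller if the two are similar enough. The toy sketch below (hypothetical feature vectors and an illustrative threshold, not any real bank's system) shows why a sufficiently faithful AI clone can pass such a check: the clone's voiceprint lands inside the same similarity margin as the genuine speaker's.

```python
import math

def cosine_similarity(a, b):
    """Compare two voiceprint feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def passes_voice_check(enrolled, candidate, threshold=0.95):
    """Naive check: accept the caller if their voiceprint is close enough."""
    return cosine_similarity(enrolled, candidate) >= threshold

# Hypothetical 4-dimensional voiceprints (real systems use far larger vectors)
enrolled = [0.62, 0.18, 0.91, 0.40]   # customer's stored template
genuine  = [0.60, 0.20, 0.90, 0.42]   # same speaker on a later call
cloned   = [0.61, 0.19, 0.90, 0.41]   # AI clone trained on a short sample
impostor = [0.10, 0.85, 0.22, 0.70]   # unrelated human impostor

print(passes_voice_check(enrolled, genuine))   # True
print(passes_voice_check(enrolled, cloned))    # True - the clone also passes
print(passes_voice_check(enrolled, impostor))  # False
```

The point of the sketch is that the system cannot distinguish "close because it is the same person" from "close because it is a good synthetic copy" – which is why captured voiceprints and cloned audio are so valuable to fraudsters.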

How Are These Scams Carried Out?  

Voice recognition scams are often part of larger phishing or social engineering schemes, where the fraudster first builds a profile of the victim using publicly available information. Once a voice sample is obtained – sometimes as little as a few seconds of audio – it can be fed into software that replicates the speaker’s patterns convincingly.

Scams may include:  

  • Impersonation of family members requesting emergency funds  
  • Faked bank or service provider calls using the victim’s voice to bypass authentication  
  • Deepfake voicemails used to support fraudulent requests or instructions  

Voice Recognition Fraud Case Example   

In May 2024, it was confirmed that Arup, a British multinational engineering firm headquartered in London, had been targeted by a sophisticated AI-driven scam. A staff member in the company’s Hong Kong office was tricked into transferring HK$200 million (approximately £20 million) after interacting with a highly convincing deepfake of the company’s Chief Financial Officer, which combined synthetic video and audio. While not the first case of AI-enabled fraud, this incident stands out as one of the most financially damaging to date, underscoring the growing risks posed by generative AI and deepfake technologies in cybercrime.

Current Legislation Relating to Voice Recognition Fraud

While there is no single piece of legislation specifically targeting voice recognition fraud, several existing laws can be used to prosecute such offences:  

  1. Fraud Act 2006 

This is the primary legislation used to address fraudulent activity in the UK.  

Voice recognition fraud typically falls under:  

  • Section 2: Fraud by false representation.  
  • Section 6: Possession of articles for use in frauds.  
  2. Computer Misuse Act 1990 

This act criminalises unauthorised access to computer systems, which can include using voice-based authentication to gain illicit access.  

  3. Data Protection Act 2018

The misuse of biometric data, including voiceprints, could constitute a breach of data protection laws, especially if the data is obtained or processed without consent.  

  4. Investigatory Powers Act 2016 

This act governs the interception of communications and could be relevant in cases where voice data is unlawfully intercepted or recorded.  

Emerging Threats and the Role of AI  

The increasing sophistication of AI tools has made it easier for criminals to generate realistic synthetic voices.  

According to the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), the UK is currently ill-equipped to handle the scale and complexity of AI-enabled crime. 

Ardi Janjeva, senior research associate at the Alan Turing Institute and an author of the report, said:  

“As AI tools continue to advance, criminals and fraudsters will exploit them, challenging law enforcement and making it even more difficult for potential victims to distinguish between what’s real and what’s fake. It’s crucial that agencies fighting crime develop effective ways to mitigate this including combatting AI with AI.” 

The Proposed AI Crime Taskforce  

One of the key recommendations of the CETaS report is the creation of a dedicated AI Crime Taskforce within the NCA’s National Cyber Crime Unit (NCCU). 

Key Objectives of the Taskforce (as proposed):  

  • Monitor and investigate AI-enabled fraud, including voice cloning and deepfake scams.  
  • Develop AI tools to detect and counter synthetic media.  
  • Collaborate with tech companies to improve biometric security standards.  
  • Train law enforcement in AI literacy and digital forensics.  

The NCA has stated it is “closely examining” these recommendations and is already exploring the use of AI to empower crime fighters, led by Alex Murray, the UK’s first national lead for policing AI. 

Alex Murray 

Speaking about his new role within the NPCC and the importance of technological advancements in policing, he said: 

“Artificial Intelligence is here, and it is developing fast.  

The police is at the forefront of protecting communities by bringing justice and preventing crime in the first place. Our remit is huge, from tackling online child abuse, organised crime through to preventing burglary and reassuring the public.  In all these areas AI can make us more effective – it can be a tool for good. 

In addition, it can allow policing, which is ultimately paid for by the public, to be more efficient and productive.” 

He added: “I understand there are many fears and misconceptions around AI and my role is to make sure policing, our partners and our communities are well informed and kept up to date about the systems and digital technologies we are investing in.  

“Trust and confidence are central to successful policing so it’s important we are open and transparent about what we are piloting and testing and as well as being honest about how AI works.  Policing has a covenant for the use of AI that places transparency and fairness at the heart of AI development. 

We will continue to work closely with partners, including the Office of the Police Chief Scientific Adviser, and take advice from academia, industry and remain receptive and open to criticism. 

Policing is in a challenging period and AI presents opportunities for forces to test new ideas, be creative and seek innovative solutions to help boost productivity and be more effective in tackling crime. 

The police can choose to ignore developments and be left behind or embrace innovation and better protect the public. UK Policing is choosing to lead in this area.” 

What’s Next for AI Fraud?  

With AI-driven fraud becoming more advanced, we expect to see more complex prosecutions involving voice cloning and impersonation. Individuals may find themselves wrongly accused due to manipulated audio evidence, or alternatively, under investigation for allegedly using these tools in fraudulent activity.  

The UK government is expected to introduce more targeted legislation in the coming years, potentially including:  

  • AI-Specific Criminal Offences: Laws that directly address the malicious use of AI, including synthetic voice fraud.  
  • Biometric Security Standards: Regulatory requirements for companies using voice authentication.  
  • Mandatory Reporting: Obligations for financial institutions and tech firms to report suspected AI-enabled fraud.  

Facing Voice Recognition Fraud Allegations? Speak to Our Fraud Defence Experts  

At Olliers, we have extensive experience of advising clients involved in digital and fraud-related investigations, including complex and high-profile fraud cases. If you or someone you know is under investigation for a fraud-related offence, including alleged involvement with voice recognition or AI-based scams, contact Olliers Solicitors for expert legal advice. We understand the nuances of digital evidence and evolving technology, and where AI or voice technology is involved, we recognise the need for expert analysis, digital forensics and a robust defence strategy that reflects the complex technical nature of the evidence. 

We have offices in both London and Manchester and our specialist team of lawyers can advise and represent you in relation to your case. If you would like to discuss how we can proactively assist you in relation to your case, contact us by telephone on 0161 834 1515, by email to info@olliers.com or complete the form below and we will contact you. 

Ruth Peters

Business Development Director
