How Cyber Criminals Are Using AI to Target UK Businesses


The rapid rise of artificial intelligence (AI) brings incredible opportunities and unprecedented challenges, empowering innovators and cybercriminals alike. Businesses of all sizes in the UK are grappling with this reality as AI-driven cybercrime evolves to outpace traditional cybersecurity measures. From automated phishing schemes to deepfake scams, cybercriminals exploit AI to disrupt operations, steal data, and undermine trust.

This article looks at how cybercriminals use AI to target vulnerable businesses, the consequences for UK organisations, and actionable steps to counter these growing threats.

The Rise of AI in Cyberattacks Worldwide

AI-powered threats rose dramatically worldwide in 2024 and early 2025, and this trend is expected to continue and evolve throughout 2025.

Cybercriminals are leveraging AI to enhance their capabilities in several ways:

  • Advanced Phishing: AI enables the creation of highly convincing phishing attacks, making them harder to detect.
  • Malware Development: Threat actors use AI to accelerate vulnerability discovery and develop sophisticated evasion techniques for malware.
  • Deepfakes: AI-generated deepfakes are being used in more sophisticated social engineering attacks.
  • Automated Attacks: AI-driven bots drove a 60% increase in website bot attacks by the end of 2024.

The integration of AI into cyberattacks has resulted in a significant surge in attack frequency and scale:

  • In Q3 2024, organisations faced an average of 1,876 weekly cyberattacks, a 75% increase from the same period in 2023.
  • DDoS attacks increased by 41% in 2024.
  • By the end of 2024, 9 out of 10 websites had encountered bot attacks.

As we move into 2025, new AI-powered threats are emerging:

  • AI Agents and Multi-Agent Systems: Threat actors are expected to use AI agents for surveillance, initial access brokering, and vulnerability exploitation.
  • Specialised Language Models: Cybercriminals will likely develop and use specialised language models for more targeted and effective attacks.

The UK has been significantly impacted by global cybercrime trends, with 2024 seeing a substantial increase in cyberattacks and 2025 expected to bring even more challenges. Key impacts include:

  • Rising Cyber Incidents
    • Nearly half of all UK organisations reported a cyber security breach in 2024. [Burness Paull]
    • UK businesses faced approximately 7.78 million cybercrimes in 2023-24, averaging 21,315 cyberattacks per day. [Ninja One]
    • By Q1 2024, the number of cyber incidents reported to the ICO had increased by 21% compared to the same period in 2023. [Egress]
  • Financial Losses:
    • In 2024, the average cost of the most disruptive breach or attack for medium and large UK businesses was £10,830. [Ninja One]
    • From June 2022 to May 2023, cyber-enabled crimes against individuals in the UK resulted in financial losses of over £890 million, averaging £4,500 per individual.
  • Sectoral Impact:
    • Education sector: 347 cyber incidents were reported in 2023, a 55% increase from 2022. [Sharp]
    • Essential services: Southern Water experienced a significant data breach in February 2024. [Sharp]
    • Medium and large businesses were the most affected, with 45% of medium-sized and 58% of large businesses experiencing cybercrime in the 12 months to April 2024.
  • Emerging Threats:
    • AI-powered attacks: Threat actors increasingly use AI to streamline phishing attacks, manipulate content, and launch sophisticated business email compromise attacks.
    • Ransomware: Despite some successful law enforcement operations, ransomware remains a significant threat.

The UK government has responded with measures such as the Cyber Security and Resilience Bill, to be introduced to Parliament in 2025, and new laws protecting consumers from cybercriminals, focusing on smart device security.


Methods Cybercriminals Use to Target UK Businesses with AI

AI-Powered Phishing and Spear Phishing

AI has transformed phishing attacks into highly effective cyber threats. Hackers now employ generative AI to craft personalised emails that closely mimic legitimate communications. These emails are tailored to the recipient’s specific role, incorporating scraped public data, such as names, job titles, and recent activities, to enhance their authenticity.

In 2024, 67.4% of phishing attacks globally relied on AI tools such as large language models (LLMs) and chatbots to create scalable, convincing messages. These AI-driven tactics make phishing emails harder for employees to detect, and phishing remains the most prevalent cyber threat in the UK, with 84% of businesses identifying it as a primary concern in the same year. [Egress]

AI has also advanced spear phishing campaigns, which target senior executives with highly customised emails. By analysing behavioural patterns and preferences, hackers can predict responses and craft messages that are nearly indistinguishable from legitimate communications. This level of precision significantly increases the success rate of such attacks.

Deepfake Technology and Business Scams

Deepfake technology poses a growing threat. Cybercriminals use AI-generated voices or videos to impersonate senior executives, convincing employees to authorise payments or share sensitive data. These scams, often called “CEO fraud,” have caused significant financial losses for UK businesses.

One high-profile case involved a deepfake audio call that mimicked the voice of an energy firm's CEO, tricking an employee into sending funds to a Hungarian supplier. The scam cost the targeted company £220,000 before it was detected. Training employees to verify requests through independent channels is essential to mitigate this risk.

In addition to financial scams, deepfakes are being used for corporate espionage. Hackers create fake videos or audio to discredit competitors, manipulate stock prices, or extract sensitive information. These tactics can cause long-term damage to a company’s reputation and market position.

Ransomware with AI Automation

AI enhances ransomware attacks by automating file encryption and improving the malware’s ability to evade detection. UK businesses, especially SMEs, are increasingly targeted because they often lack the resources to implement advanced cybersecurity measures.

According to Statista, ransomware attacks caused UK businesses an estimated £3.4 million in losses in 2024. These attacks often start with phishing emails, followed by malware that encrypts critical systems, halting operations until a ransom is paid.

AI-driven ransomware can also learn from previous attacks to optimise future campaigns. For instance, by analysing which industries are most likely to pay ransoms, hackers can prioritise their targets and maximise profits.


Consequences of AI-Driven Cybercrime for UK Businesses

Financial Losses and Business Disruption

UK businesses lost an estimated £44 billion in revenue due to cyberattacks over the five years leading up to 2024. In 2023 alone, the total cost of cybercrime to UK businesses reached £30.5 billion, a 138% increase from 2019. [Beaming]

For SMEs, even a minor disruption can be catastrophic. A ransomware attack can paralyse operations for days, causing significant revenue loss and risking permanent closure. According to Made In Britain, 60% of small companies go out of business within six months of a cyberattack.

Damage to Reputation and Trust

Cyberattacks erode trust. When customer data is compromised, businesses face public backlash, loss of clients, and damage to their brand image.

Building back trust requires transparency and robust cybersecurity measures to reassure stakeholders. Companies must also comply with GDPR, which mandates timely breach notifications and data protection measures.

Increased Vulnerability and Data Protection Challenges

Nearly half of all UK organisations reported a cyber security breach in 2024, with 87% classified as ‘vulnerable’ to cyberattacks. Alarmingly, only 27% of UK organisations are using AI to strengthen their cybersecurity measures. [Microsoft]

Data breaches have become a significant concern, with up to 20.4 million people in the UK having their data compromised in 2023 cyberattacks on financial services companies. This highlights the urgent need for businesses to adopt stronger data protection measures and implement zero-trust security models.

How UK Businesses Can Protect Themselves Against AI Cyber Threats

Enhance AI-Powered Defences

  • Implement AI-driven security solutions to detect and respond to threats more quickly and effectively.
  • Utilise AI for automated threat intelligence gathering and analysis.
  • Deploy machine learning algorithms to identify anomalies and potential security breaches in real time.
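To make the anomaly-detection idea concrete, here is a minimal sketch of the underlying principle: flag activity that deviates sharply from a historical baseline. Production tools use far richer machine-learning models; this example substitutes a simple z-score test, and the login counts are entirely hypothetical.

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, threshold=3.0):
    """Return True when `observed` deviates from the baseline
    by more than `threshold` standard deviations (z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical daily login counts for the past nine days.
normal_days = [102, 98, 110, 105, 99, 104, 100, 97, 103]

print(is_anomalous(normal_days, 108))   # a typical day: False
print(is_anomalous(normal_days, 2500))  # possible credential stuffing: True
```

The same pattern — learn what "normal" looks like, then score each new observation against it — underpins the AI-driven detection platforms mentioned above, just with many more signals than a single login count.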

Strengthen Data Protection

  • Implement robust data encryption and access controls to safeguard sensitive information.
  • Regularly back up critical data and test recovery procedures.
  • Adopt a zero-trust security model to limit data access and minimise potential damage from breaches.
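"Test recovery procedures" should mean more than confirming a backup file exists: a restore only counts if the recovered data matches the original byte for byte. The sketch below illustrates one common way to automate that check with SHA-256 checksums; the file names are purely illustrative.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backups fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(original: Path, restored: Path) -> bool:
    """A restore test passes only if checksums match exactly."""
    return sha256_of(original) == sha256_of(restored)

# Demo: temporary files stand in for a real backup set.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "payroll.db"
    restored = Path(tmp) / "payroll_restored.db"
    src.write_bytes(b"critical customer data")
    restored.write_bytes(b"critical customer data")
    print(verify_backup(src, restored))  # True
```

Running a check like this on a schedule, against backups restored to an isolated environment, turns "we have backups" into "we have backups we know we can recover from" — the distinction that matters most during a ransomware incident.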

Improve Employee Training

  • Conduct regular cybersecurity awareness training for all staff, focusing on AI-related threats.
  • Educate employees about advanced phishing techniques and social engineering tactics enhanced by AI.
  • Implement simulated AI-powered phishing attacks to test and improve employee vigilance.
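The red flags that training should drill — urgency language, look-alike sender domains, links pointing at raw IP addresses — can also be scored mechanically. The sketch below is a deliberately crude heuristic for ranking simulated phishing emails in a training exercise, not a detection product; the keywords, weights, and the sample addresses are all invented for illustration.

```python
import re

# Illustrative urgency keywords; real training platforms use far larger lists.
URGENCY = {"urgent", "immediately", "verify", "suspended"}

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Count simple phishing indicators to rank simulated training emails."""
    score = 0
    text = f"{subject} {body}".lower()
    score += sum(1 for word in URGENCY if word in text)
    # Digits in the sender's domain often signal look-alikes (e.g. micr0soft).
    if re.search(r"\d", sender.split("@")[-1]):
        score += 2
    # Links to raw IP addresses rarely appear in legitimate mail.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2
    return score

print(phishing_score(
    "it-support@micr0soft-helpdesk.com",
    "URGENT: account suspended",
    "Verify immediately at http://192.168.4.20/login",
))  # 8
```

Showing employees *why* an email scored highly — which specific indicators fired — tends to reinforce the lesson better than a simple pass/fail on a simulated click.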

Adopt Regulatory Compliance

  • Stay informed about the upcoming Cyber Security and Resilience Bill expected in 2025.
  • Prepare for the implementation of new AI regulations and governance frameworks.
  • Conduct regular compliance audits and risk assessments to ensure adherence to evolving standards.

Enhance Incident Response

  • Develop and regularly update an AI-specific incident response plan.
  • Conduct tabletop exercises to test and improve response capabilities against AI-driven attacks.
  • Establish clear communication protocols for reporting and managing AI-related security incidents.

Invest in AI Governance

  • Establish ethical guidelines and governance frameworks for the responsible use of AI technologies within the organisation.
  • Implement AI risk assessment procedures to identify and mitigate potential vulnerabilities.
  • Regularly review and update AI systems to ensure they remain secure and compliant with evolving regulations.

AI is reshaping the cybercrime landscape, enabling hackers to launch more sophisticated and damaging attacks. UK businesses, particularly SMEs, must act swiftly to address these emerging threats by investing in training, robust cybersecurity infrastructure, and partnerships with trusted agencies.

The growing threat of AI-driven cybercrime is a call to action for businesses to prioritise cybersecurity as a core part of their operations. Companies can safeguard their future in an increasingly digital world by taking proactive measures today.


Ben Sefton

Ben Sefton is the co-founder of Insightful AI, specialising in strategic AI adoption, ethical frameworks, and digital transformation. With a background in forensic investigation and leadership, Ben draws on nearly two decades of experience to help businesses harness AI for innovation and efficiency.
