AI-Powered Cyberattacks and Data Theft: How Attackers Use AI and How to Fight Back
In January 2024, Arup — the multinational engineering firm — lost $25 million after an employee attended a video call where every other participant was a deepfake. The "CFO" and "colleagues" were AI-generated in real time, convincing enough that the employee authorised the transfer of HK$200 million (roughly US$25 million). This is not a hypothetical future threat. AI-powered social engineering, automated vulnerability exploitation, and polymorphic malware that rewrites itself to evade detection are already being deployed at scale by cybercriminal groups.
AI-Generated Phishing at Scale
Large language models have eliminated the two signals that traditionally identified phishing emails: poor grammar and generic messaging. AI-generated phishing emails are grammatically perfect, contextually relevant, and personalised using publicly available data from LinkedIn, company websites, and social media. IBM X-Force's 2024 research demonstrated that AI-generated phishing emails achieved a 47% click-through rate compared to 12% for human-crafted equivalents. More critically, AI enables attackers to generate thousands of unique, personalised phishing emails per hour — each one different enough to bypass email security tools that rely on pattern matching or known signatures.
Automated Reconnaissance and Vulnerability Discovery
AI agents can now autonomously scan organisations' external attack surfaces, identify potential vulnerabilities, and craft targeted exploits — tasks that previously required skilled human operators and days of manual work. Research from the University of Illinois demonstrated in 2024 that GPT-4 could autonomously exploit 87% of real-world CVEs when provided with the vulnerability description. Tools like PentestGPT automate the penetration testing workflow. While these tools were created for defensive purposes, the same capabilities are available to attackers who operate without ethical constraints. The reconnaissance phase of cyberattacks — traditionally the most time-consuming — is being compressed from weeks to hours.
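To make the reconnaissance phase concrete, the sketch below shows one of its most basic primitives: probing a host for open TCP ports. It is plain Python with no AI involved, and the example hostname is hypothetical; an autonomous agent would chain many such probes together with service fingerprinting and vulnerability lookups.

```python
import socket

def check_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str, ports: list[int]) -> list[int]:
    """Report which of the given ports accept connections."""
    return [p for p in ports if check_port(host, p)]

# Example (hypothetical host): survey a handful of common service ports
# on a system you are authorised to test.
# open_ports = scan("scanme.example", [22, 80, 443, 3389])
```

The point is not the code's sophistication but its speed: what once took an operator hours of manual probing is a loop that runs in seconds, which is why defenders should assume their external attack surface is enumerated continuously.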
Polymorphic Malware and Evasion
Traditional antivirus and endpoint detection tools rely on signatures — known patterns of malicious code. AI-powered polymorphic malware rewrites its own code on each execution, producing functionally identical but syntactically unique variants that no signature database has seen before. Researchers at HYAS Infosec demonstrated BlackMamba, a proof-of-concept keylogger that uses LLM APIs to dynamically rewrite its payload at runtime, producing a new variant with each execution. The malware evaded every major endpoint detection platform during testing. When the malware itself can think, static detection models become fundamentally inadequate.
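A minimal sketch of why per-sample hashing fails, using a harmless stand-in for a payload. Each "mutation" below only renames a symbol and appends a junk comment, so behaviour is unchanged, yet every variant hashes to a signature no database has seen. This is an illustration of the principle, not BlackMamba's actual technique.

```python
import hashlib
import random
import string

BASE = "def collect():\n    return 'payload-behaviour'\n"

def mutate(src: str) -> str:
    # Toy "polymorphism": rename the function and append a chaff comment.
    # Behaviour is identical, but the byte sequence is new each time.
    alias = "".join(random.choices(string.ascii_lowercase, k=12))
    return src.replace("collect", alias) + f"# chaff: {alias}\n"

def signature(src: str) -> str:
    # What a naive signature engine stores: a hash of the sample's bytes.
    return hashlib.sha256(src.encode()).hexdigest()

variants = [mutate(BASE) for _ in range(10)]
# Ten functionally identical samples, ten signatures never seen before.
print(len({signature(v) for v in variants}))
```

Real polymorphic malware applies far deeper transformations (instruction substitution, control-flow reshaping, LLM-driven rewriting), but the defensive implication is the same: matching bytes loses; analysing behaviour does not.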
BlackFog's AI vs AI Approach
BlackFog counters AI-powered threats by focusing on the outcome rather than the method. Regardless of whether the phishing email was crafted by a human or an LLM, regardless of whether the malware uses static code or polymorphic generation, the attacker still needs to exfiltrate data from the compromised device. BlackFog's behavioural AI analyses data egress patterns in real time, identifying and blocking exfiltration attempts based on the behaviour of the data transfer — destination risk scoring, transfer volume anomalies, protocol analysis, and temporal patterns — rather than matching against known attack signatures. This approach is inherently resilient to AI-powered attacks because it does not depend on recognising the attack tool.
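The general shape of behaviour-based egress scoring can be sketched as follows. This is an illustrative toy, not BlackFog's actual model: it combines a destination risk score, a transfer-volume z-score against recent history, and two contextual flags into a single block/allow decision, with all weights and thresholds invented for the example.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transfer:
    dest_risk: float        # 0..1, e.g. from a destination reputation feed
    bytes_out: int          # size of the outbound transfer
    off_hours: bool         # outside the device's normal activity window
    unusual_protocol: bool  # protocol rarely seen from this device

def volume_zscore(bytes_out: int, history: list[int]) -> float:
    """How anomalous this transfer's size is versus recent transfers."""
    mu, sigma = mean(history), stdev(history)
    return 0.0 if sigma == 0 else (bytes_out - mu) / sigma

def egress_score(t: Transfer, history: list[int]) -> float:
    # Invented weights: destination risk dominates, volume anomaly is
    # capped at z=3, contextual flags add fixed increments.
    z = max(0.0, volume_zscore(t.bytes_out, history))
    score = 0.4 * t.dest_risk + 0.3 * min(z / 3, 1.0)
    score += 0.15 if t.off_hours else 0.0
    score += 0.15 if t.unusual_protocol else 0.0
    return score

def should_block(t: Transfer, history: list[int], threshold: float = 0.6) -> bool:
    return egress_score(t, history) >= threshold
```

Because the decision keys on how data leaves the device rather than what tool initiated the transfer, the same logic fires whether the exfiltration was triggered by a hand-written script or an AI-generated one.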
Preparing for the AI Threat Landscape
Organisations cannot wait for AI-powered attacks to become more common before adapting their defences. Proactive preparation requires:
- Assume phishing will succeed: layer controls so that a successful phishing email does not automatically lead to data exfiltration
- Move beyond signature-based detection: adopt behavioural analysis tools that detect anomalous actions rather than known patterns
- Implement Anti Data Exfiltration: ensure that even if an AI-powered attack achieves initial access, it cannot extract data from the organisation
- Train employees on deepfake risks: establish out-of-band verification procedures for financial authorisations, especially video and voice calls
- Monitor AI tool usage: prevent employees from inadvertently exposing sensitive data through corporate use of AI chatbots and coding assistants
- Conduct AI-themed tabletop exercises: test incident response procedures against AI-powered attack scenarios
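As one concrete angle on the "monitor AI tool usage" point above, a pre-submission check can flag obvious secrets before text is pasted into a chatbot or coding assistant. The sketch below is deliberately minimal: the patterns are illustrative, and a real data-loss-prevention policy would cover far more categories and integrate at the network or browser layer.

```python
import re

# Toy detection patterns; a production DLP policy would be far broader.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def findings(text: str) -> list[str]:
    """Names of the sensitive-data categories detected in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def safe_to_share(text: str) -> bool:
    """True only if no pattern matched; gate chatbot submissions on this."""
    return not findings(text)

# Example: block a prompt that embeds a credential.
# safe_to_share("debug this: key=AKIAABCDEFGHIJKLMNOP")  -> False
```

Pattern matching like this catches only the obvious leaks; it complements, rather than replaces, policy and training on what may be shared with external AI services.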
Counter AI-powered threats with BlackFog
Kyanite Blue is an authorised BlackFog partner. We deploy, manage, and support ADX for organisations across every sector.
Ready to stop data exfiltration?
Start with a free 30-day BlackFog assessment — 25 devices, no obligation. Get in touch.