Threat Intelligence · 6 min read

AI in Cybersecurity: Separating the Hype from What Actually Works

Max, Technical Director · 6 April 2026

The AI Washing Problem

After generative AI exploded into public consciousness in 2023, every cybersecurity vendor rushed to add "AI" to their product descriptions. Gartner's 2025 Hype Cycle for Security Operations placed "AI-Augmented Security" firmly in the Trough of Disillusionment — the phase where reality fails to meet inflated expectations. The problem is not that AI has no role in cybersecurity. It does, and in specific areas it is genuinely transformative. The problem is that "AI-powered" has become a meaningless marketing label applied to everything from sophisticated machine learning models trained on billions of security events to products that added a ChatGPT wrapper to their documentation search. Security leaders need to distinguish between vendors where AI is a core architectural component and vendors where AI is a marketing adjective.

Where AI Genuinely Works in Security

AI delivers real value in cybersecurity when applied to problems that involve pattern recognition at scale, anomaly detection across vast datasets, and classification of threats faster than human analysts can process them. Behavioural analysis in endpoint detection works — Coro's AI engine analyses endpoint behaviour patterns to detect threats that signature-based detection misses, identifying anomalous process execution and lateral movement in real time. Data flow analysis for exfiltration prevention works — BlackFog's ADX AI engine classifies and evaluates every outbound data flow, distinguishing between legitimate business traffic and unauthorised exfiltration attempts across protocols and destinations. Email threat detection works — AI models trained on millions of email patterns can identify phishing attempts with significantly higher accuracy than rule-based filters, particularly for novel attacks.

  • Behavioural endpoint analysis (Coro): detects threats from behaviour, not signatures
  • Data flow classification (BlackFog ADX AI): blocks exfiltration in real time
  • Email threat detection: identifies novel phishing beyond rule-based filters
  • Threat intelligence correlation: connects signals across billions of events
  • Vulnerability prioritisation: predicts which CVEs will actually be exploited
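To make the behavioural-anomaly idea above concrete, here is a minimal sketch of statistical anomaly detection: flag activity that deviates sharply from a learned baseline. This is a deliberately simplified toy (z-scores on a single metric), not how Coro's or BlackFog's engines actually work; the numbers and the three-sigma threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def anomaly_flags(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean. A toy stand-in for the behavioural
    models described above."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x, abs(x - mu) / sigma > threshold) for x in observed]

# Baseline: outbound connections per hour for a typical workstation
# (hypothetical figures for illustration only).
baseline = [12, 15, 11, 14, 13, 12, 16, 14]

# Observed: the final value is the kind of spike that might indicate
# beaconing or bulk exfiltration.
observed = [13, 15, 240]

flags = anomaly_flags(baseline, observed)
# flags -> [(13, False), (15, False), (240, True)]
```

Real products model many signals at once (process lineage, destinations, protocols, timing) rather than a single count, which is exactly why this class of problem suits machine learning over hand-written rules.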

Where AI Is Overhyped

AI is not a replacement for security architecture, skilled analysts, or proper configuration. A large language model cannot fix a misconfigured firewall, remediate an unpatched server, or implement network segmentation. "AI-powered" threat detection that has not been trained on relevant data produces more noise, not less. Several prominent vendors have launched "AI security copilots" that essentially wrap existing dashboards in a chatbot interface — useful for documentation queries, but not a genuine advancement in security capability. The most dangerous form of AI hype is the implication that AI removes the need for human expertise. In reality, AI amplifies the capability of skilled security teams. Without the team, AI amplifies nothing. Organisations replacing security headcount with AI tools will discover this when an incident requires judgement, not just pattern matching.

How to Evaluate AI Claims from Security Vendors

When a vendor claims AI capabilities, ask five questions. First, what specific problem does the AI solve — and can they demonstrate measurable improvement over non-AI approaches? Second, what data was the model trained on, and is it relevant to your environment? Third, what happens when the AI is wrong — what are the false positive and false negative rates? Fourth, does the AI require constant human oversight, or does it operate autonomously within defined parameters? Fifth, is AI a core architectural component or a feature added to an existing product? The vendors that answer these questions clearly and honestly are the ones where AI delivers genuine value. At Kyanite Blue, every product in our stack earns its place through demonstrated outcomes, not marketing claims. BlackFog's ADX AI and Coro's behavioural AI have proven their value through real deployments — not demo environments.
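The third question is the easiest to turn into numbers. A vendor making detection claims should be able to quote false positive and false negative rates from an evaluation; as a minimal sketch (with hypothetical counts, not real vendor figures), they are computed from a confusion matrix like this:

```python
def detection_rates(tp, fp, tn, fn):
    """Compute the two error rates a vendor should be able to quote
    for an AI detection claim."""
    fpr = fp / (fp + tn)  # benign events wrongly flagged as threats
    fnr = fn / (fn + tp)  # real threats the model missed
    return fpr, fnr

# Hypothetical evaluation: 100 real threats and 10,000 benign events.
fpr, fnr = detection_rates(tp=90, fp=50, tn=9950, fn=10)
# fpr = 0.005 (0.5% of benign traffic flagged)
# fnr = 0.1   (10% of real threats missed)
```

The point of asking is less the exact figures than whether the vendor measured them at all, and on data that resembles your environment rather than a curated demo set.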

Frequently Asked Questions

Can AI replace human security analysts?

No. AI augments human analysts by handling pattern recognition and classification at scale, but it cannot replace the judgement, creativity, and contextual understanding that skilled analysts bring to security operations. The best security teams use AI to handle volume so that humans can focus on the complex, high-stakes decisions.

What is AI washing in cybersecurity?

AI washing is the practice of adding AI marketing claims to products that do not meaningfully use AI, or overstating the role of AI in a product's capabilities. It is similar to "greenwashing" in environmental claims. Look for specific, measurable AI capabilities rather than vague "AI-powered" labels.

Which security products genuinely use AI well?

Products where AI is most impactful include behavioural endpoint detection platforms (like Coro), anti data exfiltration tools (like BlackFog), email security platforms that analyse communication patterns, and threat intelligence platforms that correlate signals across billions of events. The common thread is that these products use AI for classification and pattern recognition at a scale humans cannot match.

ai · machine learning · cybersecurity · blackfog · coro · innovation

Want to discuss this with our team?

Book a free 20-minute call with David or Max.
