CrowdStrike reported an 89% increase in AI-enabled attacks.
AI-accelerated phishing and automated reconnaissance are the main use cases.
CrowdStrike published the "2026 Global Threat Report," detailing how adversaries use GenAI and how they attack AI.
Highlights:
- GenAI at scale for social engineering: fake personas, translated lures, and more credible recruiting and influence activity.
- AI inside tooling and malware development: actors using WormGPT models to accelerate development, plus malware that uses LLMs for reconnaissance and collection.
- AI systems targeted directly: exploitation of Langflow (CVE-2025-3248) and a malicious MCP server ("postmark-mcp") that forwarded emails to attacker-controlled addresses.
- Prompt injection in the wild: hidden prompt content embedded in phishing emails to disrupt AI-based triage.
My take:
- Threat actors using AI is not surprising. Frontier labs know it and respond with KYC and misuse detection.
- Serious operators will switch to OSS models for better control over their stack. OSS models are getting good enough.
- Cloud providers should expect an increase in GPU/TPU abuse, as actors' economics and habits favor cheap or free tokens.
- People are installing OpenClaw in corporate environments. CrowdStrike's 2027 Global Threat Report will likely be full of OC compromises.
- The 93% of businesses that said they understand AI risks "quite well" or "very well" should read CrowdStrike's report.
CrowdStrike 2026 Global Threat Report