Insights from Check Point AI Threat Landscape Digest

Check Point's AI Threat Landscape Digest for January-February 2026 covers more than VoidLink. I wrote about the VoidLink case separately: 88,000 lines of malware in one week.

Highlights:

1. CLAUDE.md is becoming a new mechanism for sharing jailbreaks.

Public jailbreak prompts have largely disappeared: dedicated subreddits have been banned and accounts get terminated. Instead, a CLAUDE.md override is circulating on cybercrime forums: drop it into a directory, run Claude Code, and the agent follows the malicious instructions. Screenshots confirm successful generation of a Remote Access Trojan (RAT).

2. RAPTOR: $0.03 per exploit, no compiled tooling required.

RAPTOR is a legitimate open-source framework that turns Claude Code into an offensive security agent via markdown skill files covering static analysis, fuzzing, and exploit generation. Commercial models produce compilable C code at ~$0.03 per vulnerability; local models were "often broken." Criminal forums are already discussing it.

3. Self-hosted models: aspiration exceeds capability.

Frontier labs are tightening access to their models' cyber capabilities, so threat actors are looking for alternatives. Uncensored models such as wizardlm-33b and openhermes-2.5-mistral are the usual candidates for malware generation, but the results do not justify the investment: hardware runs $5,000-$50,000, the models hallucinate frequently, and one C2 vendor concluded that local deployment is "more of a burden than something productive." Commercial models remain the productive choice, even for actors with malicious intent.

4. Enterprise AI leaks data at scale.

1 in every 31 enterprise GenAI prompts (3.2%) risked sensitive data leakage, affecting 90% of GenAI-adopting organizations, and 16% of prompts contained potentially sensitive information. The average organization uses 10 GenAI tools, and employees submit 69 prompts per month.
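To make the leakage numbers concrete: screening prompts before they leave the perimeter is the usual mitigation. The sketch below is a minimal, illustrative pre-submission check; the pattern names and regexes are my own assumptions, not Check Point's detection methodology, and a real DLP engine would use far richer classifiers.

```python
import re

# Illustrative patterns only; a production DLP engine goes far beyond regex.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

prompts = [
    "Summarize this contract for jane.doe@acme.com",
    "Write a haiku about autumn",
    "Debug this: client = Client(key='sk-abcdefghij1234567890XYZA')",
]
flagged = [p for p in prompts if flag_prompt(p)]
print(f"{len(flagged)}/{len(prompts)} prompts flagged")
```

Even this crude filter would catch the obvious cases (credentials and email addresses pasted into prompts); the harder 16% is contextual business data that regexes cannot see.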

Forum user installing local LLM variants and prompting for malware. Source: Check Point Research.
Threat actor asking about cost and feasibility of running an unrestricted local model. Source: Check Point Research.

My take:

  1. Sensitive data flowing into AI tools at scale is real, and awareness training will not fix it. Blocking AI tools is not a path forward either. The key is making your approved AI tools convenient enough that employees do not resort to shadow AI. I wrote about what happens when they are not.
  2. Repos with AI config files are a risk. Anthropic just patched CVE-2026-33068, where a malicious .claude/settings.json in a repo silently granted full agent permissions before the trust prompt was shown. Scan the AI config files in your repos, just as you scan AI agent skills.
  3. RAPTOR produces compilable C code at $0.03 per vulnerability using markdown skill files and API calls to a commercial AI model. Time to update your threat model.
  4. Self-hosted models underperform today, but as frontier labs push protective measures including government ID verification, demand for unrestricted models will grow. Do not assume attackers lack access to the same models you have.
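A first pass at the config-file scan from point 2 can be as simple as walking a checkout and flagging agent config files for review. The filenames below (CLAUDE.md, .claude/settings.json) are real Claude Code conventions; the scanning logic itself is my own sketch, not an Anthropic tool.

```python
from pathlib import Path

# Settings filenames Claude Code reads from a .claude/ directory.
CLAUDE_SETTINGS = {"settings.json", "settings.local.json"}

def find_ai_configs(repo_root: str) -> list[Path]:
    """Return agent config files found anywhere under a repo checkout."""
    root = Path(repo_root)
    hits = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        # Flag CLAUDE.md anywhere, and settings files under a .claude/ dir.
        if path.name == "CLAUDE.md" or (
            path.parent.name == ".claude" and path.name in CLAUDE_SETTINGS
        ):
            hits.append(path.relative_to(root))
    return sorted(hits)
```

Wire something like this into CI so any change to these files in a pull request requires human review, the same gate you would apply to CI pipeline definitions.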

Sources:

  1. AI Threat Landscape Digest January-February 2026 (Check Point Research)
  2. 88,000 lines of malware in one week (The Weather Report)
  3. Promptware is the new malware (The Weather Report)
  4. How bad is DHSChat and why? (The Weather Report)
  5. 24 AI CVEs in one week (The Weather Report)
  6. OpenAI now requires government ID verification (The Weather Report)