24 AI CVEs in one week, one exploited in 20 hours
An advisory for a critical Langflow vulnerability was published on Tuesday evening. By Wednesday afternoon, with no public proof-of-concept, attackers had built working exploits from the advisory text alone and were harvesting OpenAI, Anthropic, and AWS API keys from compromised AI pipelines. CISA added it to the Known Exploited Vulnerabilities catalog on March 25.
That was one of 24 AI-related CVEs disclosed or actively exploited between March 19 and 26. Four critical, eleven high severity, two with no patch. Below are the five that matter most, the trends connecting them, and what to do about each.
1. Langflow: unauthenticated RCE, exploited in 20 hours (CVE-2026-33017, CVSS 9.3).
Langflow is a visual framework for building AI agent pipelines with 146,000+ GitHub stars. Its public flows endpoint, designed to let unauthenticated users interact with deployed chatbots, accepted an optional data parameter containing flow definitions. When supplied, the server used the attacker's definition instead of the stored one, passing it through 10 function calls that ended at exec(compiled_code). A single HTTP POST with malicious Python in the JSON payload achieved immediate code execution. No sandbox. No auth. No restrictions on imported modules.
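The vulnerable shape reduces to a few lines. This is an illustrative sketch with invented names (run_flow, STORED_FLOWS), not Langflow's actual code, but the logic mirrors the advisory: an optional, caller-supplied definition overrides the stored flow and reaches exec().

```python
# Illustrative sketch only -- invented names, not Langflow's code.
STORED_FLOWS = {"greeter": "result = 'hello'"}

def run_flow(flow_id, data=None):
    """Execute a flow; 'data' mirrors the optional request parameter."""
    # BUG: a caller-supplied definition silently replaces the stored one.
    code = data["definition"] if data else STORED_FLOWS[flow_id]
    scope = {}
    # No sandbox, no auth, no import restrictions: exec runs whatever arrived.
    exec(compile(code, "<flow>", "exec"), {}, scope)
    return scope
```

One JSON body carrying a "definition" key is the whole exploit, which is why the fix removed the parameter instead of trying to sandbox it.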
The advisory was published on March 17 at 20:05 UTC. By March 18 at 16:04, Sysdig's Threat Research Team observed the first exploitation. No public proof-of-concept existed. Attackers built working exploits directly from the advisory text, deployed a private nuclei template at scale, and moved through three phases: automated scanning, custom exploit scripts with stage-2 delivery, and credential harvesting. They extracted OpenAI, Anthropic, and AWS API keys from .env files. Two actors exfiltrated data to a shared C2 server, suggesting a single operator working through multiple proxies.
CISA added CVE-2026-33017 to the KEV catalog on March 25 with an April 8 remediation deadline. The fix (commit 73b6612) removed the data parameter entirely because adding authentication would have broken the public flows feature.
This is the second Langflow exec() RCE in a year. CVE-2025-3248, a similar flaw on a different endpoint, remains under active exploitation per CISA.
2. AnythingLLM: prompt injection to full RCE via Electron misconfiguration (CVE-2026-32626, CVSS 9.6).
AnythingLLM Desktop renders LLM output through a custom markdown-it image renderer that interpolates token.content into the alt attribute without HTML entity escaping. The PromptReply component then renders output via dangerouslySetInnerHTML without DOMPurify sanitization. Combined with insecure Electron configuration (nodeIntegration: true, contextIsolation: false), this escalates from XSS to full host-level code execution. Attack vectors include poisoned RAG documents and indirect prompt injection. No user interaction beyond normal chat. Fixed in 1.11.2.
This is a textbook chain: prompt injection delivers the payload, missing sanitization renders it, and Electron misconfiguration escalates it from browser sandbox to OS-level access.
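The middle link of that chain is the cheapest to break. A minimal sketch of the idea, in Python for consistency with the rest of this post (the real renderer is JavaScript): entity-escape LLM-derived text before it lands in an HTML attribute.

```python
import html

def render_image(alt_text, src):
    # Treat LLM output like any other untrusted input: escape it before
    # interpolating into HTML. The vulnerable renderer interpolated the
    # alt text verbatim, so a payload could break out of the attribute.
    return '<img alt="{}" src="{}">'.format(
        html.escape(alt_text, quote=True), html.escape(src, quote=True)
    )
```

Attribute escaping alone is one layer; HTML that gets rendered as markup still needs a sanitizer like DOMPurify, and the Electron hardening (contextIsolation, no nodeIntegration) is what caps the blast radius when both fail.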
3. ONNX Hub: silent model loading from untrusted sources, no patch (CVE-2026-28500, CVSS 9.1).
ONNX Hub's verification logic checks if not _verify_repo_ref(repo) and not silent. When silent=True, the entire trust pathway is skipped. The bug is semantic: the flag was meant to suppress prompts, but it's wired into the security check itself. When you can't ask the user, the correct behavior is to fail closed. Instead, "don't show output" became "skip security." The SHA-256 check doesn't help either: the hash manifest lives in the same repo as the model, so an attacker who controls the repo controls both sides of the verification. Advisory published March 16.
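A minimal reconstruction of the two shapes, with invented names standing in for ONNX Hub's internals:

```python
TRUSTED_REPOS = {"onnx/models:main"}  # stand-in for real ref verification

def _verify_repo_ref(repo):
    return repo in TRUSTED_REPOS

def download_model(repo, silent=False):
    # The vulnerable shape: "silent" short-circuits the trust check itself,
    # so an unverified repo passes whenever silent=True.
    if not _verify_repo_ref(repo) and not silent:
        raise ValueError(f"untrusted repo: {repo}")
    return f"downloaded from {repo}"

def download_model_fail_closed(repo, silent=False):
    # Fail closed: verification always runs. "silent" may suppress the
    # prompt or output, never the security decision.
    if not _verify_repo_ref(repo):
        raise ValueError(f"untrusted repo: {repo}")
    if not silent:
        print(f"verified {repo}")
    return f"downloaded from {repo}"
```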
Who passes silent=True? CI/CD pipelines and automated ML workflows that can't tolerate interactive prompts, which is exactly where supply chain attacks do the most damage: unattended, no human in the loop. Severity is contested: GitHub rates it Moderate (CVSS 4.8), NVD rates it Critical (9.1). The gap reflects different assumptions about how commonly silent=True appears in production.
4. Spring AI: RAG filter injection breaks tenant isolation (CVE-2026-22729, CVE-2026-22730, CVSS 8.6/8.8).
Two injection flaws in Spring AI's filter expression converter, one via JSONPath and one via SQL, enable cross-tenant document access in vector store and RAG deployments. Any multi-tenant app built on spring-ai-vector-store or spring-ai-mariadb-store was vulnerable. Fixed in 1.0.4 / 1.1.3.
Spring AI is maintained by Broadcom's Spring team, the same people who taught Java developers to parameterize SQL. Their filter converter concatenates strings instead. The developer who wrote it was, in their own mental model, building a serializer, not a query. The filter values come from application code, not a form field, so the pattern that triggers "I need parameterization" never fired, even though the output goes straight into a database query. Graphiti's Cypher injection (CVE-2026-32247), disclosed the same month, is the same mistake in a graph database.
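The general lesson, sketched here with sqlite3 as a cross-language illustration of the pattern (Spring AI's converter is Java, and the schema below is invented):

```python
import sqlite3

def search_docs_unsafe(conn, tenant):
    # Filter value concatenated into the query: a crafted tenant string
    # escapes its quotes and rewrites the WHERE clause.
    return conn.execute(
        f"SELECT body FROM docs WHERE tenant = '{tenant}'"
    ).fetchall()

def search_docs_safe(conn, tenant):
    # Parameterized: the driver keeps the value out of the query grammar,
    # whatever characters it contains.
    return conn.execute(
        "SELECT body FROM docs WHERE tenant = ?", (tenant,)
    ).fetchall()

def demo_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE docs (tenant TEXT, body TEXT)")
    conn.executemany("INSERT INTO docs VALUES (?, ?)",
                     [("acme", "acme doc"), ("globex", "globex doc")])
    return conn
```

A payload like "acme' OR '1'='1" returns every tenant's documents through the unsafe path and nothing through the parameterized one, which is exactly the cross-tenant failure mode in the advisories.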
5. Claude Code: workspace trust dialog bypass (CVE-2026-33068, CVSS 7.7).
A malicious repo containing .claude/settings.json with "defaultMode": "bypassPermissions" would have its settings loaded before the trust prompt was displayed, silently granting full agent permissions. Config was parsed before consent was collected. Same class of flaw that plagued VS Code workspace settings years ago. Fixed in Claude Code 2.1.53.
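The correct ordering fits in one function. A sketch with hypothetical names, not Claude Code's implementation; the point is only that consent gates parsing:

```python
import json
from pathlib import Path

def load_workspace_settings(repo_dir, user_trusts_repo):
    # Consent first, parsing second. The flaw was the reverse order:
    # repo-controlled settings (including permission modes such as
    # "defaultMode": "bypassPermissions") were loaded before the trust
    # prompt ever appeared.
    if not user_trusts_repo:
        return {}  # untrusted repo: its settings are never even parsed
    settings_file = Path(repo_dir) / ".claude" / "settings.json"
    if settings_file.is_file():
        return json.loads(settings_file.read_text())
    return {}
```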
The CVSS is 7.7 because it assumes friction: the victim has to clone a repo and run Claude Code in it. In practice, AI coding assistants are the default workflow. Developers clone repos constantly and the first thing they do is ask the AI to explain the codebase. The trust dialog is the one moment a developer might pause before handing an unfamiliar repo full agent access. This CVE removed that moment. A repo with prompt injection in CLAUDE.md or code comments is enough once permissions are bypassed.
My take:
1. Time to exploit is now measured in hours.
Langflow was weaponized in 20 hours from advisory text alone, no PoC needed. Your threat model must assume that any vulnerability or misconfiguration will be exploited, fast.
2. The AI stack is re-learning security lessons the web stack learned 15 years ago.
Not a single novel attack technique this week. The entire OWASP Top 10, replayed in AI tooling.
I think we're making the same mistakes because the new surfaces don't look like the old bad patterns, but they are. Developers learned "parameterize your SQL," not "parameterize any string that becomes a query in any language." Every new surface AI adds or accelerates resets the learning cycle.
3. LLM output is the new untrusted input, but we don't treat it that way yet.
AnythingLLM, Discourse, and SQLBot all treat LLM output as trusted backend data, rendering or storing it without sanitization. The mental model ("my LLM, my output") is intuitive, but wrong.
4. Open-source AI projects are go-to-market tools, not products.
I use open-source daily, but we need to be realistic about the trust we extend. Aqua Security brought Trivy under its wing as a demand generation tool for its commercial scanner. IBM acquired DataStax for its AI data platform, and Langflow came along as part of the package.
Sources:
- How attackers compromised Langflow AI pipelines in 20 hours (Sysdig TRT)
- Langflow exec() RCE analysis (Barrack AI)
- CVE-2026-33017 (NVD)
- CISA Known Exploited Vulnerabilities Catalog
- CVE-2026-32626 - AnythingLLM (NVD)
- CVE-2026-28500 - ONNX Hub (NVD)
- CVE-2026-22729 - Spring AI (NVD)
- CVE-2026-33068 - Claude Code (GitHub Advisory)