A must-read if you’re deploying AI agents and rightly worried about indirect prompt injection attacks.

Continuing the NeurIPS 2025 best-papers series: DRIFT (Dynamic Rule-Based Defense with Injection Isolation), a framework for securing LLM agents, by Hao Li (Washington University in St. Louis).

Prompt injection protections come in two types: model-level (training the model itself to resist injected instructions) and system-level (runtime guardrails around the agent).

DRIFT is a system-level protection that dynamically generates policies from the user query and updates them as the agent encounters new information. It includes:

  1. Secure Planner → builds a minimal, safe tool trajectory and parameter schema
  2. Dynamic Validator → approves deviations using intent alignment and Read/Write/Execute privileges
  3. Injection Isolator → scrubs malicious instructions from tool outputs before they enter memory
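To make steps 2 and 3 concrete, here is a minimal Python sketch of what a dynamic validator and an injection isolator could look like. All names, patterns, and thresholds are illustrative assumptions for the sake of the example, not the paper's actual implementation:

```python
import re
from dataclasses import dataclass

# Assumed privilege ordering: read < write < execute.
PRIVILEGES = {"read": 0, "write": 1, "execute": 2}

@dataclass(frozen=True)
class ToolCall:
    name: str
    privilege: str  # "read" | "write" | "execute"

def validate_call(call: ToolCall, planned: set, ceiling: str = "read") -> bool:
    """Dynamic Validator sketch: approve calls on the planned trajectory
    outright; for deviations, allow only privileges at or below the
    ceiling implied by the user's intent."""
    if call in planned:
        return True
    return PRIVILEGES[call.privilege] <= PRIVILEGES[ceiling]

# Injection Isolator sketch: a naive pattern match for imperative
# instructions embedded in tool output (a real system would use a
# classifier or an LLM judge, not a regex).
INJECTION = re.compile(
    r"(?im)^.*\b(ignore (all )?previous instructions"
    r"|you must now|forward this email)\b.*$"
)

def isolate(tool_output: str) -> str:
    """Scrub suspected injected instructions before output enters memory."""
    return INJECTION.sub("[removed: suspected injection]", tool_output)
```

The key design idea this illustrates: deviations from the plan aren't rejected outright, they're gated by privilege, so a benign extra read can proceed while an unplanned write or execute is blocked.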

The result? An attack success rate (ASR) reduction from 30.7% → 1.3% on a native agent without other system-level protections.

My takeaway: the AI agent security space is emerging rapidly. I'd love to learn what you're building or using today, and what's working (or not).

The full paper

Another great paper on contextual security from my Google colleagues