You need both.
- Google’s SAIF is your Governance and Implementation Guide: how to build a security program, architect secure infrastructure, and apply specific controls such as identity and input filtering.
- Cisco’s AI Security Framework is your Threat Taxonomy and Risk Atlas, detailing exactly which security and safety threats to defend against.
Here’s an oversimplified 4-step playbook for using them together (Friday edition):
- Build the Foundation (Google SAIF). Establish AI Governance Controls and an Acceptable Use Policy, enforced by an AI platform across the entire lifecycle from training to deployment. Now you have a model and agent inventory, a secure vault for artifacts, and enforcement rails.
- Prioritize Defenses Against the Top 3 Techniques (Cisco):
- Goal Hijacking, specifically Indirect Prompt Injections (AITech-1.2). Attackers hide instructions in trusted sources like emails or documents to manipulate AI into abandoning its primary directive.
- Data Exfiltration / Exposure (AITech-8.2). Prioritize exfiltration via tool misuse and exploitation. Attackers coerce AI into using connected tools like Slack or Gmail to send internal data externally. Pay attention to MCP gateways.
- Dependency / Plugin Compromise (AITech-9.3). Third-party libraries play a critical role in AI systems, making them an important attack vector. Attackers publish poisoned packages (e.g., on npm) that coding agents auto-install, creating hidden backdoors to steal SSH keys and API tokens.
The OWASP folks will reasonably ask "what about identity?" So, add Unauthorized Access (AITech-14.1) to your list.
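For flavor, here’s a deliberately naive sketch of what heuristic screening for indirect prompt injection might look like before untrusted content (an email, a retrieved document) reaches the model. The patterns below are illustrative assumptions, not a real detection list; production filters use far richer signals:

```python
import re

# Illustrative patterns only -- a real "LLM firewall" uses much
# broader detection than a handful of regexes.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your (system|original) prompt",
    r"you are now\b",
    r"reveal your (system prompt|instructions)",
]

def scan_for_injection(text: str) -> list[str]:
    """Return the patterns that match the untrusted text (empty list = clean)."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]

def is_suspicious(text: str) -> bool:
    return bool(scan_for_injection(text))
```

The point isn’t the regexes; it’s the placement: untrusted content gets screened before it can steer the agent.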
- Deploy Technical Controls (Google SAIF). Map defenses directly to the prioritized vectors.
- Deploy an "LLM Firewall" to sanitize model inputs and outputs for malicious payloads. There are plenty of options on the market to choose from.
- Enforce "Human-in-the-Loop" approval for sensitive actions and look for a contextual policy solution.
- Basic dependency hygiene (checking for typosquats and verifying provenance), plus keeping prompts under version control, is a good start. A dependency scanner can level up your protections.
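To make the "Human-in-the-Loop" control concrete, here’s a minimal sketch of an approval gate for sensitive tool calls. The tool names and the risk rule are assumptions for illustration; a real policy engine would evaluate far more context:

```python
from dataclasses import dataclass

# Assumed tool names for illustration -- substitute your own inventory.
SENSITIVE_TOOLS = {"send_email", "post_slack_message", "delete_file"}

@dataclass
class ToolCall:
    tool: str
    args: dict

def requires_approval(call: ToolCall, external_recipient: bool) -> bool:
    """Sensitive tools, or any action that moves data outside the org,
    must pause for a human reviewer before executing."""
    return call.tool in SENSITIVE_TOOLS or external_recipient

def execute(call: ToolCall, approved: bool, external_recipient: bool = False) -> str:
    if requires_approval(call, external_recipient) and not approved:
        return "BLOCKED: pending human approval"
    return f"RAN: {call.tool}"
```

The design choice that matters: the gate sits between the model’s decision and the tool’s execution, so a hijacked agent can propose an exfiltrating action but can’t complete it alone.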
- Red Team & Validate (Cisco). Controls are theoretical until tested. Bring in a third party, ideally an AI-native player, to stress-test your AI system. The findings will help prioritize next steps.
Your cyber insurance provider will also soon start asking how your AI is governed and how its decisions are made.
Great news: there’s a growing ecosystem of AI-native cybersecurity companies that aim to address emerging risks. See my earlier post on AI for application security.