If you build or deploy AI systems for European markets, expect this standard to show up in customer due diligence, RFP language, and "map your controls" conversations.
ETSI has published EN 304 223 V2.1.1, "Baseline Cyber Security Requirements for AI Models and Systems", a European Standard for AI cybersecurity.
Highlights:
- The standard sets a lifecycle security baseline across five phases: design, development, deployment, maintenance, and end of life.
- It defines 13 high-level principles that are easy to map into engineering, governance, and operational controls.
- It makes documentation and auditability core requirements, including traceability for models, data, prompts, and configuration changes.
- It treats model exposure as an attack surface and calls out API abuse mitigations such as access controls and rate limiting.
- It requires ongoing monitoring for AI-specific failure modes, including behavioral drift and indicators of data poisoning.
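To make the API-abuse bullet concrete, here is a minimal token-bucket rate limiter of the kind you might put in front of a model endpoint. This is my own illustrative sketch, not something specified by ETSI EN 304 223; the standard names the control ("rate limiting"), not an implementation, and all class and parameter names below are hypothetical.

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter for a model-serving API.

    Each request consumes one token; tokens refill at `rate_per_sec`
    up to `capacity`, so bursts beyond `capacity` are rejected.
    """

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Zero refill rate isolates the burst behavior: only 3 requests pass.
bucket = TokenBucket(rate_per_sec=0.0, capacity=3)
results = [bucket.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

In practice you would key buckets per API client or token and combine this with the access controls the standard also calls out; the limiter alone only bounds request volume.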
My take:
- AI vendors selling into Europe should start aligning their controls with ETSI EN 304 223 now.
- AI security vendors should publish mappings showing how their tooling helps teams meet these requirements.
- The standard is intentionally high level. It doesn’t specify metrics, thresholds, or minimum testing depth, so, as with ISO/IEC standards, teams must translate it into measurable controls, acceptance criteria, and checklists.
- Secure development gets the most weight: 5 of the 13 principles address it, backed by a strong push for audit-ready evidence.
- AI security and AI safety are converging. See my earlier post on the Cisco AI Cybersecurity Framework. ETSI’s planned TR 104 159 for generative AI extends the focus to deepfakes, misinformation and disinformation, confidentiality risks, and copyright and IPR concerns.