If you build or deploy AI systems for European markets, expect this standard to show up in customer due diligence, RFP language, and "map your controls" conversations.

ETSI has published EN 304 223 V2.1.1, "Baseline Cyber Security Requirements for AI Models and Systems", a European Standard for AI cybersecurity.

My take:

  1. AI vendors selling into Europe should start aligning their controls with ETSI EN 304 223 now, before it shows up in customer due diligence.
  2. AI security vendors should publish mappings showing how their tooling helps teams meet these requirements.
  3. The standard is intentionally high level. It doesn’t specify metrics, thresholds, or minimum testing depth, so, as with ISO/IEC standards, teams must translate it into measurable controls, acceptance criteria, and checklists.
  4. Secure development is the center of gravity: 5 of the 13 principles address it, backed by a strong push for audit-ready evidence.
  5. AI security and AI safety are converging. See my earlier post on the Cisco AI Cybersecurity Framework. ETSI’s planned TR 104 159 for generative AI extends the focus to deepfakes, misinformation and disinformation, confidentiality risks, and copyright and IPR concerns.
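Because the standard stays high level, the translation work in point 3 falls to each team. As a minimal sketch of what that might look like, here is a hypothetical control record that turns an abstract principle into something measurable and auditable. The IDs, metrics, and thresholds below are illustrative assumptions, not anything defined by ETSI EN 304 223 itself:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One measurable control derived from a high-level principle."""
    control_id: str
    principle: str                # illustrative label, e.g. "Secure Development"
    statement: str                # what must be true
    metric: str                   # how compliance is measured
    threshold: str                # acceptance criterion
    evidence: list[str] = field(default_factory=list)  # audit-ready artifacts

# Hypothetical mapping: the standard does not specify these metrics or
# thresholds, so a team supplies them when building its checklist.
controls = [
    Control(
        control_id="SD-01",
        principle="Secure Development",
        statement="Training data sources are documented and integrity-checked",
        metric="% of datasets with recorded hashes and provenance",
        threshold="100% before each release",
        evidence=["dataset manifest", "CI hash-verification logs"],
    ),
]

def checklist_ready(c: Control) -> bool:
    # A control only belongs on the checklist once it is measurable
    # (metric + threshold) and has named evidence for auditors.
    return bool(c.metric and c.threshold and c.evidence)

print(all(checklist_ready(c) for c in controls))  # True
```

The point of the structure is the last function: a control without a metric, a threshold, and named evidence is still a principle, not a checklist item.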