AI CVEs are arriving faster than most organizations can even list their models, tools, and vector stores (roughly 400 AI-related vulnerabilities were disclosed in 2024). This session is about getting your arms around that chaos: defining an AI BOM that actually helps during incidents, mapping controls to NIST AI RMF and SSDF, and turning EchoLeak/Triton/MCP-style bugs into a repeatable response pattern instead of fire drills.
Using a realistic “you’ve got 24 hours to respond” scenario, this session will walk attendees through what breaks when you don’t know which AI components are running where, then build up a pragmatic AI BOM and provenance model that fits into existing CMDB/SBOM and change-management workflows. It will close by tying those mechanics into concrete governance artifacts: playbooks, supplier questionnaires, and internal checklists. Attendees will leave with a usable framework for making AI security boring, auditable, and repeatable.
You will learn how to:
- Build a practical AI bill-of-materials (AIBOM)
- Map AI controls to NIST AI RMF and SSDF
- Standardize AI CVE triage workflows