MindXO Tool · AI Governance, Security & Safety Framework Navigator
The AI governance landscape, mapped to the operating model practitioners actually use.
Standards, frameworks, and regulatory instruments are scattered across dozens of issuing bodies. This navigator organizes them inside a single GRC structure, from organizational objectives down to compliance evidence. Filter by role to see what matters to you. Trace how documents reference each other. See where the landscape falls short.
- 51 frameworks
- 4 GRC layers (ORG, GOV, RISK, COMP)
- 4 cross-cutting columns (General, Security, Safety, Ethics)
- 7 practitioner roles (CRO, CISO, CGRCO, CDO, CAIO, MLOps, AI Red Team)
- 6 source categories (NIST AI, ISO/IEC, Security & Threat Intel, Industry/Vendor, Regulatory, Responsible AI & Ethics)
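The navigator's core operation — filter the catalog by practitioner role — can be sketched as a small data model. This is an illustrative sketch only: the field names, class name, and sample entries (drawn from the catalog below) are assumptions, not the tool's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Framework:
    # Hypothetical entry schema; field names are illustrative.
    title: str
    issuer: str
    layer: str               # ORG | GOV | RISK | COMP
    column: str              # General | Security | Safety | Ethics
    roles: tuple[str, ...]   # practitioner roles the entry is tagged with

# Two sample entries taken from the catalog below.
CATALOG = [
    Framework("AI Risk Management Framework 1.0", "NIST", "ORG", "General",
              ("CRO", "CISO", "CGRCO", "CDO", "CIO", "CAIO")),
    Framework("MITRE ATLAS", "MITRE", "RISK", "Security",
              ("CISO", "AI Red Team")),
]

def for_role(catalog: list[Framework], role: str) -> list[Framework]:
    """Role filter: keep entries that list the given practitioner role."""
    return [f for f in catalog if role in f.roles]

print([f.title for f in for_role(CATALOG, "CISO")])
# → ['AI Risk Management Framework 1.0', 'MITRE ATLAS']
```

The same predicate pattern extends to the other filter axes (layer, column, source category) by matching on the corresponding field.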
Framework catalog
Organization (ORG)
Cross-cutting frameworks tackling governance, risk, and compliance from an organizational perspective.
Outcome: AI is a risk-managed enabler of strategy.
General
- AI Risk Management Framework 1.0 (NIST, AI 100-1, Jan 2023) · Published · NIST AI · Roles: CRO, CISO, CGRCO, CDO, CIO, CAIO
Core AI governance framework with four functions: Govern, Map, Measure, Manage.
- ISO/IEC 42001:2023 AI Management System (ISO/IEC, 42001:2023, Dec 2023) · Published · ISO / IEC · Roles: CGRCO, CRO, CIO, CAIO
Certifiable AI management system standard for establishing AI governance.
Safety
- Anthropic Responsible Scaling Policy (Anthropic, v2.0, Sep 2023) · Published · Industry / Vendor · Roles: CRO, CAIO
AI Safety Levels (ASL) define capability thresholds requiring additional safeguards.
- OpenAI Preparedness Framework (OpenAI, v1.0, Dec 2023) · Published · Industry / Vendor · Roles: CRO, CAIO
Risk evaluation across cybersecurity, CBRN, persuasion, and model autonomy.
Ethics
- OECD AI Principles (OECD, 2019 (upd. 2024), May 2019) · Published · Responsible AI & Ethics · Roles: CGRCO, CDO, CAIO
Five principles referenced by 46 countries. Foundation for national AI policy.
- UNESCO Recommendation on the Ethics of AI (UNESCO, 2021, Nov 2021) · Published · Responsible AI & Ethics · Roles: CGRCO, CAIO
Global normative instrument on AI ethics adopted by 193 member states.
- IEEE 7000 Model Process for Addressing Ethical Concerns (IEEE, 2021, 2021) · Published · Responsible AI & Ethics · Roles: CGRCO, CAIO
Process model for integrating ethical considerations into system design.
- ISO/IEC TR 24368:2022 AI Ethical and Societal Concerns (ISO/IEC, TR 24368:2022, Aug 2022) · Published · ISO / IEC · Roles: CGRCO, CAIO
Overview of ethical and societal concerns related to AI systems.
Governance (GOV)
Frameworks focused on model management, oversight, and accountability.
Outcome: AI systems managed within risk tolerance.
General
- ISO/IEC 5338:2023 AI Lifecycle Processes (ISO/IEC, 5338:2023, Nov 2023) · Published · ISO / IEC · Roles: CDO, CRO, CIO, MLOps
AI system lifecycle management from conception to retirement.
- NIST AI RMF Playbook (NIST, 2023, Jan 2023) · Published · NIST AI · Roles: CRO, CGRCO, CAIO
Interactive implementation companion to AI RMF with suggested actions.
- A Plan for Global Engagement on AI Standards (NIST, AI 100-5, Jul 2024) · Published · NIST AI · Roles: CGRCO, CAIO
US strategy for AI standardization leadership and international coordination.
- AI RMF Generative AI Glossary (NIST, AI 100-6, Nov 2024) · Published · NIST AI · Roles: CRO, CGRCO, CDO, CAIO
500+ terms operationalizing AI RMF vocabulary for GenAI context.
Safety
- Singapore Model AI Governance Framework for Agentic AI (IMDA Singapore, v1.0, Jan 2026) · Published · Regulatory · Roles: CGRCO, CDO, CAIO
First government governance framework specifically for agentic AI.
- Cisco Responsible AI Framework (Cisco, 2022, 2022) · Published · Industry / Vendor · Roles: CGRCO, CAIO
Six principles with committee oversight and AI impact assessment process.
- Microsoft Responsible AI Standard v2 (Microsoft, v2.0, Jun 2022) · Published · Industry / Vendor · Roles: CGRCO, CDO, CAIO
Internal standard applied across Microsoft AI products and services.
Risk (RISK)
Frameworks focused on risk management: identification, measurement, mitigation, controls, and monitoring of AI systems.
Outcome: Residual risks measured and continuously monitored.
General
- SaferAI Frontier AI Risk Management Framework (SaferAI, 2024, 2024) · Published · Industry / Vendor · Roles: CRO, CISO, CAIO
Structured risk management methodology for frontier AI systems.
- ISO/IEC 23894:2023 AI Risk Management (ISO/IEC, 23894:2023, Feb 2023) · Published · ISO / IEC · Roles: CRO, CAIO
AI risk management processes aligned with ISO 31000.
- FAIR Factor Analysis of Information Risk (AI) (FAIR Institute, Ongoing, Ongoing) · Published · Security & Threat Intel · Roles: CRO, CISO, CAIO
Quantitative risk analysis methodology increasingly applied to AI.
- CRI Financial Services AI Risk Management Framework (Cyber Risk Institute, 2025, 2025) · Published · Industry / Vendor · Roles: CRO, CGRCO, CISO
230 control objectives for financial services, aligned to NIST AI RMF.
- ISO/IEC TR 24029-1:2021 Assessment of Robustness of Neural Networks (ISO/IEC, TR 24029-1, 2021) · Published · ISO / IEC · Roles: CDO, CISO, AI Red Team, MLOps
Methods for assessing robustness of neural networks.
- Challenges to the Monitoring of Deployed AI Systems (NIST, AI 800-4, Mar 2026) · Published · NIST AI · Roles: CRO, CISO, CDO, CIO, MLOps
Six monitoring categories for post-deployment AI systems.
Security
- OWASP Top 10 for LLM Applications (OWASP, v2.0, Nov 2025) · Published · Security & Threat Intel · Roles: CISO, AI Red Team
Top 10 critical vulnerabilities in LLM-based applications.
- OWASP AI Exchange (OWASP, Ongoing, Ongoing) · Published · Security & Threat Intel · Roles: CISO, AI Red Team
Comprehensive living catalog of AI threats, vulnerabilities, and controls.
- OWASP Top 10 for Agentic Applications 2026 (OWASP, v1.0, 2026) · Published · Security & Threat Intel · Roles: CISO, AI Red Team
Top 10 security risks for autonomous and agentic AI systems.
- MITRE ATLAS (Adversarial Threat Landscape for AI Systems) (MITRE, Ongoing, Ongoing) · Published · Security & Threat Intel · Roles: CISO, AI Red Team
ATT&CK-style knowledge base of adversarial tactics for AI systems.
- ENISA Threat Landscape 2025 (ENISA, v1.2, Oct 2025) · Published · Security & Threat Intel · Roles: CISO, AI Red Team
4,875 incidents analysed; AI as defining threat element.
- Databricks AI Security Framework v3.0 (Databricks, v3.0, Mar 2026) · Published · Industry / Vendor · Roles: CISO, CDO, AI Red Team, MLOps
97 risks, 73 controls across 13 AI system components incl. agentic AI.
- Google Secure AI Framework (Google, 2023, Jun 2023) · Published · Industry / Vendor · Roles: CISO, AI Red Team
Six core elements for securing AI systems across the lifecycle.
- OWASP AI Vulnerability Scoring System (OWASP, v0.8, 2026) · Draft · Security & Threat Intel · Roles: CISO, CRO, AI Red Team
Quantifiable scoring methodology for AI vulnerabilities.
- OWASP AI Testing Guide v1 (OWASP, v1.0, Nov 2025) · Published · Security & Threat Intel · Roles: CISO, CDO, AI Red Team, MLOps
First open standard for trustworthiness testing of AI systems.
- Adversarial Machine Learning: Taxonomy and Terminology (NIST, AI 100-2e2025, Mar 2025) · Published · NIST AI · Roles: CISO, AI Red Team
Taxonomy of evasion, poisoning, privacy, and abuse attacks for AI.
- SSDF Community Profile for AI Model Development (NIST, SP 800-218A, Nov 2024) · Published · NIST AI · Roles: CISO, MLOps
Secure development lifecycle practices for GenAI model development.
- CSA AI Controls Matrix (Cloud Security Alliance, 2024, 2024) · Published · Security & Threat Intel · Roles: CISO, CGRCO, CIO
Control matrix for securing AI in cloud environments.
- ENISA Multilayer Framework for Good Cybersecurity Practices for AI (ENISA, 2023, Jun 2023) · Published · Security & Threat Intel · Roles: CISO, CGRCO
Three-layer cybersecurity framework for AI systems.
- Cisco Integrated AI Security & Safety Framework (Cisco, 2026, Jan 2026) · Published · Industry / Vendor · Roles: CISO, AI Red Team
Unifies AI security and AI safety as complementary risk dimensions.
- NVIDIA AI Safety Recipe (NVIDIA, 2025, Jul 2025) · Published · Industry / Vendor · Roles: CISO, CDO, AI Red Team, MLOps
Comprehensive framework for trustworthy agentic AI systems.
- AIUC-1: AI Agent Standard (AIUC Consortium, v1.0, 2025) · Published · Industry / Vendor · Roles: CISO, CDO, AI Red Team, MLOps
First certifiable AI agent standard covering security, safety, reliability, and accountability.
Safety
- Cisco Integrated AI Security & Safety Framework (Cisco, 2026, Jan 2026) · Published · Industry / Vendor · Roles: CISO, AI Red Team
Unifies AI security and AI safety as complementary risk dimensions.
- NVIDIA AI Safety Recipe (NVIDIA, 2025, Jul 2025) · Published · Industry / Vendor · Roles: CISO, CDO, AI Red Team, MLOps
Comprehensive framework for trustworthy agentic AI systems.
- NIST Generative AI Profile (NIST, AI 600-1, Jul 2024) · Published · NIST AI · Roles: CRO, CISO, CAIO, AI Red Team
13 GenAI-specific risks with 400+ suggested actions.
- Managing Misuse Risk for Dual-Use Foundation Models (NIST, AI 800-1, Draft 2025) · Draft · NIST AI · Roles: CRO, CISO, CAIO
Framework for managing misuse risks of dual-use foundation models.
- Reducing Risks Posed by Synthetic Content (NIST, AI 100-4, Apr 2024) · Published · NIST AI · Roles: CRO, CGRCO, CAIO
Digital watermarking, content authentication for synthetic media.
- GenAI Pilot Study: Text-to-Text Evaluation (NIST, AI 700-1, Sep 2024) · Published · NIST AI · Roles: CDO, CAIO, MLOps
Measurement methodology for evaluating GenAI text outputs.
- Practices for Automated Benchmark Evaluations of Language Models (NIST, AI 800-2, Draft Jan 2026) · Draft · NIST AI · Roles: CDO, CISO, CAIO, MLOps, AI Red Team
Best practices for evaluating LLMs via automated benchmarks.
- AIUC-1: AI Agent Standard (AIUC Consortium, v1.0, 2025) · Published · Industry / Vendor · Roles: CISO, CDO, AI Red Team, MLOps
First certifiable AI agent standard covering security, safety, reliability, and accountability.
Compliance (COMP)
Regulatory instruments and controls frameworks that produce compliance evidence.
Outcome: Compliance documented with audit-ready evidence.
General
- EU AI Act (European Parliament, 2024 (phased), Aug 2024) · Mandatory · Regulatory · Roles: CGRCO, CRO, CIO, CAIO
World's first comprehensive AI law. Risk-based classification.
- GPAI Code of Practice (European Commission, Jul 2025, Jul 2025) · Published · Regulatory · Roles: CGRCO, CDO, CAIO
Voluntary tool for GPAI model providers to demonstrate AI Act compliance.
- ALTAI Assessment List for Trustworthy AI (European Commission, 2020, Jul 2020) · Published · Regulatory · Roles: CGRCO, CAIO
Self-assessment checklist for AI trustworthiness.
- ISO/IEC 42001 (as compliance instrument) (ISO/IEC, 42001:2023, Dec 2023) · Published · ISO / IEC · Roles: CGRCO, CIO
Certifiable; serves as compliance evidence when audited.
Security
- ISO/IEC 27001:2022 Information Security Management (ISO/IEC, 27001:2022, Oct 2022) · Published · ISO / IEC · Roles: CISO, CGRCO, CIO
Certifiable ISMS. Baseline for AI security controls.
- NIST Cybersecurity Framework Profile for AI (NIST (CAISI), IR 8596, Draft Dec 2025) · Draft · NIST AI · Roles: CISO, AI Red Team
Maps CSF 2.0 to AI with Secure/Defend/Thwart structure.
- SP 800-53 Control Overlays for Securing AI Systems (NIST (CAISI), Expected 2026, Expected 2026) · Draft · NIST AI · Roles: CISO, CGRCO
Mapping SP 800-53 security controls to AI use cases.
- Dioptra AI Test Platform (NIST, Ongoing, Ongoing) · Published · NIST AI · Roles: CISO, CDO, AI Red Team, MLOps
Open-source platform for testing AI system resilience.
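The "trace how documents reference each other" feature amounts to a small directed graph over catalog entries. The sketch below is a hedged illustration, not the tool's implementation: the edge list is taken from alignment notes in the catalog above (e.g. the CRI framework is "aligned to NIST AI RMF", ISO/IEC 23894 is "aligned with ISO 31000", NIST IR 8596 "maps CSF 2.0 to AI"), and the short document labels and function name are assumptions.

```python
# Reference edges drawn from alignment notes in the catalog; labels are
# illustrative shorthand, not official document identifiers.
REFERENCES = {
    "CRI FS AI RMF": ["NIST AI RMF"],         # "aligned to NIST AI RMF"
    "NIST AI RMF Playbook": ["NIST AI RMF"],  # implementation companion
    "NIST GenAI Profile": ["NIST AI RMF"],    # profile of the AI RMF
    "ISO/IEC 23894": ["ISO 31000"],           # "aligned with ISO 31000"
    "NIST IR 8596": ["NIST CSF 2.0"],         # "maps CSF 2.0 to AI"
}

def trace(doc: str, graph: dict = REFERENCES) -> list[str]:
    """Follow reference edges transitively from one document."""
    seen, stack = [], [doc]
    while stack:
        current = stack.pop()
        for target in graph.get(current, []):
            if target not in seen:
                seen.append(target)
                stack.append(target)
    return seen

print(trace("CRI FS AI RMF"))  # → ['NIST AI RMF']
```

A dictionary of edge lists is enough here because the reference graph is sparse and traversal only ever walks outward from one document at a time.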
Frequently asked questions
What is an AI governance framework?
A structured set of principles, processes, and controls that guide how an organization develops, deploys, and operates AI systems. Governance frameworks define who makes decisions, how risk is managed, and how accountability is maintained. Examples include NIST AI RMF and ISO/IEC 42001.
How do NIST AI RMF and ISO 42001 differ?
NIST AI RMF is a voluntary risk management framework organized around four functions (Govern, Map, Measure, Manage). It provides guidance but is not certifiable. ISO/IEC 42001 is a certifiable management system standard that defines requirements for establishing and maintaining an AI management system. Organizations often use both: AI RMF for risk methodology and ISO 42001 for auditable governance structure.
Which AI frameworks matter for CISOs?
CISOs should focus on threat intelligence and security controls: OWASP Top 10 for LLM Applications, OWASP AI Exchange, MITRE ATLAS, Databricks DASF v3.0, and NIST AML Taxonomy (AI 100-2) for risk identification. OWASP AI Testing Guide and AIVSS for evaluation and scoring. NIST Cyber AI Profile (IR 8596), CSA AI Controls Matrix, and ISO 27001 for compliance evidence.
Which AI frameworks matter for compliance teams?
Compliance teams should start with the regulatory instruments: EU AI Act, GPAI Code of Practice, and any sector-specific guidance relevant to their jurisdiction. For internal compliance structure, ISO/IEC 42001 (certifiable management system) and ISO 27001 (information security baseline). For compliance evidence, NIST Cyber AI Profile (IR 8596) and COSAiS (SP 800-53 AI overlays) map security controls to audit-ready documentation.
How do AI safety frameworks differ from AI security frameworks?
Security frameworks protect AI systems from external threats: adversarial attacks, data poisoning, prompt injection, unauthorized access. They answer the question "how do we defend the system?" Safety frameworks ensure AI systems behave reliably and avoid causing harm: hallucination, bias, misuse, harmful outputs. They answer the question "how do we trust the system's behaviour?" Some frameworks span both. In this navigator, security and safety have separate columns because they require different expertise, different tools, and often different teams.
Selection methodology
Documents are selected against five filters: operational relevance to enterprise AI deployers, institutional authority or widespread adoption, functional coverage across the GRC operating model, geographic relevance (US, EU, GCC, Singapore, Korea), and currency or active maintenance. Pure research papers, vendor marketing, and superseded standards are excluded.
The navigator is a living reference. New documents are added as they are published. Existing entries are updated when new versions are released.