AI Safety Organizations Atlas

This interactive map catalogs 57+ organizations across 12 countries that shape AI standards, evaluate frontier model risks, publish security frameworks, and enforce emerging regulation. It is designed as a reference tool for Chief Risk Officers, compliance teams, AI governance professionals, and policymakers navigating the fragmented global landscape.
The directory covers organizations from government standards bodies like NIST and ISO to technical safety research labs like METR and Apollo Research, from EU AI Act enforcement bodies to GCC regional AI authorities. Each entry includes the organization's type, primary focus, key outputs, and geographic scope.
Built and maintained by MindXO.

*Last updated: February 2026*

What This Directory Covers

This database classifies AI governance and safety organizations into 12 categories:

**Standards Bodies** — Organizations that publish technical standards for AI systems. Includes NIST (AI Risk Management Framework 1.0, AI 600-1 GenAI profile), ISO/IEC (ISO 42001 AI Management System, ISO 23894 Risk Management), IEEE (P7000 series on ethical AI), and CEN-CENELEC JTC 21 (developing EU AI Act harmonized standards with 300+ experts).
**Government AI Safety Institutes** — National institutions established to evaluate frontier AI capabilities and risks. The UK AI Security Institute (formerly the AI Safety Institute, renamed February 2025) developed the open-source Inspect evaluation framework. The US Center for AI Standards and Innovation (CAISI, renamed from the US AI Safety Institute in June 2025) operates under NIST. Other national safety institutes operate in Japan, Singapore (AISIS, which maintains the AI Verify testing toolkit), South Korea (KAISI), Canada, and France (through Inria).
**Security-Focused Organizations** — Bodies addressing adversarial AI threats and application security. OWASP publishes the Top 10 for LLM Applications. MITRE maintains ATLAS (Adversarial Threat Landscape for AI Systems) and ATT&CK frameworks. CISA provides AI security guidance for critical infrastructure.
**Technical Safety Research** — Independent nonprofits conducting frontier AI safety research. METR (Model Evaluation and Threat Research) specializes in autonomous capability evaluations and time-horizon benchmarks. CAIS (Center for AI Safety) funds compute grants and published the Statement on AI Risk. MIRI focuses on foundational alignment theory. Apollo Research detects deception and scheming in frontier models. LawZero, founded by Yoshua Bengio in June 2025 with $30M, develops safe-by-design AI architectures. CeSIA (France) maintains the AI Safety Atlas and BELLS benchmark.
**Governance & Policy Research** — Academic and nonprofit institutions producing AI governance research and policy analysis. GovAI (Oxford) publishes working papers and runs fellowships. Stanford HAI produces the annual AI Index Report. CSET (Georgetown) provides nonpartisan analysis on AI and national security. The Ada Lovelace Institute and Alan Turing Institute (UK) focus on AI rights, ethics, and trustworthy AI.
**Intergovernmental & Multilateral Bodies** — The OECD AI Policy Observatory maintains the OECD AI Principles adopted by 46+ countries. UNESCO published the Recommendation on the Ethics of AI in 2021 (194 member states). The Council of Europe adopted the Framework Convention on AI in 2024 — the first legally binding international AI treaty. The WEF AI Governance Alliance coordinates 600+ members on responsible AI adoption.
**EU AI Act Governance Bodies** — The European AI Office (established 2024) enforces GPAI model obligations and developed the GPAI Code of Practice. The EU AI Scientific Panel (Article 68) comprises 60 independent experts assessing systemic risks from general-purpose AI models. ECAT (European Centre for Algorithmic Transparency), based at the JRC in Seville, supports algorithmic auditing under both the DSA and AI Act.
**Industry Consortia** — The Frontier Model Forum coordinates safety practices among leading AI developers. AI Verify Foundation (Singapore) maintains open-source governance testing tools. MLCommons publishes AI Safety Benchmarks (v0.5) alongside MLPerf performance benchmarks.
**Civil Society & Advocacy** — Organizations like the Algorithmic Justice League (AI bias research), Electronic Frontier Foundation (digital rights), and All Tech Is Human (responsible tech ecosystem mapping).
**Professional & Audit Bodies** — IAPP operates the AI Governance Center and professional certifications. ISACA integrates AI into COBIT governance frameworks. ForHumanity develops independent audit certification schemes aligned with the EU AI Act.
**AI Risk Databases** — The MIT AI Risk Repository (v4, December 2025) catalogs 1,700+ risks from 65+ frameworks using a dual Causal Taxonomy and Domain Taxonomy. The AI Incident Database (AIID) crowdsources 1,200+ real-world AI incident reports.
**GCC & Regional** — Smart Dubai / Dubai AI Office published AI Ethics Guidelines. SDAIA (Saudi Arabia) maintains the national AI Ethics Framework. The Digital Cooperation Organization coordinates AI governance across 15 GCC-led member states. Abu Dhabi Digital Authority provides government AI governance guidelines.

Methodology

Organizations were identified through systematic research across government publications, standards body registries, academic databases, and AI safety community resources. Inclusion criteria: the organization must (a) be a recognized entity (not an individual researcher), (b) produce outputs that directly influence AI governance, safety evaluation, risk management, or regulatory enforcement, and (c) operate at a national or international level. Commercial AI vendors and for-profit consultancies are excluded.
Classification uses a taxonomy of 12 organizational types, 11 focus areas, and geographic scope designations. The interactive map geocodes organizations to their headquarters or primary operating location.
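The classification scheme described above could be represented as a simple record per organization. The sketch below is purely illustrative: the field names, the `OrgEntry` type, and the sample values are assumptions for exposition, not the directory's actual data model.

```python
from dataclasses import dataclass

# The 12 organizational types described in "What This Directory Covers"
ORG_TYPES = {
    "Standards Body", "Government AI Safety Institute", "Security-Focused",
    "Technical Safety Research", "Governance & Policy Research",
    "Intergovernmental & Multilateral", "EU AI Act Governance",
    "Industry Consortium", "Civil Society & Advocacy",
    "Professional & Audit Body", "AI Risk Database", "GCC & Regional",
}

@dataclass
class OrgEntry:
    name: str
    org_type: str        # one of ORG_TYPES
    focus: list          # focus areas (the directory uses an 11-area taxonomy)
    key_outputs: list    # frameworks, benchmarks, reports, etc.
    scope: str           # e.g. "national" or "international"
    hq: tuple            # (latitude, longitude) of headquarters, for the map

    def __post_init__(self):
        # Reject entries outside the 12-type taxonomy
        if self.org_type not in ORG_TYPES:
            raise ValueError(f"unknown organization type: {self.org_type}")

# Example entry (coordinates approximate: NIST, Gaithersburg, MD)
nist = OrgEntry(
    name="NIST",
    org_type="Standards Body",
    focus=["risk management", "standards"],
    key_outputs=["AI Risk Management Framework 1.0", "NIST AI 600-1"],
    scope="national",
    hq=(39.14, -77.22),
)
```

Validating the type field at construction time keeps every map pin classifiable into exactly one of the 12 categories.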

Learn more

Let’s talk governance and risk.
Whether you’re defining your AI strategy, strengthening governance, or operationalizing risk through tooling, we’ll help you take the right next step.

→ Get in touch

FAQs

Here are some of the most common questions we get. If you're wondering about something else, just reach out: we're all ears.

Are you vendor-agnostic, or do you work with specific tech providers?
Absolutely vendor-agnostic. We help you select the best-fit tools and platforms for your needs, not ours. Our frameworks are designed to be modular and compatible with AWS, Azure, G42, Google Cloud, and more.
Do you have experience working in the GCC region?
Yes, this is our home turf. We’ve advised government entities, telcos, and enterprises across the GCC for over a decade, with deep understanding of regional goals, digital policies, and AI ambitions.
How do you handle data residency and compliance in the GCC?
Compliance is built-in, not bolted on. We help you navigate and meet local requirements from data localization and classification to national cloud compliance (e.g., G42 RTE, CITRA, NDMO). We embed secure-by-design practices into every stack.
Can you work with our existing infrastructure?
Yes: there's no need to rip and replace. Our approach is integration-first. Whether you use legacy systems or modern cloud stacks, we design workflows that bridge what you already have.
How long does it take to see results?
Depending on the scope, early value can be realized in as little as 6–12 weeks through quick wins, roadmap clarity, or systems integration. We also support longer-term transformation programs.
→ See more Q&As