Why ISO 42001 is suddenly on everyone's roadmap
ISO 42001 is the first international standard specifically for AI management systems. Published in late 2023. Started showing up in enterprise procurement questionnaires in 2024. By 2025, a significant fraction of regulated buyers were asking for it or for a credible plan to get there.
The pressure has two compounding sources. First, enterprise buyers increasingly require it. Second, the EU AI Act's enforcement window for high-risk AI systems lands in August 2026, and ISO 42001 is widely treated as the management-system framework that can help demonstrate systematic compliance with Articles 9 through 15 (risk management, data governance, technical documentation, human oversight, accuracy). Organizations inside or selling into the EU have a concrete timeline now.
If you ship AI features to enterprise customers, this is probably on your roadmap already. If it is not, it will be by the time a large customer's procurement team gets involved.
The good news: it is less scary than it sounds. The standard is written in the same structural style as ISO 27001. If you have done SOC2 or ISO 27001, you know most of the patterns. The differences are in what the standard focuses on, not how it works.
This post is the practical version. What ISO 42001 actually requires, what it overlaps with, and what the work looks like for an engineering team.
The one-paragraph summary of the standard
ISO 42001 requires an organization to establish, implement, maintain, and continually improve an AI Management System, or AIMS. The AIMS is a set of policies, processes, and controls that govern how the organization develops, deploys, and operates AI systems. The standard does not tell you which technical controls to implement. It tells you that you need a system for deciding which controls to implement, documenting those decisions, and reviewing them.
This is the pattern from ISO 27001 applied to AI.
The four things an auditor will check
Strip away the clause numbering and the standard reduces to four areas.
1. Context and scope. The organization has identified which AI systems it builds or uses, what they do, who is affected, and what the risks are. This is not a paragraph. It is a structured inventory, usually called an AI system register, that gets reviewed and updated.
2. Leadership and policies. There are written policies for AI development, use, and oversight. Leadership has formally approved them. Someone owns the AIMS with defined accountability.
3. Risk and impact assessments. For each AI system in scope, there is a risk assessment and, where relevant, an AI impact assessment. These cover technical risks (model failure modes), operational risks (bad decisions in production), and societal risks (bias, harm to users, harm to third parties).
4. Operational controls. The technical and procedural controls that manage the identified risks. Things like data governance, model validation, human oversight, monitoring, incident response.
An audit walks through each of these, checks that the documentation exists, checks that it is being followed in practice, and checks that the organization is continuously improving the system.
How it overlaps with SOC2 and NIST AI RMF
Useful if you already have one or both.
SOC2. ISO 42001 overlaps with SOC2 on the common management system patterns. Access controls, change management, incident response, vendor management. If your SOC2 controls are in good shape, a large chunk of ISO 42001 is already done. The ISO 42001-specific additions are the AI system register, model validation, human oversight, and AI-specific risk assessment.
NIST AI RMF. This is a risk management framework, not a standard you can be certified against. But the structure maps cleanly to ISO 42001's risk assessment requirements. If you have done the NIST AI RMF work, you have most of the material for ISO 42001's risk assessments. You just need to wrap the rest of the management system around it.
A team with SOC2 Type II and the NIST AI RMF work done typically has 60 to 70 percent of ISO 42001 in place, just not documented in the ISO 42001 structure.
The AI system register
The first concrete deliverable. A structured inventory of every AI system in scope, with:
- System identifier and owner.
- Purpose. What it does.
- Data inputs. Where they come from, classification level.
- Data outputs. Who sees them, how they are used.
- Models used. Vendor, version, deployment mode.
- Risk classification. How the organization assesses the risk level.
- Relevant impact assessments.
- Current control set.
- Review cadence.
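The register does not need tooling, but keeping it machine-checkable makes the review cadence enforceable rather than aspirational. A minimal sketch, with field names taken from the list above (the names and the `overdue` helper are illustrative, not anything the standard prescribes):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    """One row of the AI system register. Field names are illustrative."""
    system_id: str
    owner: str
    purpose: str
    data_inputs: list[str]       # sources and classification level
    data_outputs: list[str]      # who sees them, how they are used
    models: list[str]            # vendor, version, deployment mode
    risk_class: str              # e.g. "low" | "medium" | "high"
    impact_assessments: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)
    review_cadence_days: int = 90

def overdue(entries: list[AISystemEntry],
            days_since_review: dict[str, int]) -> list[str]:
    """Systems whose last review is older than their stated cadence."""
    return [e.system_id for e in entries
            if days_since_review[e.system_id] > e.review_cadence_days]
```

Run `overdue` from a scheduled job and you have the "living documentation" property for free: stale entries surface on their own instead of at audit time.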
For a typical B2B SaaS shipping AI features, this might have five to twenty entries. For a larger organization with internal AI tools plus customer-facing features, it can be fifty to a hundred.
Maintain it as living documentation. In practice, a structured Notion or Confluence page works. A spreadsheet works. The format is less important than the maintenance.
Impact assessments
The piece most teams have not done. For each AI system, an assessment of its potential impact on users, third parties, and society.
For a customer-facing AI feature, typical topics:
- Who uses the feature.
- Who is affected by the feature beyond the direct user. Customer's customers, employees, contractors.
- What decisions the feature influences. Automated decisions versus human-in-the-loop.
- Known failure modes. Hallucination, bias in outputs, performance gaps across populations.
- Mitigations. Eval coverage, human review, feedback mechanisms.
For most product features this is a one to three page document. It gets updated when the feature changes significantly. Auditors will want to see that the updates actually happen.
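One way to make "the updates actually happen" checkable is to treat the assessment's required sections as data and fail CI, or a review checklist, when one is empty. A hypothetical sketch; the section names mirror the bullets above and are not defined by the standard:

```python
# Required sections of an impact assessment, per our (illustrative) template.
REQUIRED_SECTIONS = {"users", "affected_parties", "decisions",
                     "failure_modes", "mitigations"}

def missing_sections(assessment: dict[str, str]) -> set[str]:
    """Return the required sections that are absent or left empty."""
    filled = {name for name, body in assessment.items() if body}
    return REQUIRED_SECTIONS - filled
```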
The operational controls an audit looks for
Specific controls that show up across ISO 42001 audits:
Data governance for training and evaluation. Where training data comes from, what rights you have to use it, how you handle sensitive data, how you manage labeled data quality.
Model validation. Before deployment, the model has been tested against a defined eval suite. Results are documented. Regressions block deployment.
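The "regressions block deployment" requirement reduces to a gate that compares candidate eval results against a recorded baseline. A minimal sketch, assuming higher metric values are better and a fixed tolerance (both assumptions; the metric names below are made up):

```python
def validation_gate(baseline: dict[str, float],
                    candidate: dict[str, float],
                    tolerance: float = 0.02):
    """Return (passed, regressions).

    A metric regresses if the candidate score falls more than `tolerance`
    below the baseline. Missing metrics count as regressions.
    """
    regressions = {
        name: (baseline[name], candidate.get(name, 0.0))
        for name in baseline
        if candidate.get(name, 0.0) < baseline[name] - tolerance
    }
    return (not regressions, regressions)
```

Wire the returned `regressions` dict into the deployment pipeline's failure message and you get the documentation trail the auditor asks for as a side effect of the gate itself.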
Human oversight. Where humans can review, override, or correct AI outputs. Documented at the system level. Training for the humans doing the oversight.
Monitoring in production. Dashboards for model performance, output distributions, error rates. Alerting when metrics drift.
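The alerting half of this control can start very small. A sketch of a rolling error-rate check; the class name, window, and threshold are illustrative, and in practice this would feed whatever alerting stack you already run:

```python
from collections import deque

class ErrorRateMonitor:
    """Fire an alert when the error rate over the last `window`
    outputs exceeds `threshold`. Deliberately minimal."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.events = deque(maxlen=window)
        self.threshold = threshold

    def record(self, is_error: bool) -> bool:
        """Record one output; return True if the alert should fire."""
        self.events.append(is_error)
        rate = sum(self.events) / len(self.events)
        # Only alert once the window is full, to avoid noisy startup.
        return len(self.events) == self.events.maxlen and rate > self.threshold
```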
Incident response. A runbook for AI-specific incidents. Hallucination producing harmful output. Model producing biased outputs. Prompt injection succeeding. Data leak through output.
Vendor management. For every third party model or AI service you use, a review of their controls, data handling, and risk posture. The same discipline you apply to data processors under GDPR or subprocessors under SOC2.
Training and awareness. The people building and operating AI systems have received training on the risks. This is typically a short module in your existing security awareness program plus a more substantial training for the engineers doing the direct work.
Who can certify you
A note on certification bodies. ISO 42001 is new, and accredited certification bodies are still ramping up. ANAB (the US accreditation body) and other national accreditation bodies are actively accrediting certifiers, and a companion standard, ISO/IEC 42006, specifies the requirements for bodies providing AI management system audits and certification. This means the certification landscape is still settling. When choosing a certifier, check their current accreditation status and their experience with the standard, not just their reputation for ISO 27001.
A realistic timeline
For a team with SOC2 already done:
Month one. AI system register built. Policies drafted. Gap analysis against the standard.
Month two. Risk assessments completed for every system in scope. Impact assessments for customer-facing systems. Policy gaps closed.
Month three. Operational controls audited internally. Remediation of gaps. Training delivered.
Month four. Readiness assessment with the auditor. Final remediation.
Month five. Stage 1 audit.
Month six. Stage 2 audit. Certification.
Faster is possible if the team is already mature. Longer is typical if SOC2 was superficial or if the AI system inventory is larger than expected.
What to do first
Two things.
Build the AI system register. Before you touch the standard, build the inventory. Even without the standard, this is useful. It is often the first time a team has written down what AI they are actually running.
Read the standard. The full text is about fifty pages. Read it once with a highlighter. Most of the clauses will feel familiar if you have done ISO 27001. The clauses that feel new are the ones you need to focus on.
Then decide if a certification is the right next step, or if implementing the AIMS without certification is enough for your current customers and regulators. Not every team needs the certification. Every team shipping AI should be doing the underlying work regardless.
Kai Token leads AI governance work at Fraktional. Has taken teams from "we should probably think about AI compliance" to ISO 42001 certification readiness. Thinks the AI system register is the highest leverage document you will write this year.