Our Customer Just Asked About Our AI Governance. We Don't Have Any.
Twelve months ago, no one was asking. Today, three questions are appearing in enterprise security questionnaires and investor due diligence packs with increasing regularity: How do you govern the AI models embedded in your product? What framework do you use to assess AI risk, bias, and model drift? Are you aligned with ISO 42001 or preparing for EU AI Act obligations? If you are building or deploying AI in an enterprise-facing product, you will get these questions. The only question is whether you get them from a buyer you are trying to close, an investor conducting diligence, or a regulator issuing a notice. In all three cases, "we haven't thought about it yet" is the wrong answer.
ISO/IEC 42001:2023 is the first international management system standard for Artificial Intelligence. Published in December 2023, it specifies requirements for establishing, implementing, maintaining and continually improving an AI Management System (AIMS). In structure it is closely modelled on ISO 27001 (the same management system backbone, the same clause architecture, the same Stage 1 and Stage 2 audit pathway), but the controls address AI-specific risks: model governance, data quality, transparency, human oversight, bias and fairness, lifecycle management, and impact assessment. If you already hold ISO 27001, ISO 42001 is an extension, not a second certification from scratch.
Three converging pressures explain why buyers are asking now. First, the EU AI Act entered into force in August 2024, with application dates staged from 2025 through 2027. Enterprise buyers with any EU operations are starting to screen their AI suppliers for EU AI Act alignment, and ISO 42001 is the most efficient evidence of alignment. Second, the UK government's AI Regulation White Paper and subsequent AI Bill consultation have pushed AI governance up the risk register at every regulated UK business. Third, boards and audit committees at larger enterprises now require AI risk to be tracked alongside cyber risk, and the only practical way for a procurement function to evidence AI risk management across a supplier base is to ask for certification.
Based on engagements we have seen in the last six months, the buyers driving these questions are concentrated in four segments: financial services buyers under FCA, EBA, or DORA supervision who now treat AI as a critical third-party risk; healthcare and life sciences buyers who face MHRA and MDR scrutiny on AI-enabled clinical decision support; large enterprise legal, HR, and recruitment functions deploying AI into decision workflows with discrimination exposure; and Series B and later investors whose LPs are requiring AI governance evidence as a portfolio-level diligence item.
Buyers asking about AI governance are not expecting a five-year certification programme. They are looking for evidence of four things, in decreasing order of importance. First, an accountable owner for AI risk: someone named, with a defined remit. Second, a documented AI inventory: which models you run, what data trains them, where they are deployed, what decisions they influence, and who the human in the loop is. The absence of an inventory is the single most common failure point we see on diligence calls. Third, a framework the buyer recognises, with ISO 42001 being the most defensible answer because it is the international standard. Fourth, impact assessments for high-risk AI use cases, especially if any models touch decisions about access to credit, employment, housing, insurance, healthcare, or legal outcomes.
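To make the inventory requirement concrete, here is a minimal sketch of what one inventory record might capture. The field names and the example model are illustrative assumptions, not prescribed by ISO 42001; map them onto whatever register or document-control system you already run.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One row in an AI inventory: the minimum a diligence call will probe."""
    model_name: str                 # which model you run
    training_data: str              # what data trains it
    deployment: str                 # where it is deployed
    decisions_influenced: list[str] = field(default_factory=list)
    human_in_the_loop: str = ""     # a named role, not a team alias
    high_risk: bool = False         # flags the entry for an impact assessment

# Hypothetical entry for a CV-screening model (an employment decision,
# so it is flagged high-risk and needs an impact assessment).
entry = AIInventoryEntry(
    model_name="cv-screening-v2",
    training_data="historic applicant CVs, 2019-2023",
    deployment="recruitment pipeline, EU and UK tenants",
    decisions_influenced=["shortlisting for interview"],
    human_in_the_loop="Hiring manager (final reviewer)",
    high_risk=True,
)
```

An auditor tracing a model end-to-end will effectively walk one of these records: from training data, through deployment, to the named human reviewer. If any field is blank, that is the gap they write up.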
For a company already holding ISO 27001, ISO 42001 is a 10 to 14 week extension. The AIMS inherits the management system backbone from the existing ISMS (document control, management review, internal audit, corrective action), and you add the AI-specific controls on top. For a company without ISO 27001, doing both in parallel is often cleaner and cheaper than sequencing them; a combined 14 to 18 week programme is realistic. Two things you cannot shortcut: the AI impact assessment methodology needs to be genuinely applied to your real use cases, not a template with your logo on it, and the AI inventory has to be complete, because auditors will ask to trace a specific model end-to-end.
If an enterprise buyer or investor has asked you about AI governance in the last fourteen days, you have three priorities this week. One: buy yourself calendar time by confirming to the buyer that you are commencing an ISO 42001 readiness assessment with a named implementation partner. Do not claim certification; claim the project. Two: book a gap analysis to get an AI inventory, an initial risk register, and a certification timeline you can share under NDA. Three: decide on scope, whether ISO 42001 alone is sufficient or whether a combined ISO 27001 plus ISO 42001 programme is the right answer for your commercial position. The companies that act on this question in the next six months will own the category. The companies that wait will be answering procurement questionnaires against competitors who already hold the certificate.
Ready to get certified?
Book a free consultation to discuss your certification needs. Our team will assess your current position and recommend the fastest path to compliance.