
ITIL 5 launched in January 2026, and for the first time in the framework's 40-year history, AI governance is front and center. If you're running incident management, on-call rotations, or building operational tooling, this matters: the gap between AI adoption and AI governance is about to become a compliance and operational risk issue.
I'm not usually a big ITIL fan, but this guidance has some genuinely useful framing and questions. Your mileage may vary, but I suspect there's something helpful here regardless of how you feel about ITIL.
Here's what you need to know.
ITIL 4 was released in 2019, before ChatGPT, before AI agents became mainstream, and before every operations team started experimenting with LLMs to write postmortems and triage alerts. The world has changed (quite a lot!), and ITIL 5 is the framework's response.
The most significant addition is a dedicated AI Governance extension module, which is the only extension module in the entire ITIL 5 qualification scheme. It's focused on helping organizations adopt AI "responsibly, ethically, and compliantly," covering risk management, transparency, accountability, and regulatory compliance.
This isn't just theoretical. The framework acknowledges a stark reality: while 90% of organizations now use AI in daily operations, only 18% have a fully implemented AI governance framework.
For AI agents specifically, 82% of organizations are using them, but only 44% have policies in place to secure them.
That's a governance gap waiting to become an incident.
ITIL 5 makes a clear argument: traditional IT governance frameworks weren't designed for AI's unique challenges, and the framework identifies four specific problems with stretching them to cover AI.
For incident management specifically, these challenges show up in concrete ways. When an AI agent suggests a remediation action during an incident, who's accountable if it makes things worse? When an AI summarizes an incident for stakeholders, how do you ensure it's not hallucinating details? When automation takes an action based on pattern matching, what's the audit trail?
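To make the audit trail question concrete, here's a minimal sketch of what recording AI actions during an incident could look like. Every name in it (the record fields, the agent, the action) is an illustrative assumption, not something ITIL 5 or any particular tool prescribes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: a minimal audit record for an AI action during an
# incident. Field names are assumptions, not ITIL 5 or product terminology.
@dataclass
class AIActionRecord:
    incident_id: str
    agent: str               # which AI system acted, e.g. "triage-assistant"
    action: str              # what it did, e.g. "suggested_remediation"
    inputs_summary: str      # what data it reasoned over (logs, alerts, ...)
    confidence: float        # the agent's self-reported confidence, if any
    approved_by: str | None  # the human who signed off, or None if autonomous
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Every suggestion or action appends a record, so "what did the AI do, based
# on what, and who signed off?" stays answerable after the incident.
audit_log: list[AIActionRecord] = []
audit_log.append(AIActionRecord(
    incident_id="INC-1234",
    agent="triage-assistant",
    action="suggested_remediation",
    inputs_summary="error-rate alerts + last 15m of payment-service logs",
    confidence=0.72,
    approved_by="on-call engineer",
))
```

The exact fields matter less than the habit: every AI suggestion or action leaves a record a human can reconstruct after the fact.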
ITIL 5 proposes that organizations need to extend their governance across four perspectives:
Decision authority and risk management: Who has authority over AI decisions? How are risks identified, assessed, and mitigated? In incident management terms: who approves an AI agent's ability to page someone, run a remediation script, or communicate with customers?
Ethical principles and responsible AI: How do you ensure AI use aligns with organizational values? This includes bias testing and fairness, which is relevant when AI is prioritizing incidents or suggesting who should respond.
Data governance and performance management: What data is the AI trained on? How do you monitor its performance and catch drift? For incident tooling: is your AI learning from your actual incidents, and how do you know it's improving rather than degrading? (There's a sketch of one way to check after this list.)
Regulatory compliance and operational standards: What regulations apply? The EU AI Act classifies AI systems by risk level, and AI that affects critical infrastructure decisions could fall into higher-risk categories.
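Of the four, data governance and performance management is the easiest to make concrete. Here's a minimal sketch of one way to catch degradation, assuming your tooling records whether responders accepted each AI suggestion; the window size and threshold are made-up numbers, not recommendations:

```python
from collections import deque

# Illustrative drift check: a rolling window of whether humans accepted the
# AI's suggestions. A falling acceptance rate is a cheap proxy for the model
# degrading (or the world drifting away from what it learned on).
class SuggestionQualityMonitor:
    def __init__(self, window: int = 200, alert_below: float = 0.5):
        self.outcomes = deque(maxlen=window)  # True = suggestion accepted
        self.alert_below = alert_below

    def record(self, accepted: bool) -> None:
        self.outcomes.append(accepted)

    def acceptance_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def is_degrading(self) -> bool:
        # Only alert once there are enough samples to mean something.
        return len(self.outcomes) >= 50 and self.acceptance_rate() < self.alert_below
```

A falling acceptance rate won't tell you why the AI is getting worse, but it's a cheap, auditable signal that it's time for a human to look.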
Everybody loves a forced alliterative framework to land a point, and ITIL 5 introduces the 6C model to help organizations categorize their AI use cases. The six capabilities are Curation, Cognition, Clarification, Creation, Communication, and Coordination.
Each capability carries different governance requirements. An AI that curates alerts needs different oversight than one that coordinates incident response actions.
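One way to operationalize that difference is to make the capability-to-oversight mapping explicit, so your tooling can consult it before an AI acts. A sketch, where the oversight levels are my assumption rather than ITIL 5 terminology:

```python
# Illustrative mapping from 6C capability to oversight level. The levels
# ("log_only", "human_review", "human_approval") are assumptions, not an
# ITIL 5 taxonomy; the point is that the mapping is explicit and auditable.
OVERSIGHT_BY_CAPABILITY = {
    "curation":      "log_only",        # filtering and grouping alerts
    "cognition":     "log_only",        # analyzing signals, forming hypotheses
    "clarification": "human_review",    # explaining incidents to responders
    "creation":      "human_review",    # drafting summaries and updates
    "communication": "human_review",    # messages that reach stakeholders
    "coordination":  "human_approval",  # paging, escalating, running automations
}

def required_oversight(capability: str) -> str:
    # Fail closed: anything unclassified gets the strictest treatment.
    return OVERSIGHT_BY_CAPABILITY.get(capability, "human_approval")
```

Failing closed is the design choice that matters here: a new AI feature defaults to the strictest oversight until someone deliberately classifies it.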
If you're building or buying AI-powered incident tooling, ITIL 5 provides a framework for thinking about governance that goes beyond "does it work?"
AI agents that assist with debugging (think analyzing logs, suggesting hypotheses, and correlating signals, like our AI SRE) fall primarily into the Cognition and Clarification capabilities. The governance questions to ask here are the transparency and data ones: what did the agent look at, and can it explain how it reached its conclusions?
Turning incident details into clear communications for stakeholders is a Creation and Communication task. The main governance consideration is accuracy: how do you catch hallucinated details before they reach stakeholders?
Orchestrating incident response (assigning responders, escalating, triggering automations) is Coordination. This is where governance matters most: these actions have direct operational impact, so decision authority, approval, and audit trails all need to be explicit.
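A minimal sketch of what that could look like: high-impact Coordination actions are blocked until a human signs off, while lower-impact ones proceed but still get logged. Which actions count as high-impact, and how approval is collected, are assumptions for illustration:

```python
# Illustrative approval gate for Coordination actions. The action names and
# the approval mechanism are assumptions, not from ITIL 5 or any product.
HIGH_IMPACT_ACTIONS = {"run_remediation_script", "notify_customers", "restart_service"}

def execute_ai_action(action: str, ask_human_approval) -> str:
    """Run an AI-proposed action, requiring human sign-off for high-impact ones."""
    if action in HIGH_IMPACT_ACTIONS:
        if not ask_human_approval(action):
            return f"blocked: {action} rejected by approver"
        # Record the approver's identity in the audit trail here.
    # Lower-impact actions (e.g. posting an internal update) proceed, but
    # still get logged so the audit trail stays complete.
    return f"executed: {action}"

# Example: wire ask_human_approval to a real flow (a Slack prompt, a ticket).
print(execute_ai_action("run_remediation_script", ask_human_approval=lambda a: False))
```

The accountability answer becomes structural: nothing high-impact happens without a named approver.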
ITIL 5 outlines a practical path forward:
1. Stress-test current frameworks. Take your existing incident management policies and ask: do they account for AI decision-making? Most runbooks and escalation policies were written assuming humans in the loop at every step.
2. Define AI-specific requirements. Identify the gaps. Where does your current governance assume human judgment that AI is now providing? Document what oversight AI-driven decisions need.
3. Design governance adjustments. Extend your controls rather than rebuilding from scratch. Add approval workflows for AI actions, define confidence thresholds, and establish audit logging requirements (there's a sketch of the threshold idea after this list).
4. Operate governance dynamically. This isn't a one-time exercise. As AI capabilities evolve and regulations change, governance needs continuous monitoring and adaptation.
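To make the confidence threshold idea from step 3 concrete: it's most useful as an explicit, reviewable number rather than a judgment buried in a prompt. A minimal sketch, with threshold values that are pure assumptions:

```python
# Illustrative confidence routing for AI suggestions. The thresholds are
# assumptions chosen to show the shape of the control, not recommended values.
AUTO_APPLY_ABOVE = 0.95  # high confidence: act automatically, but log it
SUGGEST_ABOVE = 0.60     # medium confidence: surface to a human as a suggestion

def route_suggestion(confidence: float) -> str:
    if confidence >= AUTO_APPLY_ABOVE:
        return "auto_apply_and_log"
    if confidence >= SUGGEST_ABOVE:
        return "surface_for_human_review"
    return "discard_and_log"  # low confidence: don't even show it

assert route_suggestion(0.97) == "auto_apply_and_log"
assert route_suggestion(0.70) == "surface_for_human_review"
assert route_suggestion(0.30) == "discard_and_log"
```

Because the thresholds live in code (or config), a governance review can actually see them, question them, and change them.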
ITIL 5 signals that AI governance is moving from "nice to have" to "expected practice." For incident management teams, this means the 90/18 gap (90% using AI, only 18% with governance) won't last. Whether through regulatory pressure or through incidents caused by ungoverned AI, organizations will need to close it.
The question is whether you do it proactively, with a framework like ITIL 5 guiding the way, or reactively, after something goes wrong.

