Measuring incident impact can be nuanced: what defines the worst kind of incident? The one that hit the most customers? Cost the most money? Caused the most disruption?
There’s one measure that cuts through the ambiguity: how much time people spent actually dealing with it.
Setting business-specific impact aside, time spent in incidents is time not spent building product or helping customers. It’s a hidden cost of keeping the lights on, and even a single incident can quietly pull in a crowd of people.
If you could assign a value to every hour someone spends inside an incident, how many questions could that answer?
There are a few different ways you might try to measure how much time an incident really costs you, but only one gives a truly accurate view of the effort and impact involved. Let’s break down the options:
Start-to-end duration: Simple on the surface, but slippery in reality. When does the clock start—when the problem began, or when someone declared the incident? It also doesn’t tell you how many people were involved or how intensively.
Duration × headcount: A bit better. This multiplies the incident’s duration by the number of participants to give a rough person-hours estimate. But it’s noisy: being in the channel doesn’t mean you were active. Lurkers exist.
Actual person-hours: This is the gold standard. It tracks how much time each individual actively spends working on the incident. Harder to measure? Sure. But totally doable with the right signals—think Slack messages, Zoom calls, and other digital breadcrumbs.
This last one is what we focus on: person-hours as a direct measure of time spent in incidents.
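To make the difference between these measures concrete, here's a small sketch in Python using made-up numbers for a single hypothetical incident:

```python
from datetime import datetime

# Hypothetical incident: declared at 09:00, resolved at 11:00, five people in the channel.
declared_at = datetime(2024, 5, 1, 9, 0)
resolved_at = datetime(2024, 5, 1, 11, 0)
participants_in_channel = 5

# Made-up "actual" active minutes per person, inferred from activity signals.
active_minutes = {"alice": 95, "bob": 40, "carol": 10, "dan": 0, "erin": 5}

# 1. Start-to-end duration: says nothing about who was involved or how much.
duration_hours = (resolved_at - declared_at).total_seconds() / 3600

# 2. Duration x headcount: counts lurkers as if they worked the whole time.
naive_person_hours = duration_hours * participants_in_channel

# 3. Actual person-hours: sums the time each person really spent.
actual_person_hours = sum(active_minutes.values()) / 60

print(f"Duration:             {duration_hours:.1f} hours")
print(f"Duration x headcount: {naive_person_hours:.1f} person-hours")
print(f"Actual person-hours:  {actual_person_hours:.1f} person-hours")
```

In this made-up example the naive estimate is four times the actual effort, which is exactly the kind of noise lurkers introduce.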
We’ve kept things platform-neutral up to this point, but getting an accurate read on incident time is tricky without some tooling help. So the rest of this section includes a few incident.io-specific details.
If you're using incident.io, we help you quantify how much time was spent on each incident. That data can be split across any dimensions you’ve configured—like severity, custom fields, or other metadata.
We also break down that time by when it occurred: during working hours, later in the evening, or in the middle of the night, since not all hours are created equal.
If you’re tagging your incidents with metadata like affected features or services, you can use that to slice the data and see what’s eating up your team’s time.
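As an illustration of that kind of slicing (a sketch over hypothetical exported data, not incident.io's API), totalling person-minutes per affected service is a simple aggregation:

```python
from collections import defaultdict

# Hypothetical export: one row per (incident, person) with minutes spent and metadata.
rows = [
    {"incident": "INC-101", "service": "payments", "minutes": 180},
    {"incident": "INC-102", "service": "payments", "minutes": 45},
    {"incident": "INC-103", "service": "search", "minutes": 30},
]

# Total up person-minutes per service to see what's eating the team's time.
minutes_by_service = defaultdict(int)
for row in rows:
    minutes_by_service[row["service"]] += row["minutes"]

for service, minutes in sorted(minutes_by_service.items(), key=lambda kv: -kv[1]):
    print(f"{service}: {minutes / 60:.1f} hours")
```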
Having this data is incredibly useful. It helps you understand the real operational impact of the work you're shipping, and forecast the cost of similar projects in the future.
It’s easy to get a sense of recent incident themes from the last week or two of memory, but that’s prone to recency bias, and no one has a full view of every incident. Time-spent data fixes that. It shines a light on the slow-burn operational issues that quietly steal your team’s time and focus.
It’s some of the most valuable insight you can get from incident data, and it makes a powerful case when you’re talking about technical debt, risky areas, or resource planning.
If you’re like us, you’re probably thinking: “This sounds cool, but how does it work, and how do I trust the data?”
It's an entirely reasonable question: most time-tracking tools are either wildly inaccurate or annoying to use. We skip all that by inferring time from actual incident activity.
Here’s how it works:
If someone posts in the incident channel at 10:11am, we immediately assign them 10 minutes of incident time.
If they then update a Zendesk ticket tied to that incident at 10:15am, we tighten the estimate: only 4 minutes of “actual” time elapsed between the two events, so the first block shrinks to 4 minutes and the Zendesk update starts another 10-minute block.
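Here's a rough sketch of that estimation logic (our own illustration of the idea, not incident.io's actual implementation), given one responder's activity timestamps for a single incident:

```python
from datetime import datetime, timedelta

DEFAULT_BLOCK = timedelta(minutes=10)  # assumed default credited per activity

def estimate_minutes(activity_times: list[datetime]) -> float:
    """Estimate one person's time on one incident from their activity timestamps.

    Each activity starts a 10-minute block; if the next activity arrives sooner,
    the block is tightened to the actual gap between the two events.
    """
    times = sorted(activity_times)
    total = timedelta()
    for current, nxt in zip(times, times[1:]):
        total += min(nxt - current, DEFAULT_BLOCK)
    if times:
        total += DEFAULT_BLOCK  # the final activity gets a full block
    return total.total_seconds() / 60

# The example above: a Slack message at 10:11, then a Zendesk update at 10:15.
activity = [datetime(2024, 5, 1, 10, 11), datetime(2024, 5, 1, 10, 15)]
print(estimate_minutes(activity))  # 4 + 10 = 14 minutes
```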
This works really well for the typical responder who's focused on one incident at a time.
For folks like incident managers, who might juggle multiple incidents in parallel, we cap things: we never double-count overlapping time across different incidents, so one person can rack up at most 60 minutes in any 60-minute window.
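One way to picture that cap (again an illustrative sketch, not the real implementation) is to credit time in per-minute buckets, so a minute already counted for one incident can never be counted again for another:

```python
from datetime import datetime, timedelta

def credited_minutes(blocks_by_incident: dict[str, list[tuple[datetime, datetime]]]) -> int:
    """Count one person's incident minutes without double-counting overlaps.

    `blocks_by_incident` maps incident IDs to (start, end) spans of estimated activity.
    Each wall-clock minute is credited at most once, so a person can never accrue
    more than 60 minutes inside any 60-minute window, however many incidents overlap.
    """
    seen = set()
    for spans in blocks_by_incident.values():
        for start, end in spans:
            t = start.replace(second=0, microsecond=0)
            while t < end:
                seen.add(t)
                t += timedelta(minutes=1)
    return len(seen)

# Two incidents worked in parallel between 10:00 and 10:30: credited 30 minutes, not 60.
blocks = {
    "INC-201": [(datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 10, 30))],
    "INC-202": [(datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 10, 30))],
}
print(credited_minutes(blocks))  # 30
```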
We’re confident this method gives a solid, representative view of where time is going, and how much effort individuals are putting in.
Not all hours are equal. Getting pulled into an incident at 2pm is not the same as getting paged at 2am.
That’s why we split time spent in incidents into three buckets: working hours, evenings, and nights.
This breakdown is based on the user’s local time, pulled from their Slack timezone (which tends to be reliable because Slack syncs with your device).
This helps you spot patterns in out-of-hours work and understand the real human impact of your incidents.
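Here's a minimal sketch of how such a split might work; the 09:00–18:00 and 18:00–23:00 boundaries are our assumptions rather than incident.io's actual definitions:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def time_bucket(ts_utc: datetime, tz_name: str) -> str:
    """Classify a moment into working hours / evening / night by the responder's local time."""
    local = ts_utc.astimezone(ZoneInfo(tz_name))
    if 9 <= local.hour < 18:
        return "working hours"
    if 18 <= local.hour < 23:
        return "evening"
    return "night"

# A 01:00 UTC page is mid-morning in Sydney but the middle of the night in London.
paged_at = datetime(2024, 5, 1, 1, 0, tzinfo=timezone.utc)
print(time_bucket(paged_at, "Australia/Sydney"))  # working hours
print(time_bucket(paged_at, "Europe/London"))     # night
```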
Time spent in incidents is an excellent way to measure incident impact. It avoids much of the fuzziness and subjectivity that come with other methods.
And it opens the door to far more meaningful questions: which features or services are eating up the most time, how much of that time lands out of hours, and what the next project is likely to cost you operationally.
It’s now one of the core metrics we use across all of our insights at incident.io, and it’s only going to get more powerful as we loop in additional teams like Customer Support and Sales, giving you a full view of everyone involved in resolving incidents.