
TL;DR

By now, most Opsgenie customers have heard the news: Atlassian is sunsetting Opsgenie in 2027. If you've been sitting with that information and haven't quite figured out what to do with it, you're not alone.
Most engineering teams don't migrate their paging system unless they have to. A vendor end-of-life. A painful contract renewal. A new engineering leader with a different vision. When the moment arrives, the natural instinct is to minimize disruption: find something similar, move the data over, retrain people on the new UI, and call it done.
That instinct is understandable — but it's a missed opportunity. The teams that come out of a migration better than they went in are the ones who used it as a forcing function to rethink their full incident workflow, not just swap the pager.
Over the last two years, I've helped engineering teams plan and execute these migrations — from 50 users to 5,000. I've seen migrations take three weeks and others take five months. Here's the playbook that consistently works.
Paging — getting the right person alerted at the right time — is roughly 10% of your incident management workflow. It's the starting gun. But then you have the other 90%: triage, communication, context-gathering, coordination, status page updates, and postmortems.
According to incident.io's 2026 SRE practices guide, teams that consolidate their incident tooling report MTTR reductions of up to 80% and postmortem time cut from 90 minutes to under 10. A paging swap alone doesn't move these numbers.
If engineers are manually stitching together Slack channels, status page updates, and leadership emails during a P1 — a new paging vendor won't fix that. If alert noise is burning out your on-call rotation — a new UI won't fix that either.
The teams that come out ahead are the ones who use the migration as an opportunity to fix the full workflow.
Before scoping the project, ask honestly: what's actually broken? Gut-check your current state against pain points like the ones above.
If several of these ring true, a direct swap likely won't deliver what you actually need. The good news: a migration you were already doing anyway can fix all of them.
Whether they're migrating 50 users or 5,000, the teams that succeed follow the same four-step playbook.
Before thinking about timelines or tooling, get everything out of the closet. Pull complete lists of everything you're running today: schedules, escalation policies, on-call teams, monitors, alert rules, and the integrations built on top of them.
Assign a directly responsible individual to each item. If you can't find who owns it — congratulations, it's you now.
"Most teams are carrying years of monitors nobody looks at, escalation paths nobody follows, and alert rules that fire constantly and mean nothing. This is your chance to cut them." — Eryn Carman, Strategic CSM, incident.io
Once you have the full picture, rank everything: Keep, Adjust, or Delete. Be aggressive with that last bucket.
Teams that use migration to audit their monitors commonly reduce total monitor count by 80–95%. If you can show leadership you went from 5,000 monitors to 100 business-critical ones, you have a business case with real numbers — not just a tool swap.
This inventory serves three purposes: it shows the true scope of the project, it becomes your project tracking system throughout the migration, and it gives you the data to prove ROI to leadership.
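To make the ranking exercise concrete, here's a minimal sketch of the tracking side in Python. It assumes you've exported your inventory to a CSV with hypothetical name, type, owner, and decision columns; the file and column names are illustrative rather than tied to any particular tool.

```python
# Minimal sketch of the inventory-and-ranking exercise. Assumes an exported CSV
# of monitors, alert rules, schedules, and escalation policies with hypothetical
# columns: name, type, owner, decision (keep / adjust / delete).
import csv
from collections import Counter

def summarize_inventory(path: str) -> None:
    decisions = Counter()
    unowned = []

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            decisions[row["decision"].strip().lower()] += 1
            if not row["owner"].strip():
                unowned.append(f'{row["type"]}: {row["name"]}')

    total = sum(decisions.values())
    kept = decisions.get("keep", 0)
    print(f"Total items: {total}")
    for bucket in ("keep", "adjust", "delete"):
        print(f"  {bucket}: {decisions.get(bucket, 0)}")
    if total:
        print(f"Reduction if you follow through: {100 * (1 - kept / total):.0f}%")

    # Anything without a directly responsible individual is now yours to chase.
    for item in unowned:
        print(f"NO OWNER: {item}")

if __name__ == "__main__":
    summarize_inventory("inventory.csv")
```

The output of a script like this doubles as the "real numbers" for your business case: total items, what you're keeping, and the percentage you cut.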
Migrations fail when they hit resistance mid-project. The way to prevent that is to get the right people on your side before you start — not after.
Look for three types of people:
Once identified, give them real ownership: train them first on the new platform, let them be your focus group and testers, give them authority to speak on behalf of the project, and send them into team meetings to advocate for the migration. When change agents own the narrative, adoption happens faster and with less friction.
One person may play multiple roles depending on your size — that's fine. But you need coverage across all of these functions:
Migrations can take as little as three weeks or as long as five months. What matters is how you use the time you have.
If you have a short runway (3–8 weeks):
If you have a long runway (3–6 months):
No matter your timeline, always build in at least four weeks of parallel running. Engineers need time to build trust in the new platform. Give them the autonomy to unplug the old system themselves, deleting their own escalation policies when they're ready, rather than having you flip a switch for them.
After every migration, something surfaces that nobody accounted for. A critical Slack bot built on a proprietary API. A weekly leadership report that pulls data in an unexpected way. A team with a schedule so custom it doesn't map to anything standard.
It happens every time. The inventory helps, but it won't catch everything. This is exactly why the parallel running period is so crucial — it's the buffer in your timeline built specifically for this.
Set this expectation with stakeholders early: there will be unknowns. That's not a failure of planning. That's migration. The teams that handle surprises best are the ones who've built enough trust — in the new platform, in the project team, and in the process — to navigate them without losing momentum.
One of the most consistently underinvested areas in migrations — regardless of what tool teams are moving from or to — is service ownership and relationship mapping.
Without a service catalog, even a basic one, teams perpetuate the same fragmented model they had before. Everyone builds their own monitors, alert rules, and escalation policies. There's no central source of truth for who owns what, and routing decisions remain dependent on whoever happens to know the answer.
You don't need a dedicated catalog product to start. Begin with a spreadsheet: each service, the team that owns it, and where its alerts should route.
That foundation — even a basic one — changes what's possible with routing, context, and automated workflows downstream.
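As a rough illustration, here's what even a spreadsheet-level catalog gives you once it's machine-readable. The services, teams, and escalation path names below are hypothetical, and the lookup stands in for whatever routing rules your new platform supports.

```python
# Minimal sketch of a spreadsheet-style service catalog and the routing it
# unlocks. Service, team, and escalation path names are hypothetical; in
# practice this data would come from your exported spreadsheet or catalog tool.
CATALOG = {
    "payments-api":   {"team": "payments",   "escalation": "payments-primary",  "tier": 1},
    "checkout-web":   {"team": "storefront", "escalation": "storefront-oncall", "tier": 1},
    "internal-tools": {"team": "platform",   "escalation": "platform-oncall",   "tier": 3},
}

def route_alert(service: str) -> str:
    """Return the escalation path for a service, so routing no longer depends
    on whoever happens to know the answer."""
    entry = CATALOG.get(service)
    if entry is None:
        # Unknown service: fall back to a catch-all path and treat it as a gap
        # in the catalog to fix.
        return "unowned-catchall"
    return entry["escalation"]

if __name__ == "__main__":
    print(route_alert("payments-api"))   # payments-primary
    print(route_alert("legacy-batch"))   # unowned-catchall (missing from catalog)
```

Every service that falls into the catch-all is a prompt to assign an owner, which is exactly the ownership mapping the migration is meant to surface.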
There's only one thing you need to do today: start your inventory.
Pull your current schedules, escalation policies, and teams. Find out who owns what. Begin the ranking exercise. The shape of the project will become clear once you can see everything laid out — and you'll spend the rest of the migration making decisions with clarity rather than reacting to surprises.
Migrating your paging system will never be a small project. But it's also one of the few moments where the disruption is happening either way. The question is whether you use it to end up exactly where you are now, just on different software — or whether you come out with something that actually works the way your team deserves.
P.S. We recently held a webinar on migration, featuring two team leaders who made the move from Opsgenie. Watch it here.
Eryn Carman is a Strategic Customer Success Manager at incident.io. She has spent the last three years helping engineering teams plan and execute migrations from legacy on-call systems, working with companies ranging from a few hundred to tens of thousands of users.

