
Most engineering teams don't migrate their on-call and paging systems unless absolutely necessary. No matter how painful the current solution, it's one of those changes people put off for as long as possible because the cost is real: the disruption, the retraining, the risk of missing a critical page during the transition. It's not something you do on a whim.
So what finally pushes a team to migrate? Usually a vendor end-of-life, a contract renewal for a tool that has caused you pain, a new engineering leader with a different vision, or a growing realization that your current setup just doesn't scale. These moments don't come around often, which is exactly why, when one finally arrives, the decision deserves more than a reflexive "find something similar and move the data over."
I've spent the last two years helping engineering teams plan and execute these migrations. I've seen teams do it in three weeks and others take five months. I've seen migrations that transformed how an organization responds to incidents, and migrations that just traded one set of problems for another. The difference almost always comes down to the same handful of decisions made at the start of the project.
Here's what I've learned.
When teams decide to migrate their paging system, the natural instinct is to minimize disruption: find a tool that does roughly the same thing, map your existing schedules and escalation policies across, train people on the new UI, and call it done.
This approach isn't wrong, exactly. But it's a missed opportunity.
Paging (getting the right person alerted at the right time) is roughly 10% of your incident management workflow. It's the starting gun. But then you have the other 90%: the triage, the communication, the context-gathering, the coordination, the status page updates, the postmortem. If your current setup is fragmented across multiple tools, if engineers are manually stitching together Slack channels and status page updates and leadership emails during a P1, and if your team is burning out on alert noise, then a like-for-like paging swap won't fix any of that.
The teams that come out of a migration better than they went in are the ones who used it as a forcing function to rethink the full workflow, not just the pager.
Before you scope your migration project, it's worth asking honestly: what's actually broken?
Some questions to gut-check your current state:
If you recognized a few of these, a direct swap might not give you what you actually need. The good news: a migration you were already doing anyway can fix all of them.
Whether you're moving 50 users or 5,000, the teams I've helped run successful migrations all follow the same playbook. Here it is, broken into four categories.
Before you think about timelines or tooling, get everything out of the closet.
Whip out your spreadsheets and pull complete lists of the following…
Be as complete as possible and assign a directly responsible individual for each item. If you can’t find who owns it, then surprise! It’s you.
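If the tool you're leaving exposes a REST API, you can script most of this pull rather than copying records by hand. Here's a minimal sketch assuming a PagerDuty-style export with a read-only API key; the resource list, the CSV columns, and the `fetch_all` helper are illustrative scaffolding, so adapt them to whatever your current vendor provides.

```python
# Minimal inventory export sketch, assuming a PagerDuty-style REST API.
# Swap endpoints and field names for your own vendor.
import csv
import requests

API_KEY = "YOUR_READ_ONLY_API_KEY"  # assumption: replace with a real key
BASE_URL = "https://api.pagerduty.com"
HEADERS = {
    "Authorization": f"Token token={API_KEY}",
    "Accept": "application/vnd.pagerduty+json;version=2",
}

def fetch_all(resource: str) -> list[dict]:
    """Page through a list endpoint and return every record."""
    records, offset = [], 0
    while True:
        resp = requests.get(
            f"{BASE_URL}/{resource}",
            headers=HEADERS,
            params={"limit": 100, "offset": offset},
            timeout=30,
        )
        resp.raise_for_status()
        body = resp.json()
        records.extend(body[resource])
        if not body.get("more"):
            return records
        offset += 100

# Dump each resource to its own CSV so the ranking exercise can happen in a spreadsheet.
for resource in ("schedules", "escalation_policies", "services", "teams"):
    rows = fetch_all(resource)
    with open(f"{resource}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "summary", "owner (fill in)", "ranking (fill in)"])
        for r in rows:
            writer.writerow([r.get("id"), r.get("name"), r.get("summary"), "", ""])
```

Dropping those CSVs straight into your inventory spreadsheet gives each directly responsible individual a concrete list to own and rank.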
Once you have the full picture, it’s time to clean house. Rank everything:
Be aggressive with that last bucket, to the extent your culture allows. Most teams are carrying years of monitors nobody looks at, escalation paths nobody follows, and alert rules that fire constantly, wake someone up, and mean nothing. This is your chance to cut them.
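If you can also export a few months of alert history (most tools offer a CSV export), a quick pass over it will surface the rules that are pure noise: high volume, rarely acknowledged. The sketch below assumes a hypothetical export with `alert_rule`, `created_at`, and `acknowledged` columns; adjust the column names to whatever your export actually contains.

```python
# Rough noise analysis over an exported alert history CSV.
# Assumed (hypothetical) columns: alert_rule, created_at, acknowledged.
import csv
from collections import Counter

fired = Counter()   # how often each rule fired
acked = Counter()   # how often a human actually acknowledged it

with open("alert_history.csv", newline="") as f:
    for row in csv.DictReader(f):
        rule = row["alert_rule"]
        fired[rule] += 1
        if row["acknowledged"].lower() in ("true", "yes", "1"):
            acked[rule] += 1

# Candidates for the chopping block: high volume, rarely acknowledged.
print(f"{'rule':<40} {'fired':>7} {'acked %':>8}")
for rule, count in fired.most_common(20):
    ack_rate = acked[rule] / count
    flag = "  <- review" if count >= 50 and ack_rate < 0.10 else ""
    print(f"{rule:<40} {count:>7} {ack_rate:>7.0%}{flag}")
```

Numbers like these make the "cut it" conversation far easier than anecdotes about being woken up at 3am.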
Why does this inventory matter beyond just cleaning house? Three reasons.
Migrations fail when they hit resistance mid-proof of concept or even mid-project. The way to prevent that is to get the right people in your corner before you start.
Look for three types of people:
The ones feeling the most pain. These are your best advocates. They can clearly articulate why the current system is broken, in real numbers and real stories that translate to money and time. They'll help make the ROI case for you in rooms you're not in.
The ones with the most influence. A senior engineer or leader who's been around long enough to carry credibility. They can socialize the change broadly and move fence-sitters.
Your biggest detractors. The people who are going to push back and poke holes. Get them involved early. Use their superpowers for long-term thinking and let them help identify blind spots. Detractors who are given real input become invested in making things work. Detractors who are ignored become blockers who can derail your whole project.
Once you've identified these folks, give them real ownership:
When your change agents own the narrative, adoption happens faster and with less friction.
Depending on your size, one person might play multiple roles here, and that's fine. But you need coverage across these functions:
If you're short on any of these, a good vendor partner can help fill gaps during onboarding. But knowing what roles you need before you start is half the battle.
The most common question I get: do I have enough time to do this right?
Here's the answer: migrations can work in as little as three weeks or as long as five months to a year. What matters is how you use the time you have.
If you have a short runway:
If you have a long runway:
Build your timeline to build trust.
No matter your timeline, build in at least a few weeks to a month of overlap where both systems are running. Engineers need to have trust in the new platform. Give them the autonomy to unplug the old system by deleting their own escalation policies or schedules when they're ready, rather than when you flip a switch.
After every migration I've run, there's always a moment (usually at the eleventh hour) where something surfaces that nobody accounted for. A critical Slack bot built on a proprietary API. A weekly leadership report that pulls data in an unexpected way. A team with a schedule so custom it doesn't map cleanly to anything standard.
It happens every time. The inventory helps, but it won't catch everything. This is why that month of overlap time is so crucial. It’s the buffer in your timeline just for this.
So build that expectation into your project. There will be unknowns. That's not a failure of planning. That's migration. The teams that handle surprises best are the ones who've built enough trust in the new platform, in the project team, and in the process to navigate them without losing momentum. Know that these things will come up and communicate that to stakeholders early.
One thing I see teams consistently underinvest in, regardless of what tool they're moving from or to: service ownership and relationship mapping.
Without some form of catalog, even a simple one, teams end up perpetuating the same fragmented model they had before instead of building something maintainable for the long term. Everyone builds their own monitors, PagerDuty services, alert rules, and escalation policies. There's no central source of truth for who owns what, and routing decisions remain dependent on whoever happens to know the answer.
You don't need to buy a dedicated catalog product to start. Begin with a spreadsheet: what are the services or components you care about when something breaks, who owns each one, and how do you reach them? Then build out the more complex hierarchy as you mature. How do they roll up into teams, products, or business units? What is their underlying infrastructure? How do those things connect to your VIP customers?
That foundation, even a basic one, changes what's possible with routing, context, and automated workflows downstream.
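To make that concrete, here's a rough sketch of that same spreadsheet expressed as data, plus the kind of lookup it unlocks for routing and automation. Every service name, team, and field here is invented for illustration; start with whatever your team actually owns.

```python
# A deliberately simple service catalog: the spreadsheet, as data.
# All names and fields below are illustrative.
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    service: str             # the thing that breaks
    owning_team: str         # who is responsible for it
    escalation_policy: str   # how to reach them
    tier: int                # e.g. 1 = customer-facing, 3 = internal tooling

CATALOG = [
    CatalogEntry("checkout-api", "payments", "payments-primary", tier=1),
    CatalogEntry("invoice-worker", "payments", "payments-primary", tier=2),
    CatalogEntry("internal-wiki", "platform", "platform-office-hours", tier=3),
]

def route(service_name: str) -> CatalogEntry | None:
    """Answer 'who do I page for this?' without relying on tribal knowledge."""
    return next((e for e in CATALOG if e.service == service_name), None)

entry = route("checkout-api")
if entry:
    print(f"Page {entry.owning_team} via {entry.escalation_policy} (tier {entry.tier})")
```

Even this flat structure answers "who do I page?" on its own; the richer hierarchy (teams, products, business units, underlying infrastructure, VIP customers) can layer on as you mature.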
If you have a migration coming up, here's the only thing you need to do today.
Start your inventory.
Pull your current schedules, escalation policies, and teams. Find out who owns what. Begin the ranking exercise. The shape of the project will become clear once you can see everything laid out. Then you'll spend the rest of your time making decisions with clarity rather than reacting to surprises.
Migrating your paging system will never be a small project. But it's also one of the few moments where the disruption is happening either way. The question is whether you use it to end up exactly where you are now, just on different software, or whether you come out with something that actually works the way your team deserves.
P.S. We recently held a webinar on migration, featuring two team leaders who recently made the move from Opsgenie. Watch it here.
Eryn Carman is a Strategic Customer Success Manager at incident.io. She has spent the last three years helping engineering teams plan and execute migrations from legacy on-call systems, working with companies ranging from a few hundred to tens of thousands of users.


