How to migrate your paging tool without breaking your team

March 20, 2026 — 15 min read

Most engineering teams don’t migrate their on-call and paging systems unless absolutely necessary. No matter how painful their current solution, it's one of those changes that people put off for as long as possible because the cost is real: the disruption, the retraining, the risk of missing a critical page during the transition. It's not something you do on a whim.

What makes you want to migrate? It could be a vendor end-of-life, a contract renewal for a tool that has caused you pain, a new engineering leader with a different vision, or a growing realization that your current setup just doesn't scale. Which is exactly why, when the moment finally arrives, the decision deserves more than a reflexive "find something similar and move the data over."

I've spent the last two years helping engineering teams plan and execute these migrations. I've seen teams do it in three weeks and others take five months. I've seen migrations that transformed how an organization responds to incidents, and migrations that just traded one set of problems for another. The difference almost always comes down to the same handful of decisions made at the start of the project.

Here's what I've learned.


The trap most teams fall into

When teams decide to migrate their paging system, the natural instinct is to minimize disruption: find a tool that does roughly the same thing, map your existing schedules and escalation policies across, train people on the new UI, and call it done.

This approach isn't wrong, exactly. But it's a missed opportunity.

Paging (getting the right person alerted at the right time) is roughly 10% of your incident management workflow. It's the starting gun. But then you have the other 90%: the triage, the communication, the context-gathering, the coordination, the updating your status page, the postmortem. If your current setup is fragmented across multiple tools, if engineers are manually stitching together Slack channels and status page updates and leadership emails during a P1, if your team is burning out on alert noise, then a like-for-like paging swap won't fix any of that.

The teams that come out of a migration better than they went in are the ones who used it as a forcing function to rethink the full workflow, not just the pager.


How to know if you need more than a swap

Before you scope your migration project, it's worth asking honestly: what's actually broken?

Some questions to gut-check your current state:

  • Is your on-call program increasing in complexity without actually scaling?
  • Are your internal and external communications during incidents inconsistent or slow?
  • Do you know how much engineering time is spent on reactive work versus proactive work?
  • Do you have separate tools for alerting, paging, incident management, status pages, and comms — and do engineers manually stitch them together during incidents?
  • Is someone maintaining custom code just to connect your systems instead of building your actual product?
  • Is alert noise burning out your on-call engineers?
  • Do managers struggle to handle PTO, holidays, and escalation paths at scale?

If you recognized a few of these, a direct swap might not give you what you actually need. The good news: a migration you were already doing anyway can fix all of them.


The four-step migration framework

Whether you're moving 50 users or 5,000, the teams I’ve helped run successful migrations follow the same playbook. Here it is, broken into four steps.

Step 1: Take inventory — and Marie Kondo the crap out of it

Before you think about timelines or tooling, get everything out of the closet.

Whip out your spreadsheets and pull complete lists of the following…

  • Schedules, escalation policies, and teams from your paging provider
  • Monitoring tools and anything else that alerts you, and how many monitors you have
  • Relational metadata mapping services to teams
    • How are you determining who gets paged for what?
    • Do you have an internal service catalog that maps services to teams?
    • If not, do you have a taxonomy to help you build a lightweight version for code ownership?
  • Integrations
  • Home-built solutions
  • Use cases built on top of your current paging tool with APIs and webhooks
  • Internal documentation, runbooks, and training
  • Reporting and insights needs
  • Which teams need to be trained and which need to be informed

Be as complete as possible and assign a directly responsible individual for each item. If you can’t find who owns it, then surprise! It’s you.

Once you have the full picture, it’s time to clean house. Rank everything:

  • Keep
  • Adjust: Modify, improve or consolidate it
  • Delete

Be aggressive with that last bucket, to the extent your culture allows. Most teams are carrying years of monitors nobody looks at, escalation paths nobody follows, and alert rules that fire constantly, wake someone up, and mean nothing. This is your chance to cut them.

Why does this inventory matter beyond just cleaning house? Three reasons.

  1. It shows you the true scope of the project, so you can set realistic expectations and determine what’s in and what’s been “Kondoed.”
  2. The inventory becomes your project tracking system throughout the migration.
  3. Most importantly, it gives you the data to prove ROI. If you can show leadership that you went from 5,000 monitors down to 100 business-critical ones, and that change saved your team hundreds of unnecessary pages per week, you have a business case. You have a story. That's worth the migration cost.
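Because the inventory doubles as your tracking system and your ROI story, it helps to keep it in a structure you can query. Here's a minimal sketch of that idea in Python: each item gets an owner and a Keep/Adjust/Delete verdict, and a one-line summary gives you the numbers for leadership. All item names, owners, and verdicts here are hypothetical examples, not a prescribed schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class InventoryItem:
    name: str       # e.g. a schedule, monitor, or integration (hypothetical)
    category: str   # "schedule" | "monitor" | "integration" | ...
    owner: str      # the directly responsible individual (default: you)
    verdict: str    # "keep" | "adjust" | "delete"

inventory = [
    InventoryItem("payments primary on-call", "schedule", "alice", "keep"),
    InventoryItem("disk-usage > 70% alert", "monitor", "bob", "delete"),
    InventoryItem("legacy Slack paging bot", "integration", "carol", "adjust"),
]

# Summarize the triage so you can report scope (and later, ROI) upward.
by_verdict = Counter(item.verdict for item in inventory)
print(dict(by_verdict))  # e.g. {'keep': 1, 'delete': 1, 'adjust': 1}
```

Even if this lives in a spreadsheet rather than code, the shape is the same: one row per item, one owner per row, one verdict per row.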

Step 2: Build your change agents

Migrations fail when they hit resistance mid-proof of concept or even mid-project. The way to prevent that is to get the right people in your corner before you start.

Look for three types of people:

The ones feeling the most pain. These are your best advocates. They can clearly articulate why the current system is broken, in real numbers and real stories that translate to time and money. They'll help make the ROI case for you in rooms you're not in.

The ones with the most influence. A senior engineer or leader who's been around long enough to carry credibility. They can socialize the change broadly and move fence-sitters.

Your biggest detractors. The people who are going to push back and poke holes. Get them involved early. Use their superpowers for long-term thinking and let them help identify blind spots. Detractors who are given real input become invested in making things work. Detractors who are ignored become blockers who can derail your whole project.

Once you've identified these folks, give them real ownership:

  1. Train them first on the new platform and process
  2. Let them be your focus group and testers
  3. Give them authority to speak on behalf of the project and field questions to lessen the burden on the core migration team
  4. Run internal campaigns with them and send them into team meetings to advocate for the migration

When your change agents own the narrative, adoption happens faster and with less friction.

Step 3: Build your project team (and be honest about the gaps)

Depending on your size, one person might play multiple roles here, and that's fine. But you need coverage across these functions:

  • An executive sponsor (or sponsors) to remove roadblocks and create organizational buy-in
  • A dedicated project manager to keep things on track and protect your technical team's focus
  • A service catalog expert to handle metadata mapping and taxonomy (more on why this matters below)
  • An IT liaison brought in early. Leaving IT out until the end is one of the most common ways migrations slow down
  • Two to three platform admins building in the new system. Any more, and you'll lose track of who changed what, and your architecture philosophy may drift
  • Someone to own documentation and training updates
  • Someone to manage stakeholder communications on the right cadence

If you're short on any of these, a good vendor partner can help fill gaps during onboarding. But knowing what roles you need before you start is half the battle.

Step 4: Build your timeline (and use it wisely)

The most common question I get: do I have enough time to do this right?

Here's the answer: migrations can work in as little as three weeks or as long as five months to a year. What matters is how you use the time you have.

If you have a short runway:

  • Resist the temptation to fix everything at once; don't try to eat the whole elephant. Scope creep is your enemy.
  • Don’t change too much at once. Accept that some technical debt, pain points or inefficient processes will migrate with you. Too much change all at once and you risk having a revolt on your hands from end users.
  • Consider doing a bulk migration instead of team-by-team. Rip off the Band-Aid and use the time after go-live for cleanup.
  • Gamify and incentivize. Find ways to make it fun. Getting thousands of engineers to download an app is what will take you the longest. Give people a real reason to do it early: the carrot first, the deadline as the stick.
  • Migration is just phase one: create post-migration phases that let you finish the job
    • Clean-up and scaling
    • Introduce new features
    • Introduce new processes

If you have a long runway:

  • Build for scalability, not just migration. Don't recreate what you had before. Build the version you should have had.
  • Clean up your technical debt before the migration, not during.
  • Move in cohorts and learn as you go. Your third team migration will be dramatically smoother than your first.
  • Front-load the improvements and new features that relieve the biggest pain points for your on-call engineers. Early wins create excitement and momentum, and can make the migration go viral internally.
  • Track your progress and celebrate wins along the way so you don’t lose momentum

Build your timeline to build trust.

No matter your timeline, build in at least a few weeks to a month of overlap where both systems are running. Engineers need to have trust in the new platform. Give them the autonomy to unplug the old system by deleting their own escalation policies or schedules when they're ready, rather than when you flip a switch.


The thing nobody plans for

After every migration I've run, there's always a moment (usually at the eleventh hour) where something surfaces that nobody accounted for. A critical Slack bot built on a proprietary API. A weekly leadership report that pulls data in an unexpected way. A team with a schedule so custom it doesn't map cleanly to anything standard.

It happens every time. The inventory helps, but it won't catch everything. This is why that month of overlap time is so crucial. It’s the buffer in your timeline just for this.

So build that expectation into your project. There will be unknowns. That's not a failure of planning. That's migration. The teams that handle surprises best are the ones who've built enough trust in the new platform, in the project team, and in the process to navigate them without losing momentum. Know that these things will come up and communicate that to stakeholders early.


A note on service catalog and ownership models

One thing I see teams consistently underinvest in, regardless of what tool they're moving from or to: service ownership and relationship mapping.

Without some form of catalog, even a simple one, teams end up perpetuating the same fragmented model they had before instead of building something maintainable for the long term. Everyone builds their own monitors, PagerDuty services, alert rules, and escalation policies. There's no central source of truth for who owns what, and routing decisions remain dependent on whoever happens to know the answer.

You don't need to buy a dedicated catalog product to start. Begin with a spreadsheet: what are the services or components you care about when something breaks, who owns each one, and how do you reach them? Then build out the more complex hierarchy as you mature. How do they roll up into teams, products, or business units? What is their underlying infrastructure? How do those things connect to your VIP customers?
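That starter spreadsheet is really just a lookup table from "what broke" to "who gets paged." Here's a minimal sketch of the same idea in Python, using hypothetical service names, team names, and contact addresses; the fallback to a shared triage rotation is an assumption, not a requirement of any particular tool.

```python
# A lightweight service catalog: service -> (owning team, escalation contact).
# All names and addresses below are made up for illustration.
catalog = {
    "checkout-api":   ("payments",  "payments-oncall@example.com"),
    "search-indexer": ("discovery", "discovery-oncall@example.com"),
    "status-page":    ("platform",  "platform-oncall@example.com"),
}

def who_gets_paged(service: str) -> str:
    """Route an alert: return the escalation contact for a service.

    Unknown services fall back to a shared triage rotation, so nothing
    fires into the void while ownership gaps are being filled in.
    """
    team, contact = catalog.get(service, ("unowned", "triage-oncall@example.com"))
    return contact

print(who_gets_paged("checkout-api"))  # payments-oncall@example.com
print(who_gets_paged("mystery-cron"))  # falls back to the triage rotation
```

The later maturity steps in the paragraph above (rolling services up into teams, products, or business units) are just more columns or nested keys on this same structure.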

That foundation, even a basic one, changes what's possible with routing, context, and automated workflows downstream.


Where to start

If you have a migration coming up, here's the only thing you need to do today.

Start your inventory.

Pull your current schedules, escalation policies, and teams. Find out who owns what. Begin the ranking exercise. The shape of the project will become clear once you can see everything laid out. Then you'll spend the rest of your time making decisions with clarity rather than reacting to surprises.

Migrating your paging system will never be a small project. But it's also one of the few moments where the disruption is happening either way. The question is whether you use it to end up exactly where you are now, just on different software, or whether you come out with something that actually works the way your team deserves.

P.S. We recently held a webinar on migration, featuring two team leaders who recently made the move from Opsgenie. Watch it here.


Eryn Carman is a Strategic Customer Success Manager at incident.io. She has spent the last three years helping engineering teams plan and execute migrations from legacy on-call systems, working with companies ranging from a few hundred to tens of thousands of users.
