
By now, most Opsgenie customers have heard the news: Atlassian is sunsetting Opsgenie in 2027. If you've been sitting with that information and haven't quite figured out what to do with it, you're not alone. I talk to engineering teams navigating this decision every week, and the hesitation is consistent. They're stuck, not because they don't know how to migrate, but because they haven't stopped to ask the more important question first.
Which is: what should you actually migrate to?
That decision deserves more time than most teams give it. Here's a framework for thinking it through.
Most engineering teams don’t change their paging systems unless absolutely necessary, and for good reason. The disruption, the retraining, and the trust your on-call engineers have to rebuild in a new system before they feel comfortable relying on it during a 2 am incident all make the migration cost too high.
Because switching is so painful, most teams pick a tool and live with it for a long time. That means the choice you make in response to the Opsgenie sunset will likely shape how your team manages incidents for the next five-plus years. It's worth getting right.
There's also a less obvious factor at play: change is now inevitable regardless of what you decide. Every engineering team on Opsgenie will have to move. That disruption is coming whether you minimize it or maximize what you get out of it. The question isn't whether to change, it's whether to use this change to also fix the things that haven't been working.
When teams start evaluating their options, they typically land on one of three directions.
Staying in the Atlassian ecosystem and moving to Jira Service Management (JSM) is the path of least resistance for teams already embedded there. One vendor, familiar login, consolidated tooling. For IT teams managing service desk workflows, it might make sense.
But for engineering teams, the fit is less obvious. JSM is a ticketing system. It wasn't designed for the real-time, high-pressure nature of incident response, which requires rapid human coordination, live communication, and the ability to quickly provide context to responders. Engineers managing a SEV0 at 2 am don't want to navigate a ticketing interface with nothing but dropdown fields.
There's also a subtler risk: if you've been running Opsgenie for a few years, you almost certainly have technical debt baked in. You have alert rules set up and forgotten, escalation policies that no longer reflect your team structure, and monitors that fire constantly, waking people up but meaning nothing. Migrating to JSM doesn't force any of that to change. You bring the debt with you, rebrand it, and end up in the same place with a different UI.
The second path is a like-for-like replacement: find something that works like Opsgenie, import your schedules and escalation policies, train your team, and move on. The goal is to move fast, with low disruption and minimal change management required.
This approach has genuine appeal, especially if you have a short timeline or limited internal bandwidth. But it carries the same underlying risk as the first path: you're solving the paging problem, without touching the broader incident response workflow.
Paging is roughly 10% of what happens when an incident occurs. The other 90% is triage, communication, coordination, and learning. If those parts of your workflow are fragmented today, they'll still be fragmented after a like-for-like swap.
The third path is to rethink the whole workflow. It requires the most upfront work, but delivers the most value over time. Rather than asking “what tool does the same thing as Opsgenie?”, you zoom out and ask: “What does our entire incident response workflow actually look like end-to-end, and where are the real pain points?”
If the honest answer is that paging is working fine but communication is slow, the context is scattered, and postmortems don't actually get done, then a pager swap won't help. You need something that addresses the whole workflow, not just the notification.
Path three isn't the right answer for everyone. Some teams genuinely just need a solid paging tool and have the rest of their workflow covered. The way to figure out which camp you're in is to honestly assess your current state.
Here are the questions to ask yourself:
If several of these land, a like-for-like swap is unlikely to move the needle. The disruption cost is roughly the same either way, so you may as well use it to actually fix something.

One thing Opsgenie customers need to be especially honest about? Years of accumulated technical debt.
After a few years in any paging system, teams accrue configurations that nobody owns, monitors that nobody reviews, and alert rules that were set up by people who have long since left the company. If you've grown, reorganized, or shifted to a more distributed engineering model, your Opsgenie setup likely reflects how your team was structured two or three years ago, and not how it is today.
You might have things that are quietly not working. Things like alerts routing to the wrong team, escalation paths pointing to former employees, and monitors configured so broadly that almost everything triggers them.
The migration is an opportunity to deal with this. Do the audit first, cut aggressively, and only migrate what actually needs to exist.
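One concrete place to start that audit is your escalation policies. As a minimal sketch (the function, field names, and sample data below are illustrative, not the Opsgenie API schema), you could export your policies, compare responders against your current roster, and flag anything pointing at people who have left:

```python
# Hypothetical audit helper: field names and sample data are illustrative,
# not the actual Opsgenie export format.
def find_stale_policies(policies, active_users):
    """Flag escalation policies that reference responders no longer on the roster."""
    stale = []
    for policy in policies:
        former = [r for r in policy["responders"] if r not in active_users]
        if former:
            stale.append({"policy": policy["name"], "former_responders": former})
    return stale

active = {"amira", "dev", "lena"}
policies = [
    {"name": "payments-escalation", "responders": ["amira", "old-hire"]},
    {"name": "search-escalation", "responders": ["lena"]},
]

print(find_stale_policies(policies, active))
# [{'policy': 'payments-escalation', 'former_responders': ['old-hire']}]
```

Anything this kind of check surfaces is a candidate for deletion rather than migration.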
Opsgenie's end-of-life date is 2027, which sounds like a lot of runway. For some teams, it is.
But if your Opsgenie contract renews before 2027, that renewal is a natural decision point. You may not want to pay for another year if you're planning to leave anyway. If you have a large number of teams and users to migrate, give yourself as much runway as you can afford. You'll need time to audit your current setup, clean up debt, build for the new model, and run an overlap period where both systems are live.
Pro tip: If you have more than a few hundred users, start planning now, even if execution is months away. The planning phase, during which you’d take inventory, align stakeholders and change agents, and decide what you actually want to build, is not trivial. Doing it under pressure leads to shortcuts you'll regret.
If you go with path three, here's what you're signing up for, and why it's worth it.
Instead of migrating your alert rules as-is, you do an audit, cut the noise, and build a routing and response model based on actual code and service ownership. This increases your signal, reduces burnout, and sets you up for scalability and automation.
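To make the ownership-based routing idea concrete, here is a minimal sketch (the catalog contents and function names are assumptions for illustration): instead of hundreds of per-alert rules, a single service-to-team catalog resolves the responder for any alert.

```python
# Illustrative service catalog: one source of truth for ownership,
# referenced by routing instead of hardcoded per-alert rules.
SERVICE_OWNERS = {
    "payments-api": "payments-team",
    "search-index": "search-team",
}

def route_alert(alert, default_team="platform-team"):
    """Resolve the owning team from the alert's service tag."""
    return SERVICE_OWNERS.get(alert.get("service"), default_team)

assert route_alert({"service": "payments-api"}) == "payments-team"
assert route_alert({"service": "unknown-svc"}) == "platform-team"
```

When teams reorganize, you update the catalog once rather than editing every alert rule that mentions the old team.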
Instead of recreating your escalation policies, you rationalize them. This means fewer paths, clearer ownership, and less configuration overhead.
Instead of just switching where pages come from, you connect paging to the rest of your incident workflow so that when someone gets alerted, the context, communication channel, and automated communications are included. This protects your SLAs and shortens your time to mitigate so that a responder can focus on fixing rather than on process or communication.
Instead of finding another paging solution, invest in an end-to-end product that covers the whole workflow: triage, communication, coordination, and learning.
The outcome isn't just a migration that's done. It's a team that spends less time on reactive work, has better visibility into incidents as they happen, can build a feedback loop that makes future incidents shorter and less frequent, and helps you build a more resilient product.
That’s the return on the investment in migration. But the only way to get it is to decide that you're going to use this moment for more than just a swap.
If you’re ready to go, check out my on-call migration best practices to start your planning.
If you're early in the decision-making process, the first thing to do isn't to evaluate vendors. It's to get honest about your current state. Pull your list of schedules, escalation policies, and teams out of Opsgenie. Find out who owns what. Look at your alert volume from your monitoring sources and your acceptance rate. Understand where the manual work actually lives in your incident workflow.
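The acceptance-rate part of that inventory is easy to compute once you have an alert export. A minimal sketch (the record shape is an assumption, not any vendor's export format): group alerts by monitoring source and measure what fraction a human actually acknowledged.

```python
from collections import defaultdict

def acceptance_by_source(alerts):
    """Per monitoring source, the fraction of alerts a human acknowledged.
    Persistently low rates usually mark noise worth cutting before migrating."""
    totals = defaultdict(lambda: [0, 0])  # source -> [acknowledged, total]
    for alert in alerts:
        counts = totals[alert["source"]]
        counts[1] += 1
        counts[0] += alert["acknowledged"]
    return {src: acked / total for src, (acked, total) in totals.items()}

alerts = [
    {"source": "datadog", "acknowledged": True},
    {"source": "datadog", "acknowledged": False},
    {"source": "pingdom", "acknowledged": False},
]
print(acceptance_by_source(alerts))  # {'datadog': 0.5, 'pingdom': 0.0}
```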
That inventory will tell you more about which path is right for you than any product demo.
If you want to talk through what this decision looks like for your team specifically, or to understand what's realistic given your timeline, your size, and your current setup, feel free to reach out to us here.
The Opsgenie sunset is a disruption, but it’s also an opportunity!
Eryn Carman is a Strategic Customer Success Manager at incident.io. She has spent the last two years helping engineering teams plan and execute migrations from legacy on-call systems, working with companies ranging from a few hundred to tens of thousands of users.


