# What to do when Opsgenie sunsets

*April 13, 2026*

By now, most Opsgenie customers have heard the news: Atlassian is sunsetting Opsgenie in 2027. If you've been sitting with that information and haven't quite figured out what to do with it, you're not alone. I talk to engineering teams navigating this decision every week, and the hesitation is consistent. They are uncertain not because they don't know how to migrate, but because they haven't stopped to ask the more important question first.

Which is: what should you actually migrate _to_?

That decision deserves more time than most teams give it. Here's a framework for thinking it through.

---

## Why this moment is bigger than it looks

Most engineering teams don’t change their paging systems unless absolutely necessary, and for good reason. The disruption, the retraining, and the trust your on-call engineers have to rebuild before they feel comfortable relying on a new system during a 2 am incident all add up to a steep migration cost.

Because switching is so painful, most teams pick a tool and live with it for a long time. That means **the choice you make in response to the Opsgenie sunset will likely shape how your team manages incidents for the next five-plus years. It's worth getting right.**

There's also a less obvious factor at play: change is now inevitable regardless of what you decide. Every engineering team on Opsgenie will have to move. That disruption is coming whether you minimize it or maximize what you get out of it. The question isn't whether to change, it's whether to use this change to also fix the things that haven't been working.

---

## The three paths in front of you

When teams start evaluating their options, they typically land on one of three directions.

### Path 1: Move to JSM

This is the path of least resistance for teams already embedded in the Atlassian ecosystem: Jira Service Management (JSM) means one vendor, a familiar login, and consolidated tooling. For IT teams managing service desk workflows, it might make sense.

But for engineering teams, the fit is less obvious. JSM is a ticketing system. It wasn't designed for the real-time, high-pressure nature of incident response, which requires rapid human coordination, live communication, and the ability to quickly provide context to responders. Engineers managing a SEV0 at 2 am don't want to navigate a ticketing interface with nothing but dropdown fields.

There's also a subtler risk: if you've been running Opsgenie for a few years, you almost certainly have technical debt baked in. You have alert rules set up and forgotten, escalation policies that no longer reflect your team structure, and monitors that fire constantly, waking people up but meaning nothing. Migrating to JSM doesn't force any of that to change. You bring the debt with you, rebrand it, and end up in the same place with a different UI.

### Path 2: Lift and shift to a new paging tool

Find something that works like Opsgenie, import your schedules and escalation policies, train your team, and move on. The goal is to move fast, with low disruption and minimal change management.

This approach has genuine appeal, especially if you have a short timeline or limited internal bandwidth. But it carries the same underlying risk as the first path: you're solving the paging problem without touching the broader incident response workflow.

Paging is roughly 10% of what happens when an incident occurs. The other 90% is triage, communication, coordination, and learning. If those parts of your workflow are fragmented today, they'll still be fragmented after a like-for-like swap.

### Path 3: Use this as a forcing function to upgrade the whole system

This is the option that requires the most upfront work, but delivers the most value over time. Rather than asking “what tool does the same thing as Opsgenie?”, you zoom out and ask: **“What does our entire incident response workflow actually look like end-to-end, and where are the real pain points?”**

If the honest answer is that paging is working fine but communication is slow, context is scattered, and postmortems don't actually get done, then a pager swap won't help. You need something that addresses the whole workflow, not just the notification.

---

## How to know which path is right for you

Path three isn't the right answer for everyone. Some teams genuinely just need a solid paging tool and have the rest of their workflow covered. The way to figure out which camp you're in is to **_honestly_** assess your current state.

Here are the questions to ask yourself:

* **What isn't working well with your on-call program?**
  * _Growing in complexity faster than it can scale_
  * _Poor internal and/or external communication_
  * _Lack of learning, or lack of improvements to detection methods as a result of incidents_
  * _Everything is manual and cutting into your SLAs_
* **What isn't working with your broader tooling?**
  * _Separate, disconnected systems for alerting, paging, incident tracking, status pages, and communications_
  * _Engineering time is spent maintaining the glue code between those systems rather than building your product_
* **What are your on-call engineers actually complaining about?**
  * _Too much noise, not enough signal_
  * _No clear ownership_
  * _No feedback loop that leads to fewer pages over time_
  * _Only a few knowledge experts holding the pager, leading to burnout_
* **What are your managers struggling with?**
  * _No visibility into how much time the team is spending on reactive work_
  * _Difficulty juggling schedules, PTO, and holidays_
  * _Too much configuration just to get coverage_

If several of these land, a like-for-like swap is unlikely to move the needle. And since the disruption cost is roughly the same either way, you may as well use the migration to actually fix something.

![](https://cdn.sanity.io/images/oqy5aexb/production/0a5308d4247f1619ed8f3073faec5e597b03dfae-1920x1080.png)

---

## The technical debt problem

One thing Opsgenie customers need to be especially honest about? Years of accumulated technical debt.

After a few years in any paging system, teams accrue configurations that nobody owns, monitors that nobody reviews, and alert rules that were set up by people who have long since left the company. If you've grown, reorganized, or shifted to a more distributed engineering model, your Opsgenie setup likely reflects how your team was structured two or three years ago, and not how it is today.

You likely have things that are quietly broken: alerts routing to the wrong team, escalation paths pointing to former employees, and monitors configured so broadly that almost everything triggers them.

The migration is an opportunity to deal with this. Do the audit first, cut aggressively, and only migrate what actually needs to exist.
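
To make that audit concrete, here's a minimal sketch of a first inventory pass, assuming you have an Opsgenie API key with read access. The `/v2/teams`, `/v2/schedules`, and `/v2/escalations` endpoints and the `GenieKey` auth header come from Opsgenie's REST API; the orphan check at the end is purely illustrative, not a complete audit.

```python
"""Rough inventory of an Opsgenie account: teams, schedules, and
escalation policies, so a human can review what still deserves to
exist. Illustrative sketch only; assumes a read-capable API key."""

import os
import requests

BASE = "https://api.opsgenie.com"
HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

def fetch(path: str) -> list[dict]:
    """GET a list endpoint and return its 'data' array."""
    resp = requests.get(f"{BASE}{path}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])

teams = fetch("/v2/teams")
schedules = fetch("/v2/schedules")
escalations = fetch("/v2/escalations")

print(f"{len(teams)} teams, {len(schedules)} schedules, "
      f"{len(escalations)} escalation policies")

# Dump schedule names plus owning teams so someone can flag orphans:
# anything owned by a team that no longer matches your org chart is
# a candidate for deletion rather than migration.
team_names = {t["name"] for t in teams}
for sched in schedules:
    owner = sched.get("ownerTeam", {}).get("name", "<no owner>")
    marker = "" if owner in team_names else "  <-- orphaned?"
    print(f"schedule: {sched['name']:40} owner: {owner}{marker}")
```

Even a crude dump like this tends to surface schedules and policies nobody can name an owner for, which is exactly the debt you don't want to carry into a new tool.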

---

## Thinking about your timeline

Opsgenie's end-of-life date is 2027, which sounds like a lot of runway. For some teams, it is.

But if your Opsgenie contract renews before 2027, that renewal is a natural decision point. You may not want to pay for another year if you're planning to leave anyway. If you have a large number of teams and users to migrate, give yourself as much runway as you can afford. You'll need time to audit your current setup, clean up debt, build for the new model, and run an overlap period where both systems are live.

**Pro tip:** If you have more than a few hundred users, start planning now, even if execution is months away. The [planning phase](https://incident.io/blog/how-to-migrate-your-paging-tool-without-breaking-your-team), during which you’d take inventory, align stakeholders and change agents, and decide what you actually want to build, is not trivial. Doing it under pressure leads to shortcuts you'll regret.

---

## What "doing it right" actually looks like

If you go with path three, here's what you're signing up for, and why it's worth it.

Instead of migrating your alert rules as-is, you do an audit, cut the noise, and build a routing and response model based on actual code and service ownership. **This increases your signal, reduces burnout, and sets you up for scalability and automation.**

Instead of recreating your escalation policies, you rationalize them. **This means fewer paths, clearer ownership, and less configuration overhead.**

Instead of just switching where pages come from, you connect paging to the rest of your incident workflow, so that when someone gets alerted, the context, the communication channel, and the automated updates come with the page. **This protects your SLAs and shortens your time to mitigate, so that a responder can focus on fixing rather than on process or communication.**

Instead of finding another paging solution, invest in an end-to-end product. This will:

* **Give your business a holistic view of the incident lifecycle, so incidents become a mechanism for improvement rather than just fires to fight**
* **Give managers visibility into how much time their engineers are spending on incidents, so they can do resource planning**
* **Give incident commanders AI assistance during an incident, speeding up investigation, mitigation, and post-incident learning**
* **Give your leadership peace of mind, so they never have to ask “what’s going on?”, “what happened?”, or “how are we improving?”**

The outcome isn't just a migration that's done. It's a team that spends less time on reactive work, has better visibility into incidents as they happen, can build a feedback loop that makes future incidents shorter and less frequent, and helps you build a more resilient product.

That’s the return on the investment in migration. But the only way to get it is to decide that you're going to use this moment for more than just a swap.

---

## Where to start

If you’re ready to go, check out my [on-call migration best practices](https://incident.io/blog/how-to-migrate-your-paging-tool-without-breaking-your-team) to start your planning.

If you're early in the decision-making process, the first thing to do isn't to evaluate vendors. It's to get honest about your current state. Pull your list of schedules, escalation policies, and teams out of Opsgenie. Find out who owns what. Look at your alert volume from your monitoring sources, and at how many of those alerts actually get acknowledged and actioned. Understand where the manual work actually lives in your incident workflow.
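
If you want a starting point for the volume-and-acknowledgement check, here's a rough sketch against Opsgenie's alerts API. The `/v2/alerts` endpoint, its `limit`/`offset`/`sort` parameters, and the `acknowledged` and `source` fields are part of Opsgenie's documented alert model; the noise threshold at the end is a hypothetical cutoff you'd tune against your own data.

```python
"""Sketch: measure recent alert volume and acknowledgement rate per
source, to spot monitors that generate noise nobody acts on.
Illustrative only; assumes a read-capable Opsgenie API key."""

import os
from collections import Counter
import requests

BASE = "https://api.opsgenie.com"
HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

def recent_alerts(max_pages: int = 20, page_size: int = 100) -> list[dict]:
    """Page through /v2/alerts, newest first."""
    alerts, offset = [], 0
    for _ in range(max_pages):
        resp = requests.get(
            f"{BASE}/v2/alerts",
            headers=HEADERS,
            params={"limit": page_size, "offset": offset,
                    "sort": "createdAt", "order": "desc"},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json().get("data", [])
        if not page:
            break
        alerts.extend(page)
        offset += page_size
    return alerts

alerts = recent_alerts()
total, acked = Counter(), Counter()
for a in alerts:
    src = a.get("source") or "<unknown>"
    total[src] += 1
    if a.get("acknowledged"):
        acked[src] += 1

print(f"{len(alerts)} recent alerts by source (ack rate):")
for src, n in total.most_common():
    rate = acked[src] / n
    # High volume with a low ack rate usually means a monitor that
    # pages people who have learned to ignore it. Tune the cutoffs.
    flag = "  <-- noisy?" if n >= 50 and rate < 0.5 else ""
    print(f"{src:30} {n:5}  {rate:5.0%}{flag}")
```

A sorted list like this is often all it takes to identify the handful of monitors responsible for most of the noise.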

That inventory will tell you more about which path is right for you than any product demo.

If you want to talk through what this decision looks like for your team specifically, and understand what's realistic given your timeline, your size, and your current setup, feel free to reach out to us [here](https://incident.io/demo).

The Opsgenie sunset is a disruption, but it’s also an opportunity!

---

_Eryn Carman is a Strategic Customer Success Manager at incident.io. She has spent the last two years helping engineering teams plan and execute migrations from legacy on-call systems, working with companies ranging from a few hundred to tens of thousands of users._