Updated November 25, 2025
TL;DR: Atlassian is sunsetting Opsgenie with a hard deadline of April 5, 2027. You can export all critical data via the Opsgenie REST API, including users, schedules, routing rules, and historical incidents. The proven migration strategy is a parallel run: configure your new platform, dual-page alerts to both systems for 2-4 weeks, validate the new setup, then cut over. Modern alternatives like incident.io automate schedule imports and deploy in days rather than months. Budget for engineer time as your largest migration cost because implementation typically consumes 40-200 hours depending on team size and platform choice.
Atlassian announced in March 2025 that Opsgenie will no longer accept new customers starting June 4, 2025, and the service reaches end-of-life on April 5, 2027. These are hard dates that force every Opsgenie customer to migrate.
June 4, 2025: End of Sale. New customers cannot purchase Opsgenie. Existing customers can still renew, but this signals Atlassian is pushing users toward Jira Service Management or Compass.
April 5, 2027: End of Life. The service shuts down completely. No more alerts. No more on-call schedules. No access to historical data. If you haven't migrated by this date, your incident management stops working.
Between now and April 2027, Atlassian provides support for critical bugs, but feature development has stopped. You're maintaining legacy software with a ticking clock.
Contract renewal strategy. If your Opsgenie contract renews before April 2027, negotiate a shorter renewal period to avoid paying for time you won't use, or accept the renewal and use the buffer time to execute a careful migration. Do not wait until the last six months. Migration under time pressure leads to mistakes.
You have three primary paths forward. The right choice depends on how fast you need to deploy, whether you want to modernize or replicate your current setup, and your tolerance for hidden costs.
Atlassian provides an automated migration tool that moves alerts, schedules, and policies from Opsgenie to JSM. This is the path of least contract friction if you're already an Atlassian shop.
What migrates automatically: Users, teams, on-call schedules, and escalation policies sync automatically. Most integrations carry over; alert policies migrate, but action policies do not and must be recreated manually.
What does not migrate: Chat integrations for incidents require manual reconnection. Atlassian deprecated the Incident Command Center feature. The migration tool skips heartbeats without assigned teams because Atlassian considers them outdated.
Setup time: Expect significant configuration effort. JSM is a service desk platform first, incident management second. Complex implementations can take 2-3 months depending on your workflow requirements.
Pricing: JSM uses agent-based licensing at approximately $20-21 per agent per month for the Standard plan. Budget for professional services to configure workflows. Atlassian Solution Partners charge around $160/hour.
Best for: Teams already deeply invested in Atlassian products (Jira, Confluence, Bitbucket) who want to stay in that ecosystem and can absorb the setup complexity.
Not for: Teams who need fast deployment, Slack-native workflows, or transparent pricing.
PagerDuty is the established incumbent, which comes with both upsides and downsides.
What you get: Alerting and routing, mature mobile apps, deep customization for complex escalation policies, and proven reliability.
What you pay for: The Professional plan is $21 per user per month when billed annually. The Business plan is $41 per user per month. PagerDuty paywalls critical features: noise reduction costs extra, AI features cost extra, advanced runbooks cost extra. Total costs can reach $119,000+ annually for larger teams with add-ons.
Best for: Enterprises needing maximum alerting flexibility, teams with dedicated PagerDuty admins, or organizations where cost is not the primary concern.
Not for: Teams looking to reduce tool sprawl. PagerDuty handles alerting well but requires separate tools for status pages, post-mortems, and timeline capture.
We built incident.io as a Slack-native incident management platform for teams who coordinate incidents in chat, not browser tabs. We cover on-call scheduling, incident response, status pages, and post-mortems in one platform.
What you get: The entire incident lifecycle runs in Slack. When Datadog fires an alert, we auto-create #inc-2847-api-latency, page the on-call engineer, pull in service owners, and capture the timeline automatically. Engineers manage incidents with slash commands (/inc escalate, /inc severity high). Our AI SRE automates up to 80% of incident response and identifies root causes.
What you pay: Our Pro plan is $45/user/month all-in with on-call ($25 base + $20 on-call). For 120 users, that's $64,800 per year. No hidden add-ons. No feature gating. Pro includes AI-powered post-mortem generation, unlimited workflows, custom incident types, and Microsoft Teams and Slack support.
"Too many to list - it's a one stop shop for incident management (not just on call rotations like many competitors. Built in and custom automations, great slack integration, automated post mortem generation, jira ticket creation, followup and actions creation..." - Verified incident.io User in Real Estate
Best for: Teams who live in Slack, want fast deployment, and value support velocity (bugs fixed in hours, not weeks). Teams tired of tool sprawl who want on-call, response, status pages, and post-mortems unified.
Not for: Teams requiring deep microservice SLO tracking (use Blameless instead) or "customize everything" flexibility (PagerDuty offers more knobs).
| Dimension | Jira Service Management | PagerDuty | incident.io |
|---|---|---|---|
| Typical Setup Time | 8-12 weeks | 4-6 weeks | 3-5 days |
| Training Required | High (service desk UI) | Medium (web UI + mobile) | Low (Slack-native) |
| Automated Migration Tool | Yes (from Opsgenie only) | No (manual export/import) | Yes (direct Opsgenie integration) |
| Support Model | Email ticketing | Online portal + email | Shared Slack channels |
A zero-downtime migration requires four phases: audit and cleanup, mapping, parallel run, and cutover. This playbook works regardless of which platform you choose. Each phase includes specific tasks, API commands, and validation checks.
Do not migrate garbage. Clean up your Opsgenie configuration before exporting.
1. Audit active users and remove inactive accounts. Export your user list via the Opsgenie List Users API. Identify inactive accounts. Remove users who left the company. Verify contact methods (email, SMS, phone) are current.
2. Review on-call schedules and fix gaps. Export schedules in iCalendar format or via the Schedule API. Identify schedules with no assigned team members, overlapping schedules, or rotations that no longer match team structure. Fix these now.
3. Document routing rules and delete dead rules. Opsgenie routing rules determine which team gets paged based on alert tags. List every rule. Document the logic. Identify rules that route to teams that no longer exist or to services that were decommissioned.
4. Inventory integrations. List every integration sending alerts to Opsgenie: Datadog, Prometheus, New Relic, CloudWatch, custom webhooks. Document the alert payload format. You'll need to reconfigure these in the new platform.
5. Archive historical incident data. Opsgenie does not offer bulk incident export through the UI, so use a script against the Incident API to paginate through incident history. Export to JSON or CSV and store it in S3 or Google Cloud Storage for long-term retention (a sketch of such a script follows below).
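Here is a minimal export sketch, assuming Python with the requests library and an OPSGENIE_API_KEY environment variable. The fetch_all helper follows paging.next links and backs off on 429 responses; the same helper works for the user list and the incident list (verify the incident endpoint path against your Opsgenie API docs before running):

```python
# Minimal sketch: archive Opsgenie users and incident history to local JSON files.
# Assumes the `requests` library and an OPSGENIE_API_KEY environment variable.
import json
import os
import time

import requests

HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

def fetch_all(url, params=None):
    """Follow paging.next links, pausing and retrying on 429 rate-limit responses."""
    items = []
    while url:
        resp = requests.get(url, headers=HEADERS, params=params, timeout=30)
        if resp.status_code == 429:
            # Retry-After fallback of 10 seconds is a guess; adjust to taste.
            time.sleep(int(resp.headers.get("Retry-After", "10")))
            continue
        resp.raise_for_status()
        body = resp.json()
        items.extend(body.get("data", []))
        url = body.get("paging", {}).get("next")  # absolute URL to the next page
        params = None  # the next-page URL already carries its query parameters
    return items

# The incidents path below follows the endpoint list later in this guide;
# confirm it against your Opsgenie API documentation before relying on it.
exports = {
    "users": fetch_all("https://api.opsgenie.com/v2/users", params={"limit": 100}),
    "incidents": fetch_all("https://api.opsgenie.com/v2/incidents", params={"limit": 100}),
}
for name, data in exports.items():
    with open(f"opsgenie_{name}.json", "w") as f:
        json.dump(data, f, indent=2)
    print(f"Exported {len(data)} {name}")
```

Upload the resulting JSON files to S3 or Google Cloud Storage once the export completes.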
Build a user mapping table with three columns: Opsgenie username, email address, and target platform username. If you use different usernames across systems, this mapping prevents paging the wrong person.
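As a minimal sketch, assuming the opsgenie_users.json file produced by the export script above, you can seed that table as a CSV and fill in the target usernames by hand or from your new platform's API (in Opsgenie the username is typically the email address):

```python
# Minimal sketch: seed the user mapping table from the exported Opsgenie user list.
# Assumes opsgenie_users.json from the export script above; target_username is
# filled in manually or from your new platform's API.
import csv
import json

with open("opsgenie_users.json") as f:
    users = json.load(f)

with open("user_mapping.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["opsgenie_username", "email", "target_username"])
    for user in users:
        username = user.get("username", "")  # in Opsgenie this is usually the email
        writer.writerow([username, username, ""])
```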
Recreate escalation policies. Escalation policies define "if primary on-call does not acknowledge in 5 minutes, escalate to secondary." Export escalation policies from Opsgenie. Recreate them in the new platform. Test each policy with a mock alert.
Map alert severities. Opsgenie uses P1-P5 severity levels. Your new platform may use Critical/High/Medium/Low. Document the mapping. Update alert rules in monitoring tools to use the new severity labels.
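A minimal sketch of that mapping, with placeholder target labels; substitute whatever severity scheme your new platform and monitoring tools actually use:

```python
# Minimal sketch: map Opsgenie P1-P5 priorities onto a Critical/High/Medium/Low
# scheme. The target labels are placeholders for your new platform's severities.
SEVERITY_MAP = {
    "P1": "critical",
    "P2": "high",
    "P3": "medium",
    "P4": "low",
    "P5": "low",  # P4 and P5 often collapse into a single non-paging tier
}

def map_severity(opsgenie_priority: str) -> str:
    # Default to "medium" for anything unexpected so the alert still routes somewhere.
    return SEVERITY_MAP.get(opsgenie_priority, "medium")
```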
Recreate routing rules. Export routing rules from Opsgenie. Recreate them in the new platform. Test with sample alerts. Verify the correct team is paged.
The parallel run is the key to zero downtime. Run both Opsgenie and your new platform simultaneously. If the new platform fails, Opsgenie catches it.
Update your monitoring tools (Datadog, Prometheus, New Relic) to send alerts to both Opsgenie and the new platform. Most monitoring tools support multiple notification endpoints. If yours doesn't, use a webhook relay service or a Lambda function to fan out alerts.
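A minimal fan-out sketch, written as an AWS Lambda-style handler behind an HTTP endpoint; both target URLs are placeholder environment variables pointing at your existing Opsgenie integration webhook and the new platform's alert ingestion endpoint:

```python
# Minimal sketch of a fan-out relay as a Lambda-style handler. OPSGENIE_URL and
# NEW_PLATFORM_URL are placeholders for your two alert ingestion endpoints.
import json
import os
import urllib.request

OPSGENIE_URL = os.environ["OPSGENIE_URL"]
NEW_PLATFORM_URL = os.environ["NEW_PLATFORM_URL"]

def _forward(url: str, payload: bytes) -> int:
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

def handler(event, context):
    """Receive one alert payload and deliver it to both platforms."""
    payload = event.get("body", "{}").encode()
    results = {}
    for name, url in (("opsgenie", OPSGENIE_URL), ("new_platform", NEW_PLATFORM_URL)):
        try:
            results[name] = _forward(url, payload)
        except Exception as exc:  # one failing target must not block the other
            results[name] = f"error: {exc}"
    return {"statusCode": 200, "body": json.dumps(results)}
```

If you go this route, point your monitoring tools at the relay instead of at either platform directly, and remove the Opsgenie target from the relay at cutover.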
For the first week, the new platform pages responders, but the team continues to acknowledge alerts in Opsgenie. This validates that the new platform is paging the correct person.
For week two, the new platform becomes the primary on-call system. Engineers acknowledge alerts in the new platform. Opsgenie becomes the backup. If an alert fails in the new platform, Opsgenie still pages the team.
Track three metrics during the parallel run: alerts missed (alert fired but no page sent), alerts to wrong team (routing rule misconfigured), and acknowledgment latency (how long until responder sees the alert). Fix issues immediately.
Before ending the parallel run, simulate a P1 incident. Trigger a test alert. Watch the new platform create the incident channel, page on-call, escalate if needed, and capture the timeline.
Give the team one week notice. Send a Slack message and an email. Explain what changes: where to acknowledge alerts, how to declare incidents, and where to view on-call schedules.
On the cutover date, remove Opsgenie from your monitoring tool's notification list. The new platform is now the only alerting system.
Keep Opsgenie read-only for 30 days. Do not delete your Opsgenie account immediately. If you need to reference an old escalation policy or historical incident, the data is still accessible.
After 30 days with zero issues in the new platform, cancel your Opsgenie subscription. Export one final backup of all data. Store it in long-term archival storage.
You have three options for exporting Opsgenie data. Use the REST API if you need maximum control and custom formatting. Use open-source CLI tools if you're consolidating multiple Opsgenie accounts. Use vendor importers if you want automated migration with minimal scripting.
The Opsgenie REST API is the most flexible export method. You can extract exactly the data you need in the format you need.
Key API endpoints:
GET /v2/users lists all users.
GET /v2/users/:identifier fetches detailed information for a specific user, including contact methods.
GET /v2/teams lists all teams.
GET /v2/schedules lists all on-call schedules.
GET /v2/escalations lists all escalation policies.
GET /v2/incidents lists incidents. This endpoint is paginated with a limit of 100 items per request; use the paging.next URL in the response to retrieve the next page.
Authentication: every API request requires an Authorization header of the form Authorization: GenieKey {your_api_key}.
Rate limiting: The API enforces rate limits. If you receive a 429 status code, pause your script and retry.
We built an Opsgenie integration that automates schedule migration. Provide your Opsgenie API key and our integration fetches your users, teams, and schedules, then creates the corresponding on-call schedules in incident.io. Escalation policies need to be recreated manually as escalation paths.
What requires manual configuration: custom integrations, alert routing rules (these depend on your monitoring tool setup), and any Opsgenie-specific features like action policies.
Migration fails if engineers don't adopt the new tool. Two factors drive adoption: ease of use and clear communication.
Day 1: Kickoff and guided walkthrough. Host a 30-minute session. Show the team how to view the on-call schedule, acknowledge an alert, and declare an incident. If you're migrating to incident.io, demonstrate slash commands in Slack.
Day 2: Hands-on practice. Give engineers sandbox access. Have them trigger test alerts. Walk through acknowledging alerts, reassigning incidents, and updating severity.
Day 3: Shadow the first real incident. When the first real alert fires, have a migration champion shadow the responder. Answer questions in real-time. Document any friction. Fix it immediately.
Engineers learn by doing, not by reading documentation. Three days of hands-on practice with real incidents builds muscle memory. After 3-4 incidents, the new tool becomes second nature.
"The onboarding experience was outstanding — we have a small engineering team (~15 people) and the integration with our existing tools (Linear, Google, New Relic, Notion) was seamless and fast less than 20 days to rollout. The user experience is polished..." - Bruno D on G2, an incident.io user
Routing rules are the most common migration failure point. Test every rule before cutover.
Build a test alert matrix. List every combination of service, environment, and severity that should trigger an alert. Example: service:api, env:production, severity:critical should page the API on-call team.
Use your monitoring tool's test alert feature or a webhook testing tool like Postman. Send each test alert to the new platform and verify the correct team is paged.
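A minimal sketch of driving that matrix from a script rather than clicking through Postman; TEST_WEBHOOK_URL and the payload fields are placeholders to match whatever your new platform's alert source expects:

```python
# Minimal sketch: send one test alert per (service, environment, severity)
# combination in the matrix. TEST_WEBHOOK_URL and the payload shape are
# placeholders for your new platform's alert ingestion endpoint.
import itertools
import json
import urllib.request

TEST_WEBHOOK_URL = "https://example.com/alert-webhook"  # placeholder

SERVICES = ["api", "web", "worker"]
ENVIRONMENTS = ["production", "staging"]
SEVERITIES = ["critical", "high", "medium"]

for service, env, severity in itertools.product(SERVICES, ENVIRONMENTS, SEVERITIES):
    payload = {
        "title": f"[TEST] {service} {env} {severity}",
        "service": service,
        "environment": env,
        "severity": severity,
    }
    req = urllib.request.Request(
        TEST_WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(service, env, severity, "->", resp.status)
    # After each send, confirm in the new platform that the expected team was paged.
```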
Set escalation policies to short timers (1 minute instead of 5 minutes) during testing. Trigger a test alert. Do not acknowledge it. Verify the secondary on-call is paged after 1 minute.
Test mobile notifications on iOS and Android. Verify push notifications, SMS, and phone calls all work. Test with the device in silent mode, do not disturb mode, and airplane mode.
Total cost of ownership (TCO) includes more than the subscription fee. Factor in implementation (engineer hours spent configuring vs. building features), training (hours to onboard each engineer), integration maintenance (hours spent fixing broken webhooks), and opportunity cost (revenue impact of slower MTTR during migration).
PagerDuty's cost structure:
The Professional plan costs $21 per user per month when billed annually. The Business plan is $41 per user per month. PagerDuty paywalls critical features. Want AI-powered root cause analysis? Upgrade to the Professional or Business plan. Event intelligence to reduce alert noise is an add-on starting at $699/month. Total costs can exceed $119,000 annually for teams of 100+ with add-ons.
Jira Service Management's cost structure:
JSM uses agent-based licensing at approximately $20-21 per agent per month for the Standard plan or $47-48 per agent per month for the Premium plan. JSM requires significant configuration, so budget for professional services; Atlassian partners typically charge around $160/hour. Complex implementations can take 2-3 months, so implementation is likely to be your biggest cost.
incident.io transparent pricing:
Our Pro plan costs $45/user/month all-in with on-call ($25 base + $20 on-call). For 120 users, that's $64,800 per year. No hidden fees. No feature gating. AI SRE and automated post-mortems are included. Our setup is faster, so you spend days on migration, not months. At a $150 loaded hourly rate, that's significant engineer time saved.
"Incidentio has always been very responsive to our requests, open to feedback and quick reaction. It didn't happen only once when we reported some type of issue in the process to Incident.io support and in matter of hours the fix was released." - Gustavo A on G2, senior SRE and user of incident.io
Our support runs through shared Slack channels. Bugs are fixed in hours, not weeks. We know how important quick answers are when your company is relying on you.
The Opsgenie sunset is a forced migration, but it's also an opportunity to modernize your incident management stack. Whether you migrate to JSM, PagerDuty, or a modern alternative like incident.io, start planning now and execute the parallel run with discipline. April 2027 will arrive faster than you think. Book a demo with the team and we'll show you the platform.
Opsgenie Sunset: The phased discontinuation of Atlassian's standalone Opsgenie product, with end of sale on June 4, 2025, and end of life on April 5, 2027.
Parallel Run: A migration strategy where both the old and new incident management platforms receive alerts simultaneously, allowing validation of the new system without risking missed pages.
Routing Rules: Logic that determines which team or individual receives a page based on alert attributes like service, environment, severity, or custom tags.
Escalation Policy: The sequence of notifications sent if the primary on-call responder does not acknowledge an alert within a defined time period, typically escalating to secondary and tertiary responders.
TCO (Total Cost of Ownership): The comprehensive cost of a platform over a defined period, including subscription fees, implementation, training, integration maintenance, and opportunity cost of engineering time.
API Rate Limiting: A mechanism that restricts the number of API requests a client can make within a time period, typically enforced by returning a 429 HTTP status code when the limit is exceeded.
Slack-Native: An incident management architecture where the primary user interface and workflow exist inside Slack or Microsoft Teams, not in a separate web application with chat notifications.
Zero-Downtime Migration: A migration strategy that ensures continuous on-call coverage and alert delivery throughout the transition by maintaining operational systems in parallel until the new platform is fully validated.

Ready for modern incident management? Book a call with one of our experts today.
