Updated December 31, 2025
TL;DR: Atlassian is shutting down Opsgenie by April 5, 2027. Migrating your Datadog, Prometheus, AWS CloudWatch, and Grafana alerts to incident.io takes days, not months. Use a 14-day parallel run where both systems receive alerts simultaneously to validate routing rules without risk. Generate new webhook URLs in incident.io, configure dual-send from your monitoring tools, test escalation paths, then cut over. This guide covers webhook mapping, payload transformation, routing rule translation, and a validation checklist to ensure zero alert loss during the switch.
Atlassian set a hard deadline in the Opsgenie sunset timeline: sales ended June 4, 2025, and the platform shuts down April 5, 2027. You have a forced migration ahead, but the real question is not whether to migrate but where and how to do it without dropping alerts.
The migration looks daunting at first. Datadog fires alerts into Opsgenie, Prometheus routes through Alertmanager, AWS CloudWatch pushes SNS notifications, Grafana dashboards wire into on-call schedules. Rewiring all of this feels like changing engines mid-flight. But modern alert integrations are mostly webhooks with JSON payloads. If you migrate strategically using a parallel-run approach, you validate your new setup while keeping Opsgenie as a safety net. No downtime. No missed pages. No 3 AM surprises.
This guide walks through the technical specifics of reconnecting your monitoring stack to incident.io: webhook URL generation, payload mapping, routing rule translation, and a step-by-step validation checklist.
Atlassian has set clear deadlines in their shutdown announcement: new Opsgenie purchases stopped June 4, 2025, and the platform shuts down entirely April 5, 2027. Existing customers can renew subscriptions and add users until the final shutdown, but Atlassian actively pushes teams toward Jira Service Management or Compass.
The problem with the JSM migration path is that it is service-desk-first, not purpose-built for real-time incident response. Complex JSM implementations often take months to configure, and the platform lacks the Slack-native coordination that modern SRE teams expect.
We built incident.io as a Slack-native alternative that unifies on-call scheduling, alert routing, incident response, and post-mortem generation in one platform. You get the full incident lifecycle in Slack, not just alert notifications. The migration from Opsgenie to incident.io typically takes days, not months, because the core alert integrations are straightforward webhook configurations.
The safest way to migrate alert integrations is to run both systems simultaneously for a validation period. We recommend between one day and two weeks, depending on your incident volume and team size.
Here is how the parallel run works in practice:

1. Generate new webhook URLs (alert sources) in incident.io for each monitoring tool.
2. Configure each tool to send alerts to both Opsgenie and incident.io at the same time.
3. Compare what arrives in both systems and test your escalation paths on real alerts.
4. Once routing matches, cut over by removing the Opsgenie destinations and letting incident.io take over paging.
Before disabling Opsgenie completely, confirm these four checkpoints:

- Every alert source (Datadog, Prometheus, CloudWatch, Grafana, and any custom tools) is dual-sending, and its alerts appear in incident.io.
- Alert routes deliver each alert to the right team.
- Escalation paths page the right people within your time-to-acknowledge thresholds.
- On-call schedules and users imported from Opsgenie match your existing rotations.
Datadog is the most common monitoring tool integrated with Opsgenie. We built a native Datadog integration that auto-parses alert payloads to populate incident details, so you spend zero time mapping fields manually.
Configuration Steps:
1. In your Datadog monitor notification messages, add the incident.io handle alongside the Opsgenie handle: `@webhook-incidentio @opsgenie-api-integration` (during the parallel run).
2. The integration maps Datadog fields to incident.io attributes automatically:
   - `$EVENT_TITLE` → Incident title
   - `$ALERT_STATUS` → Incident status (firing/resolved)
   - `$HOSTNAME` and `$TAGS` → Custom attributes for routing

Watch our full platform demo to see how alerts flow from Datadog into incident.io and trigger Slack channels automatically.
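If you connect Datadog through its generic Webhooks integration instead of (or alongside) the @-handle above, a custom payload template along these lines forwards the same fields. This is a sketch: the $-variables are standard Datadog webhook template variables, but check them against your Datadog account, and the top-level keys assume the generic HTTP schema shown later in this guide.

```json
{
  "title": "$EVENT_TITLE",
  "status": "$ALERT_STATUS",
  "deduplication_key": "$ALERT_ID",
  "metadata": {
    "host": "$HOSTNAME",
    "tags": "$TAGS"
  }
}
```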
Prometheus uses Alertmanager to route alerts. We built a native Alertmanager notifier that simplifies the integration to a few lines in your alertmanager.yml config.
Configuration Steps:
Add a receiver for incident.io to your `alertmanager.yml`:

```yaml
receivers:
  - name: 'incidentio-notifications'
    incidentio_configs:
      - url: '<your-incident-io-webhook-url>'
        alert_source_token: '<your-alert-source-token>'
```

Update the `route` section to direct alerts to the new receiver:

```yaml
route:
  receiver: 'incidentio-notifications'
  routes:
    - match:
        severity: 'critical'
      receiver: 'incidentio-notifications'
```

Reload Alertmanager to apply the change:

```bash
kill -HUP $(pidof alertmanager)
```
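If you are running a stock upstream Alertmanager build without an incident.io-specific notifier, a standard `webhook_configs` receiver usually gets you the same result. This is a sketch, not the documented integration: it assumes the alert source you created in incident.io accepts Alertmanager's native webhook payload with a Bearer token, and it requires Alertmanager v0.22 or later for `http_config.authorization`.

```yaml
receivers:
  - name: 'incidentio-notifications'
    webhook_configs:
      # Generic webhook delivery to the incident.io alert source URL
      - url: '<your-incident-io-webhook-url>'
        send_resolved: true          # also notify when alerts resolve
        http_config:
          authorization:
            type: Bearer
            credentials: '<your-alert-source-token>'
```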
If you are new to the Prometheus and Grafana architecture, this five-minute video breakdown shows how alerts flow from Prometheus through Alertmanager to external systems.
AWS CloudWatch uses SNS (Simple Notification Service) to push alerts to external systems. The migration involves creating an SNS topic that forwards to an incident.io webhook.
Configuration Steps:
1. Create an SNS topic for incident.io alerts.
2. Add an HTTPS subscription that points the topic at your incident.io webhook URL.
3. Update your CloudWatch alarms to notify the topic for both ALARM and OK states.

We published a recap of AWS re:Invent 2025 observability sessions that covers modern alerting patterns for CloudWatch, EventBridge, and SNS.
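The same wiring looks roughly like this with the AWS CLI. It is a sketch: the topic name, account ID, region, and alarm definition are placeholders, and SNS will send a confirmation request to the endpoint before it starts delivering notifications.

```bash
# Create the SNS topic that will forward CloudWatch alarms to incident.io
aws sns create-topic --name incidentio-alerts

# Subscribe the incident.io webhook URL as an HTTPS endpoint
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:incidentio-alerts \
  --protocol https \
  --notification-endpoint "<your-incident-io-webhook-url>"

# Create or update an alarm so both ALARM and OK transitions notify the topic
aws cloudwatch put-metric-alarm \
  --alarm-name prod-db-01-cpu-high \
  --metric-name CPUUtilization --namespace AWS/EC2 \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 90 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:incidentio-alerts \
  --ok-actions arn:aws:sns:us-east-1:123456789012:incidentio-alerts
```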
Grafana's alerting system can send notifications to external webhooks. The setup is similar to other integrations.
Configuration Steps:
1. Create a webhook contact point in Grafana that points at your incident.io webhook URL and includes your token in the Authorization header.
2. Add a notification policy that routes the alerts you want paged on to the new contact point (for example, `severity=critical`).

If you have in-house monitoring tools or custom scripts that currently send alerts to Opsgenie, you can configure them to send to incident.io using the generic HTTP alert source.
Configuration Process:
- Set the Authorization header with your token: `Bearer <your-token>`.
- Set the Content-Type to `application/json`.
- Structure the payload like this:

```json
{
  "title": "Database connection pool exhaustion on prod-db-01",
  "status": "firing",
  "deduplication_key": "prod-db-01-connection-pool",
  "description": "Connection pool utilization at 98% for 5 minutes",
  "metadata": {
    "host": "prod-db-01",
    "service": "postgres",
    "team": "platform"
  }
}
```
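To send this payload from a script or a cron job, a plain HTTP POST is all that is required. Here is a minimal sketch with curl, assuming you substitute the webhook URL and token generated for your HTTP alert source:

```bash
# Post a test alert to the incident.io HTTP alert source
curl -X POST "<your-incident-io-webhook-url>" \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "title": "Database connection pool exhaustion on prod-db-01",
        "status": "firing",
        "deduplication_key": "prod-db-01-connection-pool",
        "description": "Connection pool utilization at 98% for 5 minutes",
        "metadata": {"host": "prod-db-01", "service": "postgres", "team": "platform"}
      }'
```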
For additional examples of custom webhook integrations, see the groundcover documentation for payload structures and error handling patterns.
Opsgenie routing rules and escalation policies need to be manually recreated in incident.io. While this requires some effort, it is also an opportunity to clean up outdated routing logic and eliminate "zombie alerts" that nobody acts on.
Here is how Opsgenie concepts map to incident.io:
| Opsgenie Term | incident.io Equivalent | Key Difference |
|---|---|---|
| Team | Team | Direct 1:1 mapping. Import your Opsgenie teams using the automated importer. |
| Routing Rule | Alert Route | Our routing is more flexible, supporting catalog-based routing (by service, team, or feature). |
| Escalation Policy | Escalation Path | We support priority-based escalation and working-hours awareness. Must be recreated manually. |
| Alert Deduplication | Alert Grouping | We use deduplication_key in payloads and intelligent grouping logic to bundle related alerts. |
We built an automated schedule importer that handles user accounts and on-call rotations from Opsgenie, so you do not spend days manually recreating shifts.
Import Process:
1. Pull your schedules via the Opsgenie API: `GET /v2/schedules`.
2. Pull your users: `GET /v2/users`.
3. Pull your teams: `GET /v2/teams`.

Read our guide on designing smarter on-call schedules to learn how to reduce burnout and improve response times once your migration is complete.
Opsgenie's team-based routing can be replicated using our alert routes, but you can also take advantage of our catalog-based routing to make rules more dynamic.
Example translation:
Opsgenie Rule: "If an alert has the tag service:api and priority is P1, escalate to the API On-Call schedule."
incident.io configuration:
- Define a `service` attribute in your alert source.
- Create an alert route with the condition: `service` equals `api` and `priority` equals `P1`.
- Point the route at the API on-call escalation path (a sample matching payload follows below).

Watch our on-call setup walkthrough to see how to configure escalation paths, notification urgency, and time-to-acknowledge thresholds.
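For reference, an alert that would match this route carries the service and priority as attributes in its payload. This is a sketch that reuses the generic HTTP schema shown earlier in the custom-tools section; the exact attribute names depend on how you configure the alert source.

```json
{
  "title": "API p99 latency above SLO",
  "status": "firing",
  "deduplication_key": "api-p99-latency",
  "metadata": {
    "service": "api",
    "priority": "P1"
  }
}
```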
When migrating from Opsgenie, you will need to export historical data for compliance or post-mortem reference.
These are required for your incident.io setup to function correctly:
Priority 1 (Required for operations):
- Users (`GET /v2/users`)
- Teams (`GET /v2/teams`)
- Schedules (`GET /v2/schedules`)
- Escalation policies (`GET /v2/escalations`)

Priority 2 (Useful for historical analysis):

- Incidents (`GET /v2/incidents`) - Export the last 12 months for trending analysis.

Priority 3 (Nice to have):
Our full Opsgenie migration playbook includes API export scripts and guidance on what data to preserve vs. what to leave behind.
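If you want a head start on those exports, the priority-1 data sets can be pulled with a short curl loop. This is a sketch, assuming an Opsgenie API key with read access; the endpoints are paginated, so follow the `paging.next` links in each response for large accounts.

```bash
# Export the priority-1 Opsgenie data sets listed above
export OPSGENIE_API_KEY="<your-opsgenie-api-key>"

for resource in users teams schedules escalations; do
  curl -s \
    -H "Authorization: GenieKey $OPSGENIE_API_KEY" \
    "https://api.opsgenie.com/v2/$resource?limit=100" \
    > "opsgenie_$resource.json"
done
```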
The technical migration is only half the challenge. Your team needs to adopt the new system without confusion or resistance.
We designed incident.io to be intuitive, but your team still needs to understand the new workflow.
Week 1: Announce and demonstrate
- Walk the team through the `/inc` commands in Slack.

Week 2: Shadow mode
- Have engineers run practice incidents with `/inc declare` to get comfortable with the workflow.

Week 3: Go live
Week 4: Decommission
Read our 2025 guide to preventing alert fatigue to learn how to reduce noise and improve on-call experience after your migration.
| Feature | incident.io | Jira Service Management | PagerDuty |
|---|---|---|---|
| Setup Time | Days | Months for complex setups | Varies by configuration |
| Cost (100 users) | $4,500/month (Pro + On-call) | $2,000-2,100/month (Standard) | Typically higher with add-ons |
| Slack-Native | Yes, entire workflow in Slack | No, service desk first | Notifications only |
| Migration Support | Automated importer for schedules | Automated tool from Atlassian | Manual configuration |
We published a detailed pricing and ROI comparison showing the total cost of ownership for incident.io vs. PagerDuty vs. Opsgenie over 12 months, including hidden costs like per-incident fees and add-on charges.
Symptom: Monitoring tool reports webhook failures with HTTP 401 status.
Root Cause: Invalid or expired API token in the Authorization header.
Resolution:
Authorization: Bearer <your-token>.Symptom: Webhooks rejected with HTTP 422 and error message {"error":"invalid_payload"}.
Root Cause: JSON payload does not match our expected schema.
Resolution:
- Validate the JSON against the expected schema and confirm the required fields are present (`title`, `status`, `deduplication_key`).

Symptom: Alerts appear in our Alerts tab but do not trigger the correct escalation path.
Root Cause: Alert route conditions do not match the incoming alert's attributes.
Resolution:

- Compare the alert route's conditions with the attributes on the incoming alert, and adjust either the route conditions or the payload's metadata until they match.
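A quick way to tell these failure modes apart is to replay a minimal test alert from the command line and read the HTTP status code. This is a sketch; substitute the webhook URL and token generated for your alert source.

```bash
# 401 = bad or expired token; 422 = payload failed schema validation; 2xx = accepted
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST "<your-incident-io-webhook-url>" \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{"title": "Migration test alert", "status": "firing", "deduplication_key": "migration-test-1"}'
```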
Watch our AI-powered alert grouping demo to see how we reduce noise by intelligently bundling related alerts based on service, timing, and common attributes.
Migrating alert integrations from Opsgenie to incident.io is not a high-risk operation when you use a parallel-run strategy. By running both systems for 14 days, you validate routing rules, test escalation paths, and ensure zero alert loss before cutting over. The technical work—reconnecting Datadog, Prometheus, AWS CloudWatch, and Grafana—is mostly webhook configuration and JSON payload mapping. The harder part is getting your team comfortable with the new system, which is why the parallel run is so valuable. Your engineers explore incident.io during lower-severity incidents while knowing Opsgenie is still running as a backup.
If you are facing the Opsgenie sunset deadline, now is the time to start planning your migration. Book a demo to see our platform in action and get a customized migration plan for your team.
Alert Route: A configuration in incident.io that determines where an incoming alert should be sent based on its attributes (severity, service, team, etc.). Equivalent to Opsgenie's routing rules but with more flexible catalog-based routing options.
Escalation Path: A multi-level notification sequence that defines who gets paged and when if an alert is not acknowledged. Equivalent to Opsgenie's escalation policies but with more granular control over notification urgency.
Deduplication Key: A unique identifier in an alert payload that prevents duplicate incidents from being created for the same underlying issue. We use this to group related alerts.
Parallel Run: A migration strategy where both the old and new systems operate simultaneously for a validation period, allowing teams to verify the new setup without risk of missed alerts.
Webhook: An HTTP endpoint that receives JSON payloads from monitoring tools when alerts fire. We generate unique webhook URLs for each alert source.
Alert Grouping: The process of bundling related alerts together to reduce noise and prevent alert fatigue. We automatically group alerts based on deduplication keys and service metadata.
Service Catalog: A centralized inventory of services, systems, and features in incident.io that enables dynamic routing based on service ownership rather than static team assignments.
Time to Acknowledge (TTA): The maximum time an on-call engineer has to acknowledge an alert before it escalates to the next level of the escalation path.

Ready for modern incident management? Book a call with one of our experts today.
