# Opsgenie vs. JSM RFP template: Questions to ask before buying

*March 13, 2026*


> **TL;DR** Atlassian sunsets standalone Opsgenie on April 5, 2027, and new purchases stopped June 4, 2025. JSM Premium costs $51.42/user/month and requires significant configuration before your team handles a real incident. Before defaulting to JSM, use this RFP template to expose hidden TCO, Slack integration gaps, and AI capabilities that will determine whether your next tool cuts MTTR or just adds another portal to manage. We built incident.io as a Slack-native alternative covering on-call, response, and post-mortems in one platform: the Pro plan covers response and AI, with on-call scheduling as an add-on. Try these questions against any vendor, including us.

Atlassian sunsets the standalone Opsgenie experience on April 5, 2027. New Opsgenie purchases stopped on June 4, 2025. If you're currently on Opsgenie, you have a forced decision, but the question isn't whether you move. The question is whether you use this moment to improve your incident management stack or just swap one legacy tool for another. Most teams will default to JSM because Atlassian makes it the path of least resistance. We wrote this guide to make sure you don't do that without asking the right questions first.

## Why the Opsgenie sunset forces a full vendor evaluation

The Atlassian migration path points you toward Jira Service Management (JSM). Moving to JSM isn't a continuation of what you had. It's a new purchase decision with new pricing, new configuration overhead, and a fundamentally different product philosophy, which means you need to evaluate it with the same rigor you'd apply to any new vendor.

**Three reasons this matters:**

1. **Architectural mismatch:** JSM is an IT service management (ITSM) platform. Ticketing is its native language, and Atlassian layered incident alerting and on-call scheduling on top by absorbing Opsgenie's functionality. If your team runs SRE workflows, resolves 15-25 production incidents a month, and lives in Slack, that architectural origin creates friction you'll feel in every incident.
2. **Documented regressions:** [Migration pain reports from InfoQ](https://www.infoq.com/news/2025/03/atlassian-opsgenie-consolidation/) highlight common JSM problems for SRE teams: noisy Slack alerts that overwhelm channels, loss of granular on-call permissions, and no combined workflows between JSM tickets and Jira development stories.
3. **Strategic opportunity:** This is your chance to audit whether the "alerting + ticketing in one suite" model serves your SRE team, or whether a purpose-built incident coordination platform would cut your MTTR faster. The DevOps community has taken notice, with well-funded engineering teams increasingly moving to incident-specific platforms rather than consolidating into broad ITSM suites.

Use the evaluation process below to test that hypothesis against your own data.

## Key evaluation areas: The questions that matter

Move beyond "Does it have on-call?" to questions that expose friction under real pressure: how does on-call work when your junior engineer gets paged at 2 AM for the first time? Feature checklists won't surface those failure modes. These questions will.

### Automation and AI capabilities

Most incident management vendors now claim AI, but the difference between useful AI and expensive hallucination comes down to grounding: does the AI reason from your actual incident data, captured timelines, and past post-mortems, or does it wrap a generic LLM around your logs and guess?

**Questions to ask every vendor:**

1. What data sources does your AI use to identify root causes? Can you show documented precision and recall metrics?
2. Does the AI auto-draft post-mortems from timeline data captured during the incident, or does it generate summaries from scratch after the fact?
3. Can workflows auto-trigger based on alert payloads? For example: if a Datadog alert contains `severity=critical` and `service=payments`, does an incident auto-open with the payments team paged? (A minimal sketch of this kind of rule follows this list.)
4. What percentage of the incident response process can your AI automate end-to-end?
5. How does your AI handle novel incidents with no historical precedent?
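
To make question 3 concrete, here is a minimal sketch of the payload-based routing you want a vendor to support natively, without middleware. The rule shape and field names are hypothetical, for illustration only; they are not any vendor's actual schema.

```python
# Hypothetical sketch: evaluating a monitoring alert payload against
# ordered routing rules. Field names and rule shape are illustrative.

ROUTING_RULES = [
    # If every condition matches the payload, auto-open an incident
    # and page the listed team. First match wins.
    {"match": {"severity": "critical", "service": "payments"}, "page": "payments-team"},
    {"match": {"severity": "critical"}, "page": "on-call-primary"},
]

def route_alert(payload: dict) -> str | None:
    """Return the team to page for this alert, or None to stay silent."""
    for rule in ROUTING_RULES:
        if all(payload.get(k) == v for k, v in rule["match"].items()):
            return rule["page"]
    return None

# Example: a Datadog-style alert tagged severity=critical, service=payments
alert = {"severity": "critical", "service": "payments", "monitor": "p99 latency"}
assert route_alert(alert) == "payments-team"
```

If a vendor can't express rules like this declaratively inside their product, your team ends up writing and maintaining this glue code yourself.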

We built incident.io's AI SRE to use the live incident timeline, not post-hoc log scraping, to generate summaries and post-mortem drafts. The [AI Slack assistant](https://docs.incident.io/ai/slack-assistant) lets responders draft updates, create follow-ups, and manage incident state directly in Slack without leaving the channel.

JSM's Premium tier offers AI-powered alert grouping and post-incident review generation through Rovo Agents (Atlassian's AI agent framework embedded in JSM), but that tier costs $51.42/user/month and requires significant workflow configuration before those AI features deliver value.

### Integration with your existing stack

The critical distinction is between a native integration and a notification pipe. A notification pipe pushes alerts into Slack. A native integration lets you run the full incident lifecycle inside Slack, with bi-directional sync, without ever opening a browser tab.

**Questions to ask every vendor:**

1. Is your Slack integration bi-directional? Can I page teams, change severity, assign roles, and resolve incidents entirely via Slack commands, or do I need to leave Slack for any of those actions?
2. Do I need middleware, custom webhooks, or third-party connectors to link Datadog alerts to incident creation?
3. What happens to the Slack integration if Slack changes their API? Who maintains compatibility?
4. Can I sync follow-up tasks back to Jira automatically with incident context attached, or do I need to manually copy data between tools?
5. How does your platform integrate with our service catalog so the right team gets paged based on alert ownership?

JSM's ChatOps integration for Slack provides notification delivery and basic actions via the `/jsmops` command, but most incident management tasks still require navigating the JSM web portal. incident.io is [available natively in Slack](https://slack.com/marketplace/A01DEGPUHHC-incidentio) and provides full bi-directional control. The `/inc` and `/incident` slash commands cover every incident action from declaration to resolution without context switching. You can also [sync follow-up actions to Jira](https://docs.incident.io/admin/jira-sync) with incident timeline context attached automatically, and route alerts to the right team using [service-based escalation paths](https://docs.incident.io/alerts/team-routing).
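
As a rough illustration of what "bi-directional" means in practice, here's a sketch of a resolve command wired up by hand with the real `slack_bolt` library; the incident API endpoint and token are hypothetical placeholders, not any vendor's actual API. A platform with a genuinely native integration ships all of this, and far more, out of the box.

```python
# Sketch: a slash command that resolves an incident from inside Slack.
# slack_bolt is real; the vendor endpoint below is a made-up placeholder.
import os
import requests
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

@app.command("/inc")
def handle_inc(ack, command, respond):
    ack()  # Slack requires an acknowledgment within 3 seconds
    args = command["text"].split()  # e.g. "resolve INC-123"
    if len(args) == 2 and args[0] == "resolve":
        incident_id = args[1]
        # Hypothetical vendor endpoint -- substitute the real API here.
        r = requests.post(
            f"https://api.example-vendor.com/v1/incidents/{incident_id}/resolve",
            headers={"Authorization": f"Bearer {os.environ['VENDOR_API_KEY']}"},
            timeout=10,
        )
        respond(f"{incident_id} resolved" if r.ok else f"Failed: {r.status_code}")
    else:
        respond("Usage: /inc resolve <incident-id>")

if __name__ == "__main__":
    app.start(port=3000)
```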

> "incident.io brings calm to chaos... incident.io is now the backbone of our response, making communication easier, the delegation of roles and responsibilities extremely clear, and follow-ups accounted for. It's the source of truth for incidents we've always needed." - [Braedon G. on G2](https://g2.com/products/incident-io/reviews/incident-io-review-7547419)

### Security, compliance, and reliability

Your CISO will block this purchase if the vendor can't produce documentation on demand. Collect this before procurement, not after.

**Questions to ask every vendor:**

1. Can you provide your SOC 2 Type II report today, not "after we sign an NDA"?
2. Where is incident data hosted? Do you offer EU data residency for GDPR compliance?
3. What is the uptime SLA, and is it backed by a financial service credit?
4. Do you support SAML/SCIM for SSO and automated user provisioning?
5. What encryption standards are in use for data at rest and in transit?
6. How are API tokens and webhook credentials stored and rotated?

incident.io holds [SOC 2 Type II certification](https://incident.io/blog/weve-successfully-completed-our-soc2-audit), maintains GDPR compliance, and backs a 99.99% uptime SLA with financial service credits. Full security documentation is available on the [incident.io security page](https://incident.io/security). Atlassian carries comprehensive enterprise certifications for JSM including SOC 2 and ISO 27001, with a financially backed 99.9% uptime SLA and 24/7 Premium support with a 1-hour response time for critical issues.

### Vendor support and SLAs

When production is down at 11 PM, "we'll get back to you in 2 business days" is a liability, not a support SLA. Ask specific questions and get specific answers in writing.

**Questions to ask every vendor:**

1. Is there a shared Slack channel with your engineering team during our trial and onboarding?
2. What is your median first-response time for P1 bugs, not your contractual SLA target?
3. Who responds to support tickets: a tier-1 support queue or engineers who built the product?
4. Can you point to documented examples of feature requests shipped within days of customer feedback?
5. What happens to support access if we're on a smaller plan? Does response quality degrade?

Our G2 reviews show a pattern of engineering-to-engineering responsiveness, which matters most when your incident tool is itself the thing failing during an incident.

### Total cost of ownership (TCO)

License fees are the starting point, not the total cost. The real number includes implementation time, ongoing admin overhead, and the productivity tax of using a tool that adds friction to every incident.

**Questions to ask every vendor:**

1. Is on-call scheduling included in the base license, or is it a separate add-on?
2. Are there per-seat fees for stakeholders who only need to view incident status?
3. What is the realistic implementation timeline before your team handles a real incident through the platform?
4. How many engineer-hours per month does ongoing administration require, and who owns that work?
5. What is the fully loaded cost for a 50-person team over 12 months, including implementation and admin?

For JSM, full on-call scheduling and advanced incident management require the Premium tier at approximately $51 to $53 per user per month. Pricing is tiered by team size, so per-agent rates decrease as headcount grows and the exact annual cost for a 50-person team depends on which tier applies; even at the $51 to $53 reference range, the license alone adds up quickly before accounting for any implementation work. Per the [InfoQ analysis of the Opsgenie migration](https://www.infoq.com/news/2025/03/atlassian-opsgenie-consolidation/), full ITSM implementation often takes significantly longer than basic ticketing setup, and many organizations bring in certified partners to complete it. Beyond implementation, someone on your team will own ongoing workflow configuration, routing rules, and escalation policy maintenance across every service change, team restructure, and API update Atlassian ships.

Our Pro plan's opinionated defaults mean teams can [run incidents in Slack](https://incident.io/respond) from day one, not months in. For teams moving from Opsgenie, incident.io provides [dedicated migration tooling](https://docs.incident.io/getting-started/migrate-from-opsgenie) to import escalation policies and on-call schedules, which cuts migration overhead significantly. The Pro plan starts at $25 per user per month and includes AI-powered post-mortems and unlimited workflows. On-call management is a separate add-on at $20 per user per month, bringing the total to $45 per user per month for teams that need both capabilities.
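
Using the list prices above, the license-only math for a 50-person team looks like this. It deliberately excludes implementation and admin hours, the terms that dominate real TCO, because those vary by organization; plug in your own estimates before comparing.

```python
# License-only annual cost at the list prices quoted above. JSM's tiered
# pricing means the real per-agent rate drops at higher headcounts, so
# treat the $51.42 figure as a reference rate, not a quote.
TEAM_SIZE = 50

jsm_premium_per_user_month = 51.42         # JSM Premium reference rate
incidentio_per_user_month = 25.00 + 20.00  # Pro plan + on-call add-on

def annual_license(per_user_month: float, seats: int = TEAM_SIZE) -> float:
    return per_user_month * seats * 12

print(f"JSM Premium: ${annual_license(jsm_premium_per_user_month):,.0f}/yr")  # $30,852
print(f"incident.io: ${annual_license(incidentio_per_user_month):,.0f}/yr")   # $27,000
```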

## Opsgenie vs. JSM vs. incident.io comparison

| Criteria | Opsgenie | JSM | incident.io |
| --- | --- | --- | --- |
| Status | End of support April 2027 | Active (Atlassian consolidation target) | Active, 600+ customers |
| Setup time | Days | Weeks to months for full ITSM | Minutes to hours |
| Slack maturity | Notification-based | Notification + limited commands | Fully bi-directional, native ChatOps |
| On-call cost | Included (legacy) | Requires Premium ($51.42/user/mo) | Pro $25/user/mo + on-call add-on $20/user/mo |
| Post-mortem automation | Manual, basic templates | AI-assisted (Premium + Rovo) | AI-drafted from live timeline |
| Primary workflow | Alert-based workflow | Jira-based portal | Slack-native, /inc commands |
| AI capabilities | Limited | Alert grouping, Rovo Agents (Premium) | AI SRE, timeline-grounded summaries |
| Best for | Legacy alerting (sunsetting) | Enterprise IT service desks | Engineering-led SRE incident response |

For small and scaling engineering teams, JSM's ITSM depth creates more surface area to configure and maintain than most SRE teams need. Atlassian is absorbing the standalone Opsgenie experience into JSM, so staying on standalone Opsgenie is not an option. If you run microservices on Kubernetes and coordinate primarily in Slack, JSM's portal-first workflow adds the context-switching overhead you're trying to eliminate. The [2026 incident response tool landscape](https://incident.io/blog/best-incident-response-tools-2026) shows many engineering teams adopting Slack-native platforms to reduce coordination overhead.

## RFP checklist for incident management

Copy this into a Google Sheet or Notion doc. Send it to every vendor. Score responses from 1-5 and weight them against the criteria you care about most.

**General and vendor viability:**

1. How many engineering teams are actively using your platform today?
2. What is your annual recurring revenue and employee count? (Assess vendor stability.)
3. Can you provide three reference customers with similar team size and stack for direct calls?
4. What is your product release cadence? Can I see a public changelog?
5. What is your data deletion policy if we cancel, and how long is the offboarding period?

**Incident response workflow: the 3 AM test (i.e., how the platform performs under real, out-of-hours production pressure):**

1. Walk me through the exact steps from a Datadog alert firing to a Slack channel being created. How many are manual?
2. Can I declare, escalate, assign roles, and resolve an incident without leaving Slack? Show me.
3. How does the platform handle a P1 escalation when the primary on-call engineer doesn't acknowledge within 5 minutes? (See the sketch after this list.)
4. What is the onboarding time for a new junior engineer joining on-call? How many steps before their first solo incident?
5. Does the platform capture a timeline automatically during the incident, or do engineers need to log updates manually?
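
For question 3, it helps to know what answer you're looking for: a sound escalation policy is just a chain of timed fallback steps. The hypothetical sketch below shows the shape; the structure is illustrative, not any vendor's actual configuration format.

```python
# Hypothetical escalation policy: who gets paged as acknowledgment
# windows expire. Names and structure are illustrative only.
ESCALATION_POLICY = [
    {"notify": "primary-on-call",     "ack_timeout_minutes": 5},
    {"notify": "secondary-on-call",   "ack_timeout_minutes": 5},
    {"notify": "engineering-manager", "ack_timeout_minutes": None},  # last resort
]

def next_step(minutes_unacknowledged: int) -> str:
    """Who should be paged, given how long the alert has gone unacknowledged."""
    elapsed = 0
    for step in ESCALATION_POLICY:
        timeout = step["ack_timeout_minutes"]
        if timeout is None or minutes_unacknowledged < elapsed + timeout:
            return step["notify"]
        elapsed += timeout
    return ESCALATION_POLICY[-1]["notify"]

assert next_step(3) == "primary-on-call"       # still within the first window
assert next_step(7) == "secondary-on-call"     # primary missed the 5-minute ack
assert next_step(20) == "engineering-manager"  # both windows exhausted
```

The platform should run this logic for you, automatically and reliably; if the vendor's answer involves manual re-paging, that's a failed 3 AM test.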

**On-call management:**

1. Is on-call scheduling included in the base license or a paid add-on?
2. How do I configure override schedules for holidays or team absences?
3. Can I route alerts to specific teams based on the alert payload content?
4. What notification channels are supported? (Slack, phone, SMS, email, mobile push.)
5. How does the platform handle multi-team escalations where both the backend and database teams need paging simultaneously?

**Post-incident analysis:**

1. Does the AI use live timeline data captured during the incident, or post-hoc log analysis?
2. How long does it take to produce a publishable post-mortem draft after incident resolution?
3. Can the platform identify patterns across incidents automatically? For example: "Database incidents are up 40% this quarter." (A toy version of this analysis follows the list.)
4. How are follow-up action items created and tracked? Do they sync to Jira or Linear automatically?
5. Can I search across all historical post-mortems for a specific term or service?
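
To pin down what question 3 is really asking for, here's a toy version of that quarter-over-quarter analysis over a handful of made-up incidents. A real platform runs this continuously over its own incident store; the point is that the vendor, not your team, should own this query.

```python
# Toy pattern detection: count incidents per service per quarter and flag
# increases. The incident data below is fabricated for illustration.
from collections import Counter
from datetime import date

incidents = [  # (date, service)
    (date(2025, 7, 3), "database"),   (date(2025, 8, 19), "database"),
    (date(2025, 9, 2), "api"),        (date(2025, 10, 5), "database"),
    (date(2025, 11, 12), "database"), (date(2025, 12, 1), "database"),
]

def quarter(d: date) -> str:
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

counts = Counter((quarter(d), svc) for d, svc in incidents)

# Compare the two most recent quarters for each service
for svc in {s for _, s in incidents}:
    prev, curr = counts[("2025-Q3", svc)], counts[("2025-Q4", svc)]
    if prev and curr > prev:
        pct = 100 * (curr - prev) / prev
        print(f"{svc} incidents up {pct:.0f}% quarter-over-quarter")  # database: 50%
```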

**Platform and security:**

1. Provide your SOC 2 Type II report. Is it available today without an NDA?
2. Where is data hosted? Is EU data residency available?
3. Do you support SAML/SCIM for SSO and automated user provisioning?
4. What is your uptime SLA? Is it backed by a financial service credit?
5. How are webhook credentials and API tokens stored, scoped, and rotated?

## Making the business case to your CTO

When you present this to your VP Engineering or CTO, lead with the number that settles the conversation: MTTR multiplied by loaded engineer cost.

**The math your CTO needs to see:**

If your team handles 18 incidents/month at a median MTTR of 48 minutes, and 15 of those minutes are pure coordination overhead (assembling the team, opening tools, finding context), then 270 minutes per month disappear before anyone starts troubleshooting. At $150 loaded engineer cost per hour, that's $675/month or $8,100/year burned on logistics alone. Eliminating coordination overhead converts directly to product velocity. Frame it this way: "We're not buying a tool. We're reclaiming 3,240 engineering minutes per year and reducing customer-facing downtime."
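
Here's the same calculation as a small script, parameterized so you can rerun it with your own incident volume, coordination overhead, and loaded cost before presenting it.

```python
# Coordination-overhead cost using the figures from the example above;
# swap in your own numbers from your incident history.
incidents_per_month = 18
coordination_minutes = 15   # per incident: assembling the team, finding context
loaded_cost_per_hour = 150  # fully loaded engineer cost, USD

minutes_per_year = incidents_per_month * coordination_minutes * 12  # 3,240
annual_cost = minutes_per_year / 60 * loaded_cost_per_hour          # $8,100

print(f"{minutes_per_year:,} engineering minutes/yr -> ${annual_cost:,.0f}/yr")
```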

**The Jira admin tax JSM doesn't advertise:**

Someone owns the configuration of your JSM workflows, routing rules, and escalation policies. That person spends time maintaining them every time your team structure changes, every time you add a new service, and every time Atlassian releases a breaking change to the API. Before you commit to JSM, get a clear answer on who owns that ongoing configuration burden and how many hours per month it realistically costs.

**The case for a specialized platform:**

JSM optimizes for ITSM breadth. incident.io optimizes for engineering incident velocity. These are different products solving different problems, and if your CTO asks "why can't we just use JSM?", the honest answer is: you can, but you'll spend significant time configuring it to approximate what a purpose-built platform ships on day one. The [ITSM and DevOps integration guide](https://incident.io/blog/itsm-devops-integration-guide-2026) goes deeper on where these tools complement vs. compete. Teams can use JSM for IT service ticketing and incident.io for engineering incident response, with a Jira sync connecting the two.

> "We've been using incident.io for some time now, and it's been a game-changer for our incident management processes... With seamless integrations into Slack, Jira, and Confluence, it has become our go-to for bringing teams together to tackle incidents faster and more efficiently." - [Patrik A. on G2](https://g2.com/products/incident-io/reviews/incident-io-review-10325562)

For teams ready to apply these RFP questions against a real platform, [schedule a demo](https://incident.io/demo) and we'll show you the numbers against your current setup. You can also watch the [Beyond the Pager webinar](https://incident.io/webinar-beyond-the-pager) where we walk through the Opsgenie sunset options with SRE teams navigating the same decision.

## Key terminology

**MTTR (Mean Time To Resolution):** The average time from incident declaration to full resolution. This is the primary metric SRE teams use to measure incident response effectiveness and the ROI of tooling changes.

**ITSM (IT Service Management):** A framework for managing IT services, covering ticketing, change management, and asset management. JSM is an ITSM platform where incident response is one feature in a much broader suite.

**ChatOps:** Running operations workflows, including incident management, directly inside a chat platform like Slack using slash commands and bots rather than switching to a separate web portal.

**On-call rotation:** A schedule where engineers take turns as the primary responder for production alerts outside business hours. On-call management includes scheduling, escalation rules, override policies, and notification routing.

**Post-mortem:** A structured document written after an incident that captures the timeline, root cause, contributing factors, and follow-up actions. Blameless post-mortems focus on system failures rather than individual mistakes.

**SOC 2 Type II:** A security audit that verifies a vendor's controls have operated effectively over a 6-12 month period, making it more rigorous than Type I and the standard most enterprise CISOs require before approving a new SaaS vendor.

**SCIM (System for Cross-domain Identity Management):** A protocol for automating user provisioning and deprovisioning across SaaS tools, required by most enterprise IT teams to manage access centrally rather than manually per tool.