# On-call onboarding checklist: 30-day ramp for new engineers

*May 4, 2026*

> **TL;DR:** A structured 30-day ramp fixes common on-call onboarding challenges by dividing preparation into phases: setup, shadow, paired response, and solo shifts. Teams using Slack-native platforms like incident.io can reduce coordination overhead, letting new engineers focus on troubleshooting rather than navigating multiple browser tabs at 3 AM.

The best way to train a new on-call engineer is not to hand them a stack of runbooks. It's to give them fewer tools to manage during an incident.

When a new engineer faces their first P1 at 3 AM, the process is often split across PagerDuty, a Slack thread, a Google Doc, Datadog, and a Jira ticket nobody linked to anything useful. Every tool switch burns time and adds cognitive load. According to [on-call best practices research](https://devops.com/on-call-rotation-best-practices-reducing-burnout-and-improving-response/) from DevOps.com, engineers without documented procedures are forced to "wing it" under pressure, leading to longer MTTR.

This guide gives you a concrete 30-day checklist to move new hires from day-one setup to confident solo shifts, with specific milestones for each week and guidance on how modern AI incident response tools remove the coordination overhead that makes early on-call shifts so painful.

## Accelerate on-call readiness for new hires

### The hidden tax of unplanned on-call ramps

Ad-hoc on-call ramps leave new engineers without escalation paths, forcing them to decide at 2 AM whether to wake the database team, a decision nobody should make alone for the first time. When new engineers absorb the incident management process through observation, three problems compound quickly.

First, tribal knowledge stays locked in senior engineers' heads. The new hire can't page the right team at 11 PM because the escalation path isn't documented or easily accessible. Second, tool sprawl creates a "tool whack-a-mole" loop where each switch between PagerDuty, Datadog, Slack, and Jira costs orientation time. Third, the anxiety compounds: every fumbled alert makes the next shift harder to approach calmly, and that directly increases MTTR. Teams that implement shadow rotations and documented escalation paths consistently reduce MTTR compared to ad-hoc ramps where new engineers learn the process under live pressure.

The good news: structured onboarding plus the right tooling significantly compresses ramp time. Structured on-call programs reduce the new-hire mentoring burden, returning senior SRE capacity to proactive reliability work instead of hand-holding during incidents.

### Equipping new hires for first on-call

**Prerequisites:** Before the 30-day ramp begins, confirm your new hire understands the following baseline concepts:

* **MTTR (Mean Time To Resolution):** The average time it takes to fully resolve a failure, including detection, diagnosis, repair, and verification that the failure won't recur. It's a key metric for on-call efficiency; a quick worked example follows this list.
* **Runbook vs. playbook:** A [runbook](https://incident.io/blog/what-are-runbooks) provides step-by-step instructions for resolving a specific incident type. A [playbook](https://www.nobl9.com/it-incident-management/runbook-example) is broader, defining roles, communication plans, and decision frameworks across incident categories.
* **Cognitive load:** The mental overhead of processing multiple tools and streams of information simultaneously during a high-stress incident.
* **Owner-operator model:** Engineers own their services end to end, including operating them when they break at 3 AM. This creates tighter feedback loops and deeper system ownership.
* **Tier-1 on-call:** The first line of response, responsible for triaging alerts and escalating to the right team if the issue falls outside their scope.
* **SRE (Site Reliability Engineer):** Engineers focused on building and maintaining reliable, scalable systems through a combination of software engineering and operations practices.
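
To make the MTTR definition concrete, here's a quick worked example that averages detection-to-resolution time across a handful of incidents. The incident data is made up for illustration:

```python
from datetime import datetime

# Hypothetical incidents: (detected_at, resolved_at)
incidents = [
    (datetime(2026, 4, 1, 3, 12), datetime(2026, 4, 1, 4, 2)),    # 50 min
    (datetime(2026, 4, 9, 14, 30), datetime(2026, 4, 9, 15, 0)),  # 30 min
    (datetime(2026, 4, 20, 23, 5), datetime(2026, 4, 21, 0, 45)), # 100 min
]

# MTTR = total resolution time / number of incidents
total_minutes = sum((end - start).total_seconds() / 60 for start, end in incidents)
print(f"MTTR: {total_minutes / len(incidents):.0f} minutes")  # MTTR: 60 minutes
```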

## On-call setup: tools and initial configuration from days 1-5

### Verify on-call tool permissions

Early in the onboarding process, run a permissions audit with the new engineer. Industry best practice recommends confirming access to every tool in the incident response stack before the first shift. Specifically, confirm the following (a scripted spot-check is sketched after the list):

* **Alerting platform:** Can view on-call schedules, acknowledge alerts, and see escalation policies. Use [Slack schedule sync](https://help.incident.io/articles/7911237669-how-to-sync-your-on-call-schedules-with-slack-user-groups) to keep on-call membership current.
* **Monitoring tools:** Read-only access to Datadog, Prometheus, or Grafana dashboards for their services.
* **Task tracking:** Access to Jira or Linear for creating and viewing follow-up tickets.
* **Code repositories:** Read access to service repositories on GitHub.
* **Status page:** Viewer access to internal and external status pages so they understand what customers see during an incident.
* **Incident channel:** Confirmed membership in the primary #incidents Slack channel.
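
A scripted spot-check turns this audit into something repeatable for every new hire. Below is a minimal sketch, assuming API tokens are exported as environment variables; the repository name, Jira site, and variable names are placeholders for your own stack:

```python
import os
import requests

# Spot-check read access to each tool. All names below are placeholders.
checks = [
    ("Datadog API key", "https://api.datadoghq.com/api/v1/validate",
     {"headers": {"DD-API-KEY": os.environ["DD_API_KEY"]}}),
    ("GitHub repo read", "https://api.github.com/repos/acme/payments-service",
     {"headers": {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}}),
    ("Jira read", "https://acme.atlassian.net/rest/api/2/myself",
     {"auth": (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])}),
]

for name, url, kwargs in checks:
    code = requests.get(url, timeout=10, **kwargs).status_code
    print(f"{name}: {'OK' if code == 200 else f'failed (HTTP {code})'}")
```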

### Verify incident documentation

New engineers should spend time early in onboarding reading the top five runbooks for the services they own and updating any step that's confusing or out of date. The rule is simple: if you can't follow a runbook step without external help, rewrite it before moving on. This serves two purposes: they absorb the process and they improve documentation for the next person.

### Locate key service details

This is where Slack-native incident management platforms remove a major friction point. Instead of asking a new engineer to find the service owner in one tool, dependencies in another, and the runbook in Confluence, incident.io's Service Catalog surfaces all context directly inside the incident channel the moment an alert fires.

Use the early setup phase for the new engineer to walk through the Service Catalog, identify owners and dependencies for their top three services, and confirm they can navigate to runbook links without assistance from a senior colleague.

### Set up Slack incident channels

During the setup phase, have the new hire practice these commands in a test environment:

* `/inc [description]` to declare a new incident (e.g., `/inc eu-west-1 is down`)
* `/inc help` to open the command picker
* `/inc action [description]` to log a response action

### Automating on-call escalation

Automated escalation paths remove the most anxiety-inducing decision a new on-call engineer faces. When escalation is manual, the new hire has to find who's on-call for the database team, check availability, and decide whether the issue warrants waking someone up at midnight. That decision, made under pressure for the first time, is where mistakes happen.

With incident.io's automated escalation features, managers can verify that escalation policies are configured and that the new engineer appears correctly in the rotation before their first shift goes live. The platform routes alerts automatically, removing the "who do I call?" decision from the new hire's mental load entirely.
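
To see what that automation replaces, here's a minimal sketch of the routing decision an escalation policy encodes. The team names and timeouts are hypothetical, and incident.io configures this in the product rather than in code:

```python
from dataclasses import dataclass

@dataclass
class EscalationLevel:
    target: str           # who gets paged at this level
    timeout_minutes: int  # how long to wait before moving to the next level

# Hypothetical policy: page the primary first, then widen automatically.
DATABASE_POLICY = [
    EscalationLevel("database-oncall-primary", timeout_minutes=5),
    EscalationLevel("database-oncall-secondary", timeout_minutes=10),
    EscalationLevel("database-team-lead", timeout_minutes=15),
]

def current_target(policy: list[EscalationLevel], minutes_unacknowledged: int) -> str:
    """Walk the levels until the unacknowledged time falls inside one."""
    elapsed = 0
    for level in policy:
        elapsed += level.timeout_minutes
        if minutes_unacknowledged < elapsed:
            return level.target
    return policy[-1].target  # policy exhausted: stay at the last level

print(current_target(DATABASE_POLICY, 7))  # database-oncall-secondary
```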

## Acquiring live incident context from days 6-12

### On-call shadowing for fast ramp

A shadow shift is when the new engineer observes an experienced colleague handling a real incident without taking any action themselves. A reverse shadow shift flips the dynamic: the new engineer drives the response while the senior colleague observes and guides without taking over. According to [on-call onboarding research](https://blog.3d-logic.com/2024/08/09/on-call-manual-onboarding-a-new-person-to-the-on-call-rotation/) from 3D Logic, scheduling shadow and reverse shadow shifts helps new engineers apply what they've learned while it's still fresh.

A recommended approach is to schedule multiple shadow shifts followed by multiple reverse shadow shifts. The [on-call as it should be](https://youtube.com/watch?v=r9gIXhRwSGU) video from the incident.io team walks through how structured mentorship and progressive responsibility, including shadow and reverse shadow rotations, are built into how incident.io approaches on-call design.

### New hire incident observation guide

During shadow shifts, have the new engineer focus on this specific observation checklist rather than trying to absorb everything at once:

* **Alert source:** Where did the alert originate (Datadog threshold, Prometheus rule, synthetic monitor)?
* **Channel creation:** How was the incident channel named and who was auto-paged?
* **Role assignment:** Who became the incident commander and how was that communicated to the channel?
* **Escalation trigger:** At what point did the senior engineer decide to page another team?
* **Status page update:** When and how was the status page updated during the incident?

### Simulated incident Slack commands

**Step-by-step instructions for a simulated incident using incident.io:**

Run this simulation in a designated test channel before any live incident. The entire flow happens in Slack, so the simulation feels identical to real production response.

1. **Declare the incident.** Type `/inc staging API latency spike` to open a simulated incident. incident.io creates a dedicated channel and begins capturing the incident timeline.
2. **Assign a role.** Type `/inc assign @newengineer` to practice the incident commander assignment flow.
3. **Set severity.** Type `/inc severity high` to classify the incident and trigger the appropriate workflow.
4. **Log an action.** Type `/inc action rolling back deploy v2.4.1` to add a response step to the captured timeline.
5. **Escalate.** Type `/inc escalate @database-team` to simulate pulling in a specialist team.
6. **Resolve.** Type `/inc resolve connection pool exhaustion, fix applied` to close the incident and trigger the post-mortem draft.

The [incident.io Slack response platform](https://incident.io/incident-response-slack) runs this entire flow inside Slack, so there's no separate training environment to configure or sandbox to maintain.

### Attending incident post-mortems

"Post-mortem archaeology" is the time engineers spend reconstructing incident timelines from memory and scattered tools. For a new engineer, reviewing a post-mortem assembled days after an incident teaches them less because critical context is already missing. Because incident.io captures timelines, role changes, and call transcriptions in real time, the post-mortem draft is substantially complete before anyone starts writing. Assign new engineers to read two or three auto-drafted post-mortems during their shadow week, and they learn from complete, timestamped data rather than reconstructed summaries.

## Supported first-incident rotation from days 13-21

### Structuring pair on-call shifts

In a paired shift, the new engineer drives and the senior engineer navigates. The senior colleague stays available in the incident channel but does not take over unless the new hire needs guidance. Setting a clear timebox for when to escalate helps establish expectations without creating dependency on the senior engineer to resolve everything.

### New hire escalation guide

Clear escalation rules remove the most anxiety-inducing decision a junior engineer faces: "Is this bad enough to wake someone up?" Map explicit criteria to your severity levels, as in the sketch after this list:

* **P3:** Handle solo. Log actions in the incident channel. Page no one else. Target resolution during normal workflow.
* **P2:** Handle with your pair. Escalate to senior on-call if no progress within a reasonable timeframe.
* **P1:** Page the incident commander promptly. Do not troubleshoot alone. Escalate quickly after acknowledgment.
* **P0:** Page incident commander and engineering leadership simultaneously. Coordinate troubleshooting with the incident team.
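
Codifying these rules makes them unambiguous and easy to quiz during sign-off. A minimal sketch of the mapping above, with contact names as placeholders for your own rotations:

```python
# Severity-to-escalation mapping; contact names are placeholders.
ESCALATION_RULES = {
    "P3": {"page": [], "note": "Handle solo; log actions in the incident channel."},
    "P2": {"page": ["senior-oncall"], "note": "Escalate if no progress within the timebox."},
    "P1": {"page": ["incident-commander"], "note": "Do not troubleshoot alone."},
    "P0": {"page": ["incident-commander", "engineering-leadership"],
           "note": "Coordinate with the incident team."},
}

def who_to_page(severity: str) -> list[str]:
    return ESCALATION_RULES[severity]["page"]

assert who_to_page("P0") == ["incident-commander", "engineering-leadership"]
```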

### Building on-call confidence for new SREs

Psychological safety is not optional for new on-call engineers. Blameless cultures encourage engineers to share findings openly rather than hiding mistakes. Teams running blameless debriefs after every paired shift see faster confidence-building in new responders compared to teams that skip the debrief.

Make the expectations explicit from day one: every mistake during paired shifts is a learning event, not a performance issue.

Use the paired response phase to run the new engineer through real P2 incidents as the primary responder with the senior engineer in observer mode. P2 incidents are the ideal training ground: customer impact is limited, the pressure is real but not catastrophic, and the timeline is short enough to debrief the same day.

## Confident on-call: no more fumbles from days 22-30

### Solo on-call shift checklist

Before the new engineer takes their first solo shift, confirm they've completed the following:

* All monitoring, alerting, task tracking, and status page tools are accessible with correct permissions, tested rather than just granted in admin settings.
* Service Catalog reviewed: can identify owners and dependencies for the top five services without assistance.
* Runbooks updated: has read and updated at least three runbooks based on their own findings during the ramp.
* Shadow shifts complete: multiple shadow shifts and reverse shadow shifts with documented observation notes.
* Simulated incidents run: completed the full `/inc` command flow from declaration to resolved post-mortem at least twice without guidance.
* Escalation criteria memorized: can state P0-P3 escalation rules without referencing documentation.
* P2 resolutions: successfully led at least two real P2 incidents during the paired phase.

### Defined escalation paths

Severity levels only work if they're written down and enforced before the new hire's first solo shift:

| Severity | Definition | Escalation |
| --- | --- | --- |
| P0 | Complete service outage, all customers affected | Engineering leadership, immediately |
| P1 | Major feature broken, significant customer impact | Incident commander promptly |
| P2 | Partial degradation, some customers affected | Senior on-call if no progress within reasonable timeframe |
| P3 | Minor issue, minimal customer impact | Solo resolution, normal workflow timeline |

### P1 incident readiness steps

**Common pitfalls during a first P1 and how to avoid them:**

* **Analysis paralysis:** New engineers freeze when they can't immediately identify the root cause. Fix this with a clear escalation trigger: if you can't categorize the incident type quickly, escalate. Troubleshooting and categorization are separate activities that don't have to happen in sequence.
* **Late escalation:** Waiting too long to page additional teams is the most common failure mode in junior P1 response. Make the trigger automatic with incident.io escalation policies rather than relying on the new engineer's judgment under pressure.
* **Forgotten status page updates:** Updating the status page is the task most likely to slip during a P1. incident.io can streamline status page updates, reducing this burden on the new engineer's mental checklist.
* **Duplicate troubleshooting:** Without a centralized timeline, new engineers often retry fixes that senior engineers already ruled out earlier. The auto-captured timeline in incident.io surfaces every logged action in real time, preventing duplicated effort.

### Confirm new hire on-call skills

**Checks and validation for manager sign-off:**

A manager should not move a new hire to solo on-call without confirming the following with documented evidence:

1. **Documentation access:** All tool permissions verified and tested through live simulation, not just granted in admin settings.
2. **Shadow completion:** Multiple shadow shifts and reverse shadow shifts completed with observation notes from the senior engineer for each session.
3. **Simulation performance:** Completed multiple full simulated incidents using `/inc` commands with no guidance required after the first run-through.
4. **Paired P2 resolution:** Successfully led real P2 incidents with the senior engineer in observer mode, meaning no intervention was required.
5. **Escalation recital:** Can verbally state the escalation criteria and name the correct contact for each severity level without referencing documentation.

## AI-driven learning from incident data

### Post-mortem roles for new SREs

Assign new engineers to co-author at least one post-mortem during the solo on-call phase. Writing a post-mortem from a captured timeline forces the new hire to understand exactly what happened, why each decision was made, and what the follow-up actions are designed to prevent. This builds system knowledge that would otherwise take months to accumulate from architecture documents alone.

### Automated incident timeline creation

**How incident.io's automation streamlines onboarding tasks:**

Traditional incident response pulls one engineer away from troubleshooting to document decisions in a Google Doc. For a new engineer in their first paired shift, that note-taking duty is a distraction from the learning they need.

incident.io removes the designated note-taker entirely. The platform automatically captures every role change with timestamps, records conversations as part of the live timeline, and can transcribe incident calls in real time without anyone manually typing notes.
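
To picture what the auto-captured timeline holds, here's a minimal sketch of a timeline event record. The fields are illustrative, not incident.io's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TimelineEvent:
    """One entry in an auto-captured incident timeline (illustrative fields)."""
    kind: str    # e.g. "role_change", "action", "severity_update"
    actor: str   # who triggered the event
    detail: str  # human-readable description
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

timeline = [
    TimelineEvent("role_change", "@newengineer", "assigned incident commander"),
    TimelineEvent("action", "@newengineer", "rolling back deploy v2.4.1"),
]

for event in timeline:
    print(f"{event.at:%H:%M:%S} [{event.kind}] {event.actor}: {event.detail}")
```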

### Tracking post-incident tasks

Follow-up task tracking is where post-incident learning breaks down most often. A post-mortem identifies five action items, two get assigned, and three disappear. For new engineers still building the habit of closing the loop, this is particularly damaging because it signals that post-mortems don't matter.

incident.io can automatically create follow-up tasks in Jira or Linear when the incident resolves, with timeline context attached. New engineers don't need to manually transfer notes from the incident channel into a Jira ticket. The task exists and it's assigned.
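
For teams without this automation, the manual equivalent looks roughly like the sketch below, using Jira's REST issue-creation endpoint. The project key, site URL, and incident link are placeholders; with incident.io, this step happens automatically when the incident resolves:

```python
import os
import requests

# Create a follow-up task via Jira's REST API (v2). All identifiers below
# are placeholders; this illustrates the manual step the automation removes.
payload = {
    "fields": {
        "project": {"key": "OPS"},
        "summary": "Follow-up: add connection-pool alerting (from INC-142)",
        "description": "Timeline context: https://example.com/incidents/INC-142",
        "issuetype": {"name": "Task"},
    }
}

response = requests.post(
    "https://acme.atlassian.net/rest/api/2/issue",
    json=payload,
    auth=(os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"]),
    timeout=10,
)
response.raise_for_status()
print("Created:", response.json()["key"])  # e.g. OPS-318
```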

### Documenting learnings for on-call

Close every new engineer's ramp with a runbook update. Any step that was unclear, missing, or wrong during their paired and solo shifts goes back into the runbook before the ramp is marked complete. This builds a feedback loop that continuously improves documentation for future hires while giving the new engineer genuine ownership over the systems they support.

## Downloadable 30-day onboarding checklist

For more on selecting the right underlying incident management platform, see the incident.io [guide on incident management software](https://incident.io/blog/how-to-choose-incident-management-software).

| Week | Phase | Key actions | Success criteria |
| --- | --- | --- | --- |
| Week 1 (Days 1–5) | Setup and access | Permissions audit, runbook review, Service Catalog walkthrough, simulated /inc commands | All tools accessible, runbooks reviewed and updated, escalation paths confirmed |
| Week 2 (Days 6–12) | Shadow shifts | Shadow shifts, reverse shadow shifts, post-mortem review | Observation checklist completed for each shift, can explain alert-to-resolution flow without prompting |
| Week 3 (Days 13–21) | Paired response | Incident commander role in paired P2 shifts, escalation criteria memorized | Real P2 incidents led without senior intervention, blameless debrief completed after each |
| Week 4 (Days 22–30) | Solo on-call | Manager readiness checklist sign-off, first solo shift, post-mortem co-authored | Manager validation complete, post-mortem published, runbook updated |

incident.io's Pro plan is $45/user/month with on-call ($25 base + $20 on-call add-on). No per-incident fees, no hidden add-ons.

### Week 1: on-call setup and access

* Confirm alerting platform access (on-call schedule visibility, escalation policy view)
* Confirm monitoring tool access (Datadog, Prometheus, or Grafana dashboards)
* Confirm task tracking access (Jira or Linear)
* Confirm GitHub read access to service repositories
* Confirm status page access (internal and external)
* Read and update runbooks for owned services
* Walk through Service Catalog: identify owners and dependencies for their top services
* Complete simulated incident using `/inc` commands from declaration to post-mortem draft

### Week 2: shadowing incident response

* Complete shadow shifts using the observation checklist above
* Complete reverse shadow shifts (new hire drives, senior observes)
* Review auto-drafted post-mortems from recent real incidents

### Week 3: co-piloted incident resolution

* Carry the pager with senior engineer in observer role (not active participant)
* Apply a clear escalation timebox for P2 incidents without exception
* Practice full `/inc` command flow in live incidents (assign, escalate, resolve)
* Complete blameless debrief after each paired shift with documented notes
* Successfully lead real P2 incidents before solo advancement is approved

### Week 4: confident on-call management

* Complete manager readiness sign-off using the validation checklist
* Take first solo on-call shift
* Co-author one post-mortem from a timeline-captured incident
* Update runbooks based on personal experience from the solo shift
* Provide feedback on onboarding quality for continuous improvement

[Schedule a demo](https://incident.io/demo) to see how this checklist maps to a real automated workflow and walk through customizing it for your on-call rotation.

## Key terms glossary

**Blameless debrief**: A structured post-incident review where findings are shared openly without attributing fault to individuals. Teams use blameless debriefs to surface process gaps and improve runbooks without discouraging engineers from reporting mistakes.

**Incident commander**: The engineer responsible for coordinating response during a live incident. The incident commander assigns roles, manages communication, and decides when to escalate or resolve.

**MTTR (Mean Time To Resolution)**: The average time from incident detection to full resolution, including diagnosis, repair, and verification. MTTR is the primary benchmark for measuring on-call efficiency.

**On-call rotation**: The scheduled cycle that determines which engineer carries the pager and is responsible for responding to alerts during a defined period.

**Owner-operator model**: A culture where engineers own their services end to end, including responding when those services break. Tighter feedback loops between building and operating produce more reliable systems.

**Post-mortem**: A structured document that captures what happened during an incident, why it happened, and what follow-up actions will prevent recurrence. Post-mortems are distinct from blameless debriefs: debriefs are live conversations, post-mortems are written records.

**Reverse shadow shift**: An on-call training shift where the new engineer drives the incident response and the senior engineer observes without taking over. Follows shadow shifts in the ramp sequence.

**Shadow shift**: An on-call training shift where the new engineer observes an experienced colleague handling a real incident without taking any action. Precedes reverse shadow shifts in the ramp sequence.

**Severity level (P0–P3)**: A four-tier classification system for incidents based on customer impact and urgency. P0 is a full outage requiring immediate leadership escalation. P3 is a minor issue resolved during normal workflow.

**SRE (Site Reliability Engineer)**: An engineer focused on building and maintaining reliable, scalable systems through a combination of software engineering and operations practices. SREs typically own Tier-1 on-call rotations.