
Expert insights from our blog

Engineering

Webpack Fast Refresh vs Vite: What was Faster for ilert‑ui

A qualitative look at ilert-ui’s local dev: comparing Vite and webpack Fast Refresh to see what truly improves daily DX.

Jan Arnemann
Oct 29, 2025 • 5 min read

This article shares what felt fastest in the day‑to‑day development of ilert‑ui, a large React + TypeScript app with many lazy routes. We first moved off Create React App (CRA) toward modern tooling, trialed Vite for local development, and ultimately landed on webpack‑dev‑server + React Fast Refresh.

Scope: Local development only. Our production builds remain on Webpack. For context, the React team officially sunset CRA on February 14, 2025, and recommends migrating to a framework or a modern build tool such as Vite, Parcel, or RSBuild.

Qualitative field notes from ilert‑ui: We didn’t run formal benchmarks; this is our day‑to‑day experience in a large route‑split app.

Mini‑glossary

A few helpful terms you will encounter in this article.

  • ESM: Native JavaScript module system browsers understand.
  • HMR: Swaps changed code into a running app without a full reload.
  • React Fast Refresh: React’s HMR experience that preserves component state when possible.
  • Lazy route / code‑splitting: Loading route code only when the route is visited.
  • Vendor chunk: A bundle of shared third‑party deps cached across routes.
  • Eager pre‑bundling: Bundling common deps up front to avoid many small requests later.
  • Dependency optimizer (Vite): Pre‑bundles bare imports; may re‑run if new deps are discovered at runtime.
  • Type‑aware ESLint: ESLint that uses TypeScript type info – more accurate, heavier.

Why we left CRA

Problem statement: ilert‑ui outgrew CRA’s convenience defaults as the app matured.

Here are the reasons that pushed us away from CRA:

  • Customization friction: Advanced webpack tweaks (custom loaders, tighter split‑chunks strategy, Babel settings for react-refresh) required ejecting or patching. That slowed iteration on a production‑scale app.
  • Large dependency surface: react-scripts brought many transitive packages. Installs got slower, and security noise grew over time without clear benefits for us.

Goals for the next steps:

  • Keep React + TS.
  • Improve time‑to‑interactive after server start.
  • Preserve state on edits (Fast Refresh behavior) and keep HMR snappy.
  • Maintain predictable first‑visit latency when navigating across many lazy routes.

Why Vite looked like a better solution

During development, Vite serves your source as native ESM and pre‑bundles bare imports from node_modules using esbuild. This usually yields very fast cold starts and responsive HMR.
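To make that concrete, here is the idea behind serving bare imports as native ESM, sketched as a toy rewrite function. This is illustrative only – Vite's real dependency optimizer does much more (scanning, esbuild bundling, cache invalidation) – but the URL shape matches what you see in DevTools.

```javascript
// Illustrative sketch: how a dev server like Vite makes bare imports loadable
// as native ESM in the browser. NOT Vite's real algorithm – just the idea.
function rewriteBareImport(specifier) {
  // Relative and absolute specifiers are already valid module URLs.
  if (
    specifier.startsWith("./") ||
    specifier.startsWith("../") ||
    specifier.startsWith("/")
  ) {
    return specifier;
  }
  // Bare specifiers ("react", "react-dom/client") are redirected to a
  // pre-bundled file produced by esbuild under node_modules/.vite/deps.
  return `/node_modules/.vite/deps/${specifier.replace(/\//g, "_")}.js`;
}

console.log(rewriteBareImport("react"));            // "/node_modules/.vite/deps/react.js"
console.log(rewriteBareImport("react-dom/client")); // "/node_modules/.vite/deps/react-dom_client.js"
console.log(rewriteBareImport("./App"));            // "./App"
```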

What we loved immediately

  • Cold starts: Noticeably faster than our CRA baseline.
  • Minimal config, clean DX: Sensible defaults and readable errors.
  • Great HMR in touched areas: Editing within routes already visited felt excellent.

Where the model rubbed against our size

In codebases with many lazy routes, first‑time visits can trigger bursts of ESM requests, and when new deps are discovered at runtime, dependency‑optimizer re‑runs that reload the page. This is expected behavior, but it made cross‑route exploration feel uneven for us.

Qualitative field notes from ilert‑ui

Methodology: qualitative observations from daily development in ilert‑ui.

Our repo’s shape

  • Dozens of lazy routes, several heavy sections pulling in many modules.
  • Hundreds of shared files and deep store imports across features.

What we noticed

  1. First‑time heavy routes: Opening a dependency‑rich route often triggered many ESM requests and sometimes a dep‑optimizer re‑run. Cross‑route exploration across untouched routes felt slower than our webpack setup that eagerly pre‑bundles shared vendors.
  2. Typed ESLint overhead: Running type‑aware ESLint (with parserOptions.project or projectService) in‑process with the dev server added latency during typing. Moving linting out‑of‑process helped, but didn’t fully offset the cost at our scale – an expected trade‑off with typed linting.
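For reference, the type-aware setup referred to in point 2 looks roughly like this in flat-config form. This is a sketch based on typescript-eslint's documented options; adapt file names and rule sets to your repo.

```javascript
// eslint.config.mjs – typed-linting sketch (typescript-eslint flat config)
import tseslint from "typescript-eslint";

export default tseslint.config(
  ...tseslint.configs.recommendedTypeChecked,
  {
    languageOptions: {
      parserOptions: {
        // Type-aware linting: more accurate, but heavier at scale.
        projectService: true,
        tsconfigRootDir: import.meta.dirname,
      },
    },
  },
);
```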

TL;DR for our codebase: Vite was fantastic once a route had been touched in the session, but the first visits across many lazy routes were less predictable.

Why we pivoted to webpack‑dev‑server + React Fast Refresh

What we run:

  • webpack‑dev‑server with HMR.
  • React Fast Refresh via @pmmmwh/react-refresh-webpack-plugin and react-refresh in Babel.
  • Webpack SplitChunks for common vendor bundles; filesystem caching; source maps; error overlays; ESLint out‑of‑process.

Why it felt faster end‑to‑end for our team:

  1. Eager vendor pre‑bundling: We explicitly pre‑bundle vendor chunks (React, MUI, MobX, charts, editor, calendar, etc.). The very first load is a bit heavier, but first‑time visits to other routes are faster because shared deps are already cached. SplitChunks makes this predictable.
  2. React Fast Refresh ergonomics: Solid state preservation on edits, reliable error recovery, and overlays we like.
  3. Non‑blocking linting: Typed ESLint runs outside the dev server process, so HMR stays responsive even during large type checks.

Receipts – the knobs we turned

// webpack.config.js
module.exports = {
  optimization: {
    minimize: false,
    runtimeChunk: "single",
    splitChunks: {
      chunks: "all",
      cacheGroups: {
        "react-vendor": {
          test: /[\\/]node_modules[\\/](react|react-dom|react-router-dom)[\\/]/,
          name: "react-vendor",
          chunks: "all",
          priority: 30,
        },
        "mui-vendor": {
          test: /[\\/]node_modules[\\/](@mui\/material|@mui\/icons-material|@mui\/lab|@mui\/x-date-pickers)[\\/]/,
          name: "mui-vendor",
          chunks: "all",
          priority: 25,
        },
        "mobx-vendor": {
          test: /[\\/]node_modules[\\/](mobx|mobx-react|mobx-utils)[\\/]/,
          name: "mobx-vendor",
          chunks: "all",
          priority: 24,
        },
        "utils-vendor": {
          test: /[\\/]node_modules[\\/](axios|moment|lodash\.debounce|lodash\.isequal)[\\/]/,
          name: "utils-vendor",
          chunks: "all",
          priority: 23,
        },
        "ui-vendor": {
          test: /[\\/]node_modules[\\/](@loadable\/component|react-transition-group|react-window)[\\/]/,
          name: "ui-vendor",
          chunks: "all",
          priority: 22,
        },
        "charts-vendor": {
          test: /[\\/]node_modules[\\/](recharts|reactflow)[\\/]/,
          name: "charts-vendor",
          chunks: "all",
          priority: 21,
        },
        "editor-vendor": {
          test: /[\\/]node_modules[\\/](@monaco-editor\/react|monaco-editor)[\\/]/,
          name: "editor-vendor",
          chunks: "all",
          priority: 20,
        },
        "calendar-vendor": {
          test: /[\\/]node_modules[\\/](@fullcalendar\/core|@fullcalendar\/react|@fullcalendar\/daygrid)[\\/]/,
          name: "calendar-vendor",
          chunks: "all",
          priority: 19,
        },
        "vendor": {
          test: /[\\/]node_modules[\\/]/,
          name: "vendor",
          chunks: "all",
          priority: 10,
        },
      },
    },
  },
};
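A detail worth calling out: the vendor `test` patterns are regexes over absolute module paths, and the character class `[\\/]` lets a single pattern match both POSIX and Windows path separators. A quick standalone check:

```javascript
// The cacheGroup `test` regexes match node_modules paths regardless of the
// platform's path separator, because [\\/] matches both "\" and "/".
const reactVendor = /[\\/]node_modules[\\/](react|react-dom|react-router-dom)[\\/]/;

console.log(reactVendor.test("/repo/node_modules/react/index.js"));           // true
console.log(reactVendor.test("C:\\repo\\node_modules\\react-dom\\index.js")); // true
// "react-window" does not match: the alternation must be followed by a separator.
console.log(reactVendor.test("/repo/node_modules/react-window/index.js"));    // false
```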

// vite.config.ts - Vite optimizeDeps includes we tried
import { defineConfig } from "vite";

export default defineConfig({
  optimizeDeps: {
    include: [
      "react",
      "react-dom",
      "react-router-dom",
      "@mui/material",
      "@mui/icons-material",
      "@mui/lab",
      "@mui/x-date-pickers",
      "mobx",
      "mobx-react",
      "mobx-utils",
      "axios",
      "moment",
      "lodash.debounce",
      "lodash.isequal",
      "@loadable/component",
      "react-transition-group",
      "react-window",
      "recharts",
      "reactflow",
      "@monaco-editor/react",
      "monaco-editor",
      "@fullcalendar/core",
      "@fullcalendar/react",
      "@fullcalendar/daygrid",
    ],
    // Force pre-bundling of these dependencies
    force: true,
  },
});

Result: This helped some cold starts, but for our repo it didn’t smooth out first‑visit latency across many lazy routes as much as webpack’s eager vendor chunks.

What we tried to speed up Vite (and what we didn’t)

What we tried in Vite

Run ESLint in a separate process
What it does: Lints in the background instead of blocking the dev server.
Impact: Faster feedback while editing.
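In practice this can be as small as two npm scripts run in separate terminals; the script names and glob below are illustrative:

```json
{
  "scripts": {
    "dev": "vite",
    "lint": "eslint --cache \"src/**/*.{ts,tsx}\""
  }
}
```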

Enable a filesystem cache
What it does: Reuses build results across restarts.
Impact: Quicker cold starts and rebuilds.

Pre-bundle third-party code (vendor split)
What it does: Bundles libraries like React once and keeps them separate from app code.
Impact: Less work on every save; snappier HMR.

These tweaks made Vite feel better – but they weren’t enough to solve our bigger performance issues, which is why we evaluated Webpack.

Things we could have tried

More aggressive optimizeDeps tuning
Why we skipped: Can help large projects, but needs careful profiling and ongoing dependency hygiene. The time cost outweighed the likely gains for us.

“Warm crawl” on server start
What it is: A script that visits routes at startup to pre‑load modules and warm caches.
Why we skipped: Extra complexity and inconsistent payoff in real projects.

Pin versions for linked packages
What it is: Lock versions in a mono-repo to reduce Vite’s re-optimization churn.
Why we skipped: Useful in some setups, but adds maintenance overhead; not worth it before a larger rework.

Pros and cons (in our context)

Vite – pros

  • Blazing cold starts and lightweight config.
  • Excellent HMR within already‑touched routes.
  • Strong plugin ecosystem and modern ESM defaults.

Vite – cons

  • Dep optimizer re‑runs can interrupt flow during first‑time navigation across many lazy routes.
  • Requires careful setup in large monorepos and with linked packages.
  • Typed ESLint in‑process can hurt responsiveness on large projects; better out‑of‑process.

Webpack + Fast Refresh – pros

  • Predictable first‑visit latency across many routes via eager vendor chunks.
  • Fine‑grained control over loaders, plugins, and output.
  • Fast Refresh preserves state and has mature error overlays.

Webpack + Fast Refresh – cons

  • Heavier initial load than Vite’s cold start.
  • More configuration surface to maintain.
  • Historical complexity (mitigated by modern config patterns and caching).

Quick performance tests you can run locally on your project

These checks are quick, human-scale benchmarks — no profiler required. Use a stopwatch and DevTools to compare real interaction delays and perceived smoothness.

Cold start

How to test: Restart the dev server and measure the time from running npm run dev to the first interactive page load (a stopwatch works fine).

Observation: Vite typically starts faster on a cold boot. Webpack, when using its filesystem cache, remains acceptable once warmed.

First-time heavy route

How to test: Open a dependency-rich route for the first time. Watch DevTools → Network and Console for optimizer runs or request bursts.

Observation: Vite may occasionally trigger re-optimization and reloads. Webpack’s vendor chunks tend to make first visits steadier.

Cross-route navigation

How to test: Navigate through several untouched routes and note responsiveness until each becomes interactive.

Observation: Vite improves after initial loads (as modules are cached). Webpack stays consistently predictable across routes.

Linting impact

How to test: Compare ESLint running in-process versus in a separate process. Measure typing responsiveness and HMR smoothness.

Observation: Running ESLint out-of-process kept the dev server responsive and maintained smooth HMR in both setups.

Balanced guidance – when we would pick each

Choose Vite if:

  • Cold starts dominate your workflow.
  • Your module graph isn’t huge or fragmented into many lazy routes.
  • Plugins – especially typed ESLint – are light or run out‑of‑process.

Choose webpack + Fast Refresh if:

  • Your app benefits from eager vendor pre‑bundling and predictable first‑visit latency across many routes.
  • You want precise control over loaders/plugins and build output.
  • You like Fast Refresh’s state preservation and overlays.

Conclusions

Both Vite and Webpack are excellent. Given ilert‑ui’s current size and navigation patterns, webpack‑dev‑server + React Fast Refresh delivers the tightest feedback loop for us today – based on qualitative developer experience, not micro‑benchmarks. We’ll keep measuring as our codebase evolves and may revisit Vite or a framework as our constraints change.

Engineering

Bring incident response to your AI stack with ilert’s MCP Server

Find out what the Model Context Protocol (MCP) is, why it matters, and how ilert’s open MCP server enables AI assistants such as Claude and Cursor to create and manage alerts and incidents via a standard interface.

Tim Gühnemann
Oct 27, 2025 • 5 min read

ilert’s engineering team has developed an open Model Context Protocol (MCP) server that enables AI assistants to securely interact with your alerting and incident management workflows, from determining who is on call to creating incidents. In this article, we provide a simple explanation of MCP, outline the reasons behind our investment in it, describe the high-level architecture, and explain how to connect Claude, Cursor, and other MCP clients to ilert today.

MCP in a nutshell, and why it matters

The Model Context Protocol (MCP) is an open standard that connects AI assistants to external tools and data in a uniform way. Rather than relying on bespoke plugins, MCP defines standard interfaces for tools, resources, and transports. This enables assistants such as Claude, ChatGPT, and IDE agents to consistently perform actions such as reading data, running processes, and streaming results, while incorporating auditability and permissions. Think of MCP as a 'USB-C for AI apps' that eliminates brittle UI automation and bespoke glue code.

Many popular clients already support MCP flows. For example, Claude Desktop exposes MCP servers via Desktop Extensions and local/remote connectors, while Cursor adds MCP servers under its Tools & MCP settings, enabling commands to be used directly within the IDE chat.

For operations teams, this means that your assistant can read data such as incidents, alerts and on-call schedules, and act on it by creating, acknowledging, or escalating using permissioned, auditable calls rather than screen scraping.

Why we built an open MCP server for ilert

Teams are increasingly using AI agents to triage and collaborate in the environments in which they already work, such as chat, terminals, and IDEs. At ilert, our goal is to bring incident response and alerting into these environments with secure, least-privilege access and clear audit trails. An MCP server reduces handoffs and context switching.

Problem statement: Provide assistants with a safe and consistent way to manage alerts and incidents across tools without the need for custom integrations for each client.

Outcome: MCP enables ilert to expose capabilities once and make them immediately available to multiple assistants.

Architecture of the ilert MCP server

Tech stack

The implementation uses the official TypeScript SDK to provide protocol-compliant tools, resources, and prompts. We expose a remote server using the Streamable HTTP transport defined by MCP alongside stdio. Streamable HTTP provides reliable streaming, resumable sessions, and simple authentication headers, making it well-suited to enterprise environments.

How we map ilert to MCP

ilert’s MCP server exposes direct, tool-based actions that map 1:1 to the ilert API – built for DevOps and SRE workflows. Assistants can safely read context and take action on Alerts and Incidents without brittle UI scripting.

What you can do:

  • Manage alerts – list, inspect, comment, acknowledge, resolve, escalate, reroute, add responders, and run predefined alert actions.
  • Open incidents – create incidents with severity, service, and responders directly from the assistant.
  • Look up context – find users, services, alert sources, escalation policies, schedules, and your own profile to act with confidence.

Typical flow:

  1. Discover context with read tools (for example, find services → list alerts → show alert details).
  2. Propose and confirm a write action (for example, accept or resolve an alert, create an incident, or invoke an alert action).
  3. Keep everything auditable and permissioned via your ilert API key scopes.
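On the wire, each of these steps is a JSON-RPC call defined by the MCP specification. For example, a `tools/call` request has the following general shape; the tool name and arguments here are illustrative, not ilert's exact schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list-alerts",
    "arguments": { "states": ["PENDING"] }
  }
}
```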

How to use the ilert MCP server

Step 1: Create an ilert API key
In ilert, go to Profile → API Keys and create a user API key. Use least-privilege scopes and store the key securely.

Step 2: Configure your MCP client (recommended: Streamable HTTP in Cursor)
In Cursor → Settings → Tools & MCP → Add New MCP Server, add the following to your mcp.json:

{
  "mcpServers": {
    "ilert": {
      "type": "streamableHttp",
      "url": "https://mcp.ilert.com/mcp",
      "headers": { "Authorization": "Bearer {{YOUR-API-KEY}}" }
    }
  }
}

Step 3 (optional): Running via a local launcher

{
  "mcpServers": {
    "ilert": {
      "command": "npx",
      "args": [
        "-y", "mcp-remote", "https://mcp.ilert.com/mcp",
        "--header", "Authorization: Bearer ${ILERT_AUTH_TOKEN}"
      ],
      "env": { "ILERT_AUTH_TOKEN": "{{YOUR-API-KEY}}" }
    }
  }
}

After saving the configuration, ilert should appear in your MCP server list, and its tools will be available in the client UI. For more information, check the documentation.

A few real scenarios

Scenario 1: Create an alert in ilert

[Screenshot: Cursor interface]

Scenario 2: Comment on the incident and resolve it

[Screenshot: Cursor interface]

Conclusions

MCP gives operations teams a standardised way to integrate Incident Response and Alerting into AI assistants. ilert’s open MCP server, built with Deno and TypeScript using the official MCP SDK, securely exposes Incidents, Alerts, On-call information, and more over a remote transport. Connect Claude, Cursor, or ChatGPT today and manage Incidents directly from your assistant.

Engineering

How to manage ilert call flows via Terraform

The ilert Terraform provider now includes an `ilert_call_flow` resource so you can version and promote call flows across environments. This blog post offers an overview of managing call flows in Terraform, detailing the benefits and key scenarios.

Marko Simon
Oct 23, 2025 • 5 min read

Call flows let you design voice workflows with nodes like “Audio message,” “Support hours,” “Voicemail,” “Route call,” and more. The ilert Terraform provider now includes an ilert_call_flow resource so you can version and promote these flows across environments. This blog post offers an overview of managing call flows in Terraform, detailing the benefits and key scenarios.

Benefits of managing call flows via the Terraform provider

The ilert_call_flow Terraform resource enables you to define node-based call flows as code, alongside alert sources, escalation policies, and on-call schedules. This brings call routing under the same IaC process you may already use for ilert.

Storing call flows in Terraform makes changes controlled, testable, and auditable. You gain code review, diffs before apply, consistent promotion between staging and production, and easy rollback. Teams can also import existing UI-created resources into state to avoid rebuilds.
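Importing a UI-created call flow is a single command. The resource address must match your configuration, and the placeholder below stands in for the call flow's ID from ilert; check the provider documentation for the exact import syntax:

```shell
terraform import ilert_call_flow.call_flow <CALL_FLOW_ID>
```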

Simple call flow: create alert

A call flow for creating alerts when somebody picks up the call looks like this:

resource "ilert_call_flow" "call_flow" {
  name     = "Call Flow Demo"
  language = "en"

  root_node {
    node_type = "ROOT"

    branches {
      branch_type = "ANSWERED"

      target {
        node_type = "CREATE_ALERT"
        metadata {
          alert_source_id = "your_alert_source_id"
        }
      }
    }
  }
}

What happens: ROOT waits for an incoming call. On ANSWERED, the flow creates an alert in the given alert source, so your existing escalation policies take over.

Please note that phone numbers cannot be assigned via Terraform. After the first `terraform apply`, assign a number to the call flow in the web UI.

Complex call flow: Support hotline

You can create more sophisticated flows, too. For example, a support hotline with branches during or outside of business hours. Start with the ROOT node and open the first path by reacting when the call is answered.

resource "ilert_call_flow" "business_hours_support" {
  name     = "Business Hours Support"
  language = "en"

  root_node {
    node_type = "ROOT"

    branches {
      branch_type = "ANSWERED"
      # ...

Then greet the caller with a short TTS welcome message.

# ...
target {
  node_type = "AUDIO_MESSAGE"
  metadata {
    text_message   = "Thank you for calling <company name>."
    ai_voice_model = "emma"
  }
# ...

Now introduce Support hours and branch the flow into OUTSIDE and DURING states.

# ...
branches {
  branch_type = "CATCH_ALL"

  target {
    node_type = "SUPPORT_HOURS"
    metadata {
      support_hours_id = …
    }

    branches {
      branch_type = "BRANCH"
      condition   = "context.supportHoursState == 'OUTSIDE'"
      # ...
    }

    branches {
      branch_type = "BRANCH"
      condition   = "context.supportHoursState == 'DURING'"
      # ...
    }
# ...

Handle OUTSIDE hours next: send to voicemail, and if a recording exists, create an alert for follow-up.

# ...
target {
  node_type = "VOICEMAIL"
  metadata {
    text_message   = "You've reached us outside of our business hours. Please leave your name, contact information, and a brief message."
    ai_voice_model = "emma"
  }

  branches {
    branch_type = "BRANCH"
    condition   = "context.recordedMessageUrl != null"

    target {
      node_type = "CREATE_ALERT"
      metadata {
        alert_source_id = ...
      }
    }
  }
}
# ...

Finish with DURING hours: try the primary user first, then fall back if unavailable.

# ...
target {
  node_type = "ROUTE_CALL"
  metadata {
    call_style = "ORDERED"
    targets {
      type   = "ON_CALL_SCHEDULE"
      target = "your_schedule_id"
    }
  }
}
# ...

The full script with both call flow resources can be found in our public Terraform playground.
