This article shares what felt fastest in the day‑to‑day development of ilert‑ui, a large React + TypeScript app with many lazy routes. We first moved off Create React App (CRA) toward modern tooling, trialed Vite for local development, and ultimately landed on webpack‑dev‑server + React Fast Refresh.
Scope: Local development only. Our production builds remain on Webpack. For context, the React team officially sunset CRA on February 14, 2025, and recommends migrating to a framework or a modern build tool such as Vite, Parcel, or RSBuild.
Qualitative field notes from ilert‑ui: We didn’t run formal benchmarks; this is our day‑to‑day experience in a large route‑split app.
Mini‑glossary
A few helpful terms you will encounter in this article.
ESM: Native JavaScript module system browsers understand.
HMR: Swaps changed code into a running app without a full reload.
React Fast Refresh: React’s HMR experience that preserves component state when possible.
Lazy route / code‑splitting: Loading route code only when the route is visited.
Vendor chunk: A bundle of shared third‑party deps cached across routes.
Eager pre‑bundling: Bundling common deps up front to avoid many small requests later.
Dependency optimizer (Vite): Pre‑bundles bare imports; may re‑run if new deps are discovered at runtime.
Type‑aware ESLint: ESLint that uses TypeScript type info – more accurate, heavier.
Why we left CRA
Problem statement: ilert‑ui outgrew CRA’s convenience defaults as the app matured.
Here are the reasons that pushed us away from CRA:
Customization friction: Advanced webpack tweaks (custom loaders, tighter split‑chunks strategy, Babel settings for react-refresh) required ejecting or patching. That slowed iteration on a production‑scale app.
Large dependency surface: react-scripts brought many transitive packages. Installs got slower, and security noise grew over time without clear benefits for us.
Goals for the next steps:
Keep React + TS.
Improve time‑to‑interactive after server start.
Preserve state on edits (Fast Refresh behavior) and keep HMR snappy.
Maintain predictable first‑visit latency when navigating across many lazy routes.
Why Vite looked like a better solution
During development, Vite serves your source as native ESM and pre‑bundles bare imports from node_modules using esbuild. This usually yields very fast cold starts and responsive HMR.
What we loved immediately
Cold starts: Noticeably faster than our CRA baseline.
Minimal config, clean DX: Sensible defaults and readable errors.
Great HMR in touched areas: Editing within routes already visited felt excellent.
Where the model rubbed against our size
In codebases with many lazy routes, first‑time visits can trigger bursts of ESM requests, and when new deps are discovered at runtime, dependency‑optimizer re‑runs that reload the page. This is expected behavior, but it made cross‑route exploration feel uneven for us.
Qualitative field notes from ilert‑ui
Methodology: qualitative observations from daily development in ilert‑ui.
Our repo’s shape
Dozens of lazy routes, several heavy sections pulling in many modules.
Hundreds of shared files and deep store imports across features.
What we noticed
First‑time heavy routes: Opening a dependency‑rich route often triggered many ESM requests and sometimes a dep‑optimizer re‑run. Cross‑route exploration across untouched routes felt slower than our webpack setup that eagerly pre‑bundles shared vendors.
Typed ESLint overhead: Running type‑aware ESLint (with parserOptions.project or projectService) in‑process with the dev server added latency during typing. Moving linting out‑of‑process helped, but didn’t fully offset the cost at our scale – an expected trade‑off with typed linting (see the sketch below).
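For context, this is roughly what enabling type‑aware linting looks like with typescript-eslint’s flat config – a minimal sketch, not our exact setup:

```ts
// eslint.config.ts: minimal type-aware linting sketch (illustrative)
import tseslint from "typescript-eslint";

export default tseslint.config(tseslint.configs.recommendedTypeChecked, {
  languageOptions: {
    parserOptions: {
      // projectService points ESLint at your TypeScript project so rules
      // can use type information: more accurate, but heavier while typing
      projectService: true,
      tsconfigRootDir: import.meta.dirname,
    },
  },
});
```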
TL;DR for our codebase: Vite was fantastic once a route had been touched in the session, but the first visits across many lazy routes were less predictable.
Why we pivoted to webpack‑dev‑server + React Fast Refresh
What we run:
webpack‑dev‑server with HMR.
React Fast Refresh via @pmmmwh/react-refresh-webpack-plugin and the react-refresh/babel plugin in our Babel config.
Webpack SplitChunks for common vendor bundles; filesystem caching; source maps; error overlays; ESLint out‑of‑process.
Why it felt faster end‑to‑end for our team:
Eager vendor pre‑bundling: We explicitly pre‑bundle vendor chunks (React, MUI, MobX, charts, editor, calendar, etc.). The very first load is a bit heavier, but first‑time visits to other routes are faster because shared deps are already cached. SplitChunks makes this predictable.
React Fast Refresh ergonomics: Solid state preservation on edits, reliable error recovery, and overlays we like.
Non‑blocking linting: Typed ESLint runs outside the dev server process, so HMR stays responsive even during large type checks.
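For illustration, here is a minimal sketch of that webpack setup; the loader options, chunk names, and test patterns are simplified assumptions, not our exact production config:

```ts
// webpack.config.ts: minimal dev-server sketch (illustrative, simplified)
import ReactRefreshWebpackPlugin from "@pmmmwh/react-refresh-webpack-plugin";
import type { Configuration } from "webpack";
import "webpack-dev-server"; // type augmentation so `devServer` is typed

const config: Configuration = {
  mode: "development",
  cache: { type: "filesystem" }, // reuse build results across restarts
  devtool: "eval-cheap-module-source-map",
  devServer: { hot: true }, // HMR; the error overlay is on by default
  module: {
    rules: [
      {
        test: /\.[jt]sx?$/,
        exclude: /node_modules/,
        use: {
          loader: "babel-loader",
          // Fast Refresh transform, paired with the plugin below
          options: { plugins: ["react-refresh/babel"] },
        },
      },
    ],
  },
  optimization: {
    splitChunks: {
      chunks: "all",
      cacheGroups: {
        // Eagerly split shared third-party deps into a stable, cached vendor chunk
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: "vendors",
          priority: -10,
        },
      },
    },
  },
  plugins: [new ReactRefreshWebpackPlugin()],
};

export default config;
```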
For comparison, here is the optimizeDeps include list we had tried in Vite:

```ts
// vite.config.ts - Vite optimizeDeps includes we tried
import { defineConfig } from "vite";

export default defineConfig({
  optimizeDeps: {
    include: [
      "react",
      "react-dom",
      "react-router-dom",
      "@mui/material",
      "@mui/icons-material",
      "@mui/lab",
      "@mui/x-date-pickers",
      "mobx",
      "mobx-react",
      "mobx-utils",
      "axios",
      "moment",
      "lodash.debounce",
      "lodash.isequal",
      "@loadable/component",
      "react-transition-group",
      "react-window",
      "recharts",
      "reactflow",
      "@monaco-editor/react",
      "monaco-editor",
      "@fullcalendar/core",
      "@fullcalendar/react",
      "@fullcalendar/daygrid",
    ],
    // Force pre-bundling of these dependencies
    force: true,
  },
});
```
Result: this helped some cold starts, but for our repo it didn’t smooth out first‑visit latency across many lazy routes as much as eager vendor chunks in webpack.
What we tried to speed up Vite (and what we didn’t)
What we tried in Vite
Run ESLint in a separate process. What it does: lints in the background instead of blocking the dev server. Impact: faster feedback while editing.
Enable a filesystem cache. What it does: reuses build results across restarts. Impact: quicker cold starts and rebuilds.
Pre-bundle third-party code (vendor split). What it does: bundles libraries like React once and keeps them separate from app code. Impact: less work on every save; snappier HMR.
These tweaks made Vite feel better – but they weren’t enough to solve our bigger performance issues, which is why we evaluated Webpack.
Things we could have tried
More aggressive optimizeDeps tuning. Why we skipped it: it can help large projects, but it needs careful profiling and ongoing dependency hygiene. The time cost outweighed the likely gains for us.
A “warm crawl” on server start. What it is: a script that visits routes at startup to pre-load modules and caches. Why we skipped it: extra complexity and inconsistent payoff in real projects.
Pinning versions for linked packages. What it is: locking versions in a monorepo to reduce Vite’s re-optimization churn. Why we skipped it: useful in some setups, but it adds maintenance overhead; not worth it before a larger rework.
Pros and cons (in our context)
Vite – pros
Blazing cold starts and lightweight config.
Excellent HMR within already‑touched routes.
Strong plugin ecosystem and modern ESM defaults.
Vite – cons
Dep optimizer re‑runs can interrupt flow during first‑time navigation across many lazy routes.
Requires careful setup in large monorepos and with linked packages.
Typed ESLint in‑process can hurt responsiveness on large projects; better out‑of‑process.
Webpack + Fast Refresh – pros
Predictable first‑visit latency across many routes via eager vendor chunks.
Fine‑grained control over loaders, plugins, and output.
Fast Refresh preserves state and has mature error overlays.
Webpack + Fast Refresh – cons
Heavier initial load than Vite’s cold start.
More configuration surface to maintain.
Historical complexity (mitigated by modern config patterns and caching).
Quick performance checks you can run locally on your own project
These checks are quick, human-scale benchmarks — no profiler required. Use a stopwatch and DevTools to compare real interaction delays and perceived smoothness.
Cold start
How to test: Restart the dev server and measure the time from running npm run dev to the first interactive page load (a stopwatch works fine).
Observation: Vite typically starts faster on a cold boot. Webpack, when using its filesystem cache, remains acceptable once warmed.
First-time heavy route
How to test: Open a dependency-rich route for the first time. Watch DevTools → Network and Console for optimizer runs or request bursts.
Observation: Vite may occasionally trigger re-optimization and reloads. Webpack’s vendor chunks tend to make first visits steadier.
Cross-route navigation
How to test: Navigate through several untouched routes and note responsiveness until each becomes interactive.
Observation: Vite improves after initial loads (as modules are cached). Webpack stays consistently predictable across routes.
Linting impact
How to test: Compare ESLint running in-process versus in a separate process. Measure typing responsiveness and HMR smoothness.
Observation: Running ESLint out-of-process kept the dev server responsive and maintained smooth HMR in both setups.
Balanced guidance – when we would pick each
Choose Vite if:
Cold starts dominate your workflow.
Your module graph isn’t huge or fragmented into many lazy routes.
Plugins – especially typed ESLint – are light or run out‑of‑process.
Choose webpack + Fast Refresh if:
Your app benefits from eager vendor pre‑bundling and predictable first‑visit latency across many routes.
You want precise control over loaders/plugins and build output.
You like Fast Refresh’s state preservation and overlays.
Conclusions
Both Vite and Webpack are excellent. Given ilert‑ui’s current size and navigation patterns, webpack‑dev‑server + React Fast Refresh delivers the tightest feedback loop for us today – based on qualitative developer experience, not micro‑benchmarks. We’ll keep measuring as our codebase evolves and may revisit Vite or a framework as our constraints change.
ilert’s engineering team has developed an open Model Context Protocol (MCP) server that enables AI assistants to securely interact with your alerting and incident management workflows, from determining who is on call to creating incidents. In this article, we provide a simple explanation of MCP, outline the reasons behind our investment in it, describe the high-level architecture, and explain how to connect Claude, Cursor, and other MCP clients to ilert today.
MCP in a nutshell, and why it matters
The Model Context Protocol (MCP) is an open standard that connects AI assistants to external tools and data in a uniform way. Rather than relying on bespoke plugins, MCP defines standard interfaces for tools, resources, and transports. This enables assistants such as Claude, ChatGPT, and IDE agents to consistently perform actions such as reading data, running processes, and streaming results, while incorporating auditability and permissions. Think of MCP as a 'USB-C for AI apps' that eliminates brittle UI automation and bespoke glue code.
Many popular clients already support MCP flows. For example, Claude Desktop exposes MCP servers via Desktop Extensions and local/remote connectors, while Cursor adds MCP servers under its Tools & MCP settings, enabling commands to be used directly within the IDE chat.
For operations teams, this means that your assistant can read data such as incidents, alerts and on-call schedules, and act on it by creating, acknowledging, or escalating using permissioned, auditable calls rather than screen scraping.
Why we built an open MCP server for ilert
Teams are increasingly using AI agents to triage and collaborate in the environments in which they already work, such as chat, terminals, and IDEs. At ilert, our goal is to bring incident response and alerting into these environments with secure, least-privilege access and clear audit trails. An MCP server reduces handoffs and context switching.
Problem statement: Provide assistants with a safe and consistent way to manage alerts and incidents across tools without the need for custom integrations for each client.
Outcome: MCP enables ilert to expose capabilities once and make them immediately available to multiple assistants.
The implementation uses the official TypeScript SDK to provide protocol-compliant tools, resources, and prompts. We expose a remote server using the Streamable HTTP transport defined by MCP alongside stdio. Streamable HTTP provides reliable streaming, resumable sessions, and simple authentication headers, making it well-suited to enterprise environments.
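As a rough illustration of what protocol-compliant tools look like in the official TypeScript SDK, here is a minimal, hypothetical tool registration; the tool name, schema, and handler are our invention, not ilert’s actual code, and we use the stdio transport for brevity:

```ts
// Hypothetical MCP server sketch using the official TypeScript SDK
// (@modelcontextprotocol/sdk); not ilert's actual implementation.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "ilert-example", version: "0.1.0" });

// A read tool: list alerts in the given states (placeholder implementation;
// a real server would call the ilert API with a scoped key).
server.tool(
  "list-alerts",
  { states: z.array(z.string()).optional() },
  async ({ states }) => ({
    content: [
      { type: "text", text: `Alerts in states: ${states?.join(", ") ?? "all"}` },
    ],
  })
);

// Connect over stdio; a remote deployment would use Streamable HTTP instead.
await server.connect(new StdioServerTransport());
```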
How we map ilert to MCP
ilert’s MCP server exposes direct, tool-based actions that map 1:1 to the ilert API – built for DevOps and SRE workflows. Assistants can safely read context and take action on Alerts and Incidents without brittle UI scripting.
What you can do:
Manage alerts – list, inspect, comment, acknowledge, resolve, escalate, reroute, add responders, and run predefined alert actions.
Open incidents – create incidents with severity, service, and responders directly from the assistant.
Look up context – find users, services, alert sources, escalation policies, schedules, and your own profile to act with confidence.
Typical flow:
Discover context with read tools (for example, find services → list alerts → show alert details).
Propose and confirm a write action (for example, accept or resolve an alert, create an incident, or invoke an alert action).
Keep everything auditable and permissioned via your ilert API key scopes.
How to use the ilert MCP server
Step 1: Create an ilert API key. In ilert, go to Profile → API Keys and create a user API key. Use least-privilege scopes and store the key securely.
Step 2: Configure your MCP client (recommended: Streamable HTTP in Cursor). In Cursor → Settings → Tools & MCP → Add New MCP Server, add the following to your mcp.json:
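A configuration of roughly the following shape works in Cursor; the endpoint URL and API key below are placeholders that you replace with the values from the ilert documentation and your own account:

```json
{
  "mcpServers": {
    "ilert": {
      "url": "https://<your-ilert-mcp-endpoint>/mcp",
      "headers": {
        "Authorization": "Bearer <your ilert API key>"
      }
    }
  }
}
```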
After saving the configuration, ilert should appear in your MCP server list, and its tools will be available in the client UI. For more information, check the documentation.
A few real scenarios
Scenario 1: Create an Alert in ilert
[Screenshot: Cursor interface]
Scenario 2: Comment on the incident and resolve it
[Screenshot: Cursor interface]
Conclusions
MCP gives operations teams a standardised way to integrate incident response and alerting into AI assistants. ilert’s open MCP server, built with Deno and TypeScript using the official MCP SDK, securely exposes incidents, alerts, on-call information, and more over a remote transport. Connect Claude, Cursor, or ChatGPT today and manage incidents directly from your assistant.
Call flows let you design voice workflows with nodes like “Audio message,” “Support hours,” “Voicemail,” “Route call,” and much more. The ilert Terraform provider now includes an ilert_call_flow resource so you can version and promote these flows across environments. This blog post offers an overview of managing call flows in Terraform, detailing the benefits and key scenarios.
Benefits of managing call flows via the Terraform provider
The ilert_call_flow Terraform resource enables you to define node-based call flows as code, alongside alert sources, escalation policies, and on-call schedules. This brings call routing under the same IaC process you may already use for ilert.
Storing call flows in Terraform makes changes controlled, testable, and auditable. You gain code review, diffs before apply, consistent promotion between staging and production, and easy rollback. Teams can also import existing UI-created resources into state to avoid rebuilds.
Simple call flow: create alert
A call flow for creating alerts when somebody picks up the call looks like this:
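The following HCL is a hedged sketch of such a flow; the node and attribute names are illustrative assumptions rather than the provider’s documented schema, so consult the ilert provider docs for the exact syntax.

```hcl
# Hypothetical sketch: node and attribute names are illustrative assumptions,
# not the provider's documented schema (see the ilert provider docs).
resource "ilert_call_flow" "create_alert" {
  name     = "Create alert on answered call"
  language = "en"

  # ROOT waits for an incoming call
  root_node {
    # On ANSWERED, hand off to a node that creates an alert
    branch {
      type = "ANSWERED"

      node {
        type            = "CREATE_ALERT"
        alert_source_id = ilert_alert_source.hotline.id
      }
    }
  }
}
```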
What happens: ROOT waits for an incoming call. On ANSWERED, the flow creates an alert in the given alert source, so your existing escalation policies take over.
Please note that phone numbers cannot be assigned via Terraform. After the first terraform apply, assign a number to the call flow in the web UI.
Complex call flow: Support hotline
You can create more sophisticated flows, too. For example, a support hotline with branches during or outside of business hours. Start with the ROOT node and open the first path by reacting when the call is answered.
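Continuing with the same illustrative schema as above (still an assumption, not the provider’s documented attributes), such a hotline could branch on support hours like this:

```hcl
# Hypothetical sketch, same illustrative schema as above.
resource "ilert_call_flow" "support_hotline" {
  name     = "Support hotline"
  language = "en"

  root_node {
    # Open the first path when the call is answered
    branch {
      type = "ANSWERED"

      node {
        type             = "SUPPORT_HOURS"
        support_hours_id = ilert_support_hour.business.id

        # During business hours: route the call to the on-call responder
        branch {
          type = "DURING"
          node { type = "ROUTE_CALL" }
        }

        # Outside business hours: record a voicemail
        branch {
          type = "OUTSIDE"
          node { type = "VOICEMAIL" }
        }
      }
    }
  }
}
```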