AI-First Incident Management That Works While You Sleep

AI SRE agents that triage, fix, and update status, while keeping humans in control.
Bechtle · GoInspire · Lufthansa Systems · Bertelsmann · REWE Digital
Benefits

AI-first technology for modern teams with fast response times

ilert is the AI-first incident management platform with AI capabilities spanning the entire incident response lifecycle.

Integrations

Get started immediately using our integrations

ilert seamlessly connects with your tools using our pre-built integrations or via email. ilert integrates with monitoring, ticketing, chat, and collaboration tools.

Transform your incident response today – start a free trial
Stay up to date

Expert insights from our blog

Product

From signal to action with ilert and Ekara integration

Enhance Ekara’s digital experience monitoring with ilert to reduce time-to-resolution using powerful AI features.

Daria Yankevich
Nov 25, 2025 • 5 min read

Modern SRE and IT operations run on two truths: you must see problems the way users do, and you must respond fast. With the new ilert and Ekara integration, you can turn Ekara’s powerful synthetic and real-user insights into actionable alerts and incidents in ilert – routed to the right on-call engineer, enriched with context, and communicated to stakeholders via status pages. The result: fewer surprises, faster recoveries, and happier users.

What is Ekara?

Ekara by the French company ip-label is a digital experience monitoring platform that combines synthetic monitoring (robots) and Real User Monitoring to detect and diagnose issues across web, mobile, APIs, business apps, and voice/IVR – deployed as SaaS, hybrid, or fully on-prem. Ekara offers no-code journey scripting, Edge/branch monitoring, and options like Flow AI and AI Incident Guard. The platform processes billions of measurements daily and is used by 400+ customers across 25 countries.

Ekara is used by enterprises across e-commerce, travel, finance, public sector, and contact centers to see performance the way users do. PVCP Group, for example, models key booking journeys to catch issues before they hurt conversions. Contact centers and telecoms run Ekara’s IVR/Voice probes to validate call flows and speech quality. Hybrid IT teams monitor thick-client and Citrix apps alongside web and APIs, including from edge sites with Ekara Pod. In short, it helps diverse teams spot real user problems early and act fast.

Why connect Ekara to ilert?

Ekara detects problems and sends alerts. ilert turns those alerts into actionable notifications and gets the right people moving fast if issues have a business impact.

  • Faster response: When Ekara sends an event, ilert notifies the on-call teams via voice, SMS, push, Slack, or Microsoft Teams. No manual steps, no guesswork.
  • Less noise, clearer focus: Similar alerts from the same scenario or region are grouped into one. Teams see a single problem to fix rather than chasing duplicates.
  • AI that speeds you up: ilert offers powerful AI features, all designed to reduce the time to resolution. ilert AI summarizes incoming information, so responders start with context, not a blank page. It also helps prepare clear status updates and later assembles blameless postmortems. AI is integrated into every stage of incident response to reduce manual burden and enable teams to react quickly.
  • Keep everyone informed: ilert's public and private status pages keep customers and stakeholders in the loop.

Step-by-step setup

[Diagram: ilert and Ekara integration schema]
  1. Create an Ekara alert source in ilert and copy the webhook URL.
  2. Configure Ekara to send alerts. Choose which events to forward: failure, recovery, threshold breach, and SLO alert.
  3. Test and verify. Trigger a test failure in Ekara. Confirm an alert opens in ilert and pages the current on-call.

A complete step-by-step guide is available at doc.ilert.com. If you experience any issues or have questions, feel free to reach out to the ilert support team at support@ilert.com.
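
If you want to verify the alert source before wiring up Ekara, you can post a test event to the webhook URL yourself. A minimal sketch follows – the URL path and payload fields are placeholders, not the documented schema, so check doc.ilert.com for the exact format:

// send-test-event.ts – hypothetical smoke test for a new alert source
const webhookUrl =
  "https://api.ilert.com/api/v1/events/ekara/<your-integration-key>"; // placeholder

async function sendTestEvent(): Promise<void> {
  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Illustrative fields mirroring what an Ekara failure event might carry
    body: JSON.stringify({
      status: "failure",
      scenario: "booking-journey",
      summary: "Synthetic check failed at step 3",
    }),
  });
  console.log(`ilert responded with ${res.status}`);
}

sendTestEvent();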

Product

Event Flows: Deep dive into feature

Build context-aware routing with a node-based workflow that enriches, evaluates, and routes events before they create Incidents – reliably and at scale.

Tim Nguyen Van
Nov 07, 2025 • 5 min read

Managing alert routing in complex environments is hard. When events occur, alerts must reach the right people at the right time, but traditional alert sources struggle with sophisticated, context-aware routing. Event Flows is ilert’s node-based workflow system at the heart of our alerting infrastructure. It enables intelligent event processing, time- and context-based routing, and safe automation, so teams reduce alert fatigue and accelerate incident response.

The pain points Event Flows addresses

As monitoring footprints grow, standard alerting patterns show limits. Event Flows targets four recurring pain points:

  • Complex routing logic. Standard alert sources may route by priority or keywords, but real-world scenarios demand decisions based on event content, custom fields, error patterns, context, and time. Different escalation paths are needed depending on the situation.
  • Time-based routing. During support hours, alerts go to the primary on‑call; after hours, they may escalate to a different team or follow a different policy. Without Event Flows, this often requires multiple alert sources or brittle external logic.
  • Context-aware decisions. The same signal can have different meanings depending on the context. A database connection error might be critical during peak hours but informational during maintenance windows. Reducing alert noise requires routing that evaluates business context.
  • Maintenance overhead. Managing many slightly different alert sources increases operational complexity and the risk of configuration drift. A single, expressive workflow reduces duplication.

Architecture and integration 

Event Flows sits between ingestion and alert source routing to deliver smart, reliable processing. It defines how incoming events are processed, transformed, and routed through a configurable sequence of logical components. Let’s take a closer look at the key components that make Event Flows both powerful and adaptable in practice.

Node-based flow builder. A tree-structured builder defines event-processing workflows. Current node types: 

  • Branch: create scenarios based on conditions. 
  • Route to alert source: direct events to a specific alert source. 
  • Support hours: route using predefined support hours.

Queue-based processing. Events arriving via ilert’s event API are first processed by Event Flows, then routed to alert sources. Processing uses AWS SQS FIFO with message groups per API key. This preserves ordering for events from the same alert source and prevents race conditions while keeping near real-time behavior.
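
ilert's internals aren't published in detail, but the pattern itself is easy to sketch with the AWS SDK; the queue URL and region below are placeholders:

import { randomUUID } from "node:crypto";
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "eu-central-1" }); // placeholder region

// FIFO queues deliver messages with the same MessageGroupId in order and
// one at a time, so grouping by API key serializes events per alert source
// without blocking events from other sources.
async function enqueueEvent(apiKey: string, event: unknown): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl:
        "https://sqs.eu-central-1.amazonaws.com/123456789012/event-flows.fifo", // placeholder
      MessageBody: JSON.stringify(event),
      MessageGroupId: apiKey, // one ordered stream per API key
      MessageDeduplicationId: randomUUID(),
    })
  );
}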

Deterministic execution. Each node evaluates conditions and chooses the next step. Execution context is preserved across nodes, enabling decisions that build on prior computations.

Data model. An Event Flow contains multiple Event Flow Nodes; each node has Event Flow Branches defining conditions and targets, mirroring the hierarchical execution path.
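
Sketched as TypeScript types, that hierarchy might look like this – the field names are illustrative, inferred from the description above:

interface EventFlow {
  id: string;
  nodes: EventFlowNode[];
}

interface EventFlowNode {
  id: string;
  type: "BRANCH" | "ROUTE_TO_ALERT_SOURCE" | "SUPPORT_HOURS";
  branches: EventFlowBranch[];
}

interface EventFlowBranch {
  condition: string;      // an ICL expression, e.g. event.priority == "HIGH"
  nextNodeId?: string;    // child node to continue with when the condition matches
  alertSourceId?: string; // set on "route to alert source" branches
}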

Powerful automation with ICL (ilert Condition Language) 

Event Flows includes a concise expression language inspired by familiar programming patterns. Write conditions that access event data, context, and system state. For more information on ICL, refer to our documentation.

What ICL can see

  • Event payload – e.g. event.priority, event.summary, event.source, custom fields
  • Time context – business hours, weekends, maintenance windows
  • Accumulated context – values written by earlier nodes

Examples
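
The sketches below show the style of such conditions. The field names follow the list above; helpers such as contains() and the supportHours flag are illustrative assumptions on our part – refer to the ICL documentation for the exact syntax.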

Route by priority:
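
  event.priority == "HIGH"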

Match text safely: 
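
  event.summary != null && event.summary.contains("timeout")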

Time-based routing using support hours:
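
  supportHours.active == false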

Combine multiple checks:
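
  event.priority == "HIGH" && event.summary.contains("timeout") && supportHours.active == false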

Conclusion

Event Flows brings intelligent context-aware routing to ilert’s alerting stack. With a node-based builder and the ilert Condition Language (ICL) for advanced logic, teams can enrich and route events before they become alerts. Strong reliability guarantees ensure less alert noise, fewer misroutes, and faster, more consistent responses.

Engineering

Webpack Fast Refresh vs Vite: What Was Faster for ilert‑ui

A qualitative look at ilert-ui’s local dev: comparing Vite and webpack Fast Refresh to see what truly improves daily DX.

Jan Arnemann
Oct 29, 2025 • 5 min read

This article shares what felt fastest in the day‑to‑day development of ilert‑ui, a large React + TypeScript app with many lazy routes. We first moved off Create React App (CRA) toward modern tooling, trialed Vite for local development, and ultimately landed on webpack‑dev‑server + React Fast Refresh.

Scope: Local development only. Our production builds remain on Webpack. For context, the React team officially sunset CRA on February 14, 2025, and recommends migrating to a framework or a modern build tool such as Vite, Parcel, or RSBuild.

Qualitative field notes from ilert‑ui: We didn’t run formal benchmarks; this is our day‑to‑day experience in a large route‑split app.

Mini‑glossary

A few helpful terms you will encounter in this article.

  • ESM: Native JavaScript module system browsers understand.
  • HMR: Swaps changed code into a running app without a full reload.
  • React Fast Refresh: React’s HMR experience that preserves component state when possible.
  • Lazy route / code‑splitting: Loading route code only when the route is visited (see the sketch after this list).
  • Vendor chunk: A bundle of shared third‑party deps cached across routes.
  • Eager pre‑bundling: Bundling common deps up front to avoid many small requests later.
  • Dependency optimizer (Vite): Pre‑bundles bare imports; may re‑run if new deps are discovered at runtime.
  • Type‑aware ESLint: ESLint that uses TypeScript type info – more accurate, heavier.
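
To make the lazy-route term concrete, a minimal React sketch – the component and path are illustrative:

import { lazy, Suspense } from "react";

// The page's chunk is downloaded only when it first renders.
const ReportsPage = lazy(() => import("./pages/ReportsPage"));

export function App() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <ReportsPage />
    </Suspense>
  );
}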

Why we left CRA

Problem statement: ilert‑ui outgrew CRA’s convenience defaults as the app matured.

Here are the reasons that pushed us away from CRA:

  • Customization friction: Advanced webpack tweaks (custom loaders, tighter split‑chunks strategy, Babel settings for react-refresh) required ejecting or patching. That slowed iteration on a production‑scale app.
  • Large dependency surface: react-scripts brought many transitive packages. Installs got slower, and security noise grew over time without clear benefits for us.

Goals for the next steps:

  • Keep React + TS.
  • Improve time‑to‑interactive after server start.
  • Preserve state on edits (Fast Refresh behavior) and keep HMR snappy.
  • Maintain predictable first‑visit latency when navigating across many lazy routes.

Why Vite looked like a better solution

During development, Vite serves your source as native ESM and pre‑bundles bare imports from node_modules using esbuild. This usually yields very fast cold starts and responsive HMR.

What we loved immediately

  • Cold starts: Noticeably faster than our CRA baseline.
  • Minimal config, clean DX: Sensible defaults and readable errors.
  • Great HMR in touched areas: Editing within routes already visited felt excellent.

Where the model rubbed against our size

In codebases with many lazy routes, first‑time visits can trigger bursts of ESM requests, and when new deps are discovered at runtime, dependency‑optimizer re‑runs that reload the page. This is expected behavior, but it made cross‑route exploration feel uneven for us.

Qualitative field notes from ilert‑ui

Methodology: qualitative observations from daily development in ilert‑ui.

Our repo’s shape

  • Dozens of lazy routes, several heavy sections pulling in many modules.
  • Hundreds of shared files and deep store imports across features.

What we noticed

  1. First‑time heavy routes: Opening a dependency‑rich route often triggered many ESM requests and sometimes a dep‑optimizer re‑run. Cross‑route exploration across untouched routes felt slower than our webpack setup that eagerly pre‑bundles shared vendors.
  2. Typed ESLint overhead: Running type‑aware ESLint (with parserOptions.project or projectService) in‑process with the dev server added latency during typing. Moving linting out‑of‑process helped, but didn’t fully offset the cost at our scale – an expected trade‑off with typed linting.
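
For reference, this is the kind of setup meant above – typed linting enabled, but executed outside the dev server. A sketch using typescript-eslint's flat config:

// eslint.config.js
import tseslint from "typescript-eslint";

export default tseslint.config(
  ...tseslint.configs.recommendedTypeChecked, // type-aware rules need type info
  {
    languageOptions: {
      parserOptions: {
        projectService: true, // let typescript-eslint discover tsconfigs
        tsconfigRootDir: import.meta.dirname,
      },
    },
  }
);

Running eslint as its own npm script (or in CI) instead of a dev-server plugin keeps the type-aware passes off the HMR path.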

TL;DR for our codebase: Vite was fantastic once a route had been touched in the session, but the first visits across many lazy routes were less predictable.

Why we pivoted to webpack‑dev‑server + React Fast Refresh

What we run:

  • webpack‑dev‑server with HMR.
  • React Fast Refresh via @pmmmwh/react-refresh-webpack-plugin and react-refresh in Babel.
  • Webpack SplitChunks for common vendor bundles; filesystem caching; source maps; error overlays; ESLint out‑of‑process.

Why it felt faster end‑to‑end for our team:

  1. Eager vendor pre‑bundling: We explicitly pre‑bundle vendor chunks (React, MUI, MobX, charts, editor, calendar, etc.). The very first load is a bit heavier, but first‑time visits to other routes are faster because shared deps are already cached. SplitChunks makes this predictable.
  2. React Fast Refresh ergonomics: Solid state preservation on edits, reliable error recovery, and overlays we like.
  3. Non‑blocking linting: Typed ESLint runs outside the dev server process, so HMR stays responsive even during large type checks.

Receipts – the knobs we turned

// webpack.config.js
module.exports = {
  optimization: {
    minimize: false,
    runtimeChunk: "single",
    splitChunks: {
      chunks: "all",
      cacheGroups: {
        // [\\/] matches both POSIX and Windows path separators
        "react-vendor": {
          test: /[\\/]node_modules[\\/](react|react-dom|react-router-dom)[\\/]/,
          name: "react-vendor",
          chunks: "all",
          priority: 30,
        },
        "mui-vendor": {
          test: /[\\/]node_modules[\\/](@mui\/material|@mui\/icons-material|@mui\/lab|@mui\/x-date-pickers)[\\/]/,
          name: "mui-vendor",
          chunks: "all",
          priority: 25,
        },
        "mobx-vendor": {
          test: /[\\/]node_modules[\\/](mobx|mobx-react|mobx-utils)[\\/]/,
          name: "mobx-vendor",
          chunks: "all",
          priority: 24,
        },
        "utils-vendor": {
          test: /[\\/]node_modules[\\/](axios|moment|lodash\.debounce|lodash\.isequal)[\\/]/,
          name: "utils-vendor",
          chunks: "all",
          priority: 23,
        },
        "ui-vendor": {
          test: /[\\/]node_modules[\\/](@loadable\/component|react-transition-group|react-window)[\\/]/,
          name: "ui-vendor",
          chunks: "all",
          priority: 22,
        },
        "charts-vendor": {
          test: /[\\/]node_modules[\\/](recharts|reactflow)[\\/]/,
          name: "charts-vendor",
          chunks: "all",
          priority: 21,
        },
        "editor-vendor": {
          test: /[\\/]node_modules[\\/](@monaco-editor\/react|monaco-editor)[\\/]/,
          name: "editor-vendor",
          chunks: "all",
          priority: 20,
        },
        "calendar-vendor": {
          test: /[\\/]node_modules[\\/](@fullcalendar\/core|@fullcalendar\/react|@fullcalendar\/daygrid)[\\/]/,
          name: "calendar-vendor",
          chunks: "all",
          priority: 19,
        },
        // catch-all for everything else under node_modules
        "vendor": {
          test: /[\\/]node_modules[\\/]/,
          name: "vendor",
          chunks: "all",
          priority: 10,
        },
      },
    },
  },
};
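
The filesystem caching mentioned in our setup is a small addition to the same config (a sketch, assuming webpack 5):

// webpack.config.js (continued)
module.exports = {
  // ...
  cache: {
    type: "filesystem", // persist the module/chunk graph across dev-server restarts
    buildDependencies: {
      config: [__filename], // invalidate the cache when this config changes
    },
  },
};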

// vite.config.ts - Vite optimizeDeps includes we tried
import { defineConfig } from "vite";

export default defineConfig({
  optimizeDeps: {
    include: [
      "react",
      "react-dom",
      "react-router-dom",
      "@mui/material",
      "@mui/icons-material",
      "@mui/lab",
      "@mui/x-date-pickers",
      "mobx",
      "mobx-react",
      "mobx-utils",
      "axios",
      "moment",
      "lodash.debounce",
      "lodash.isequal",
      "@loadable/component",
      "react-transition-group",
      "react-window",
      "recharts",
      "reactflow",
      "@monaco-editor/react",
      "monaco-editor",
      "@fullcalendar/core",
      "@fullcalendar/react",
      "@fullcalendar/daygrid",
    ],
    // Force pre-bundling of these dependencies on every server start
    force: true,
  },
});

Result: it helped some cold starts, but for our repo it didn’t smooth out first‑visit latency across many lazy routes as much as eager vendor chunks in webpack.

What we tried to speed up Vite (and what we didn’t)

What we tried in Vite

Run ESLint in a separate process
What it does: Lints in the background instead of blocking the dev server.
Impact: Faster feedback while editing.

Enable a filesystem cache
What it does: Reuses build results across restarts.
Impact: Quicker cold starts and rebuilds.

Pre-bundle third-party code (vendor split)
What it does: Bundles libraries like React once and keeps them separate from app code.
Impact: Less work on every save; snappier HMR.

These tweaks made Vite feel better – but they weren’t enough to solve our bigger performance issues, which is why we evaluated Webpack.

Things we could have tried

More aggressive optimizeDeps tuning
Why we skipped: Can help large projects, but needs careful profiling and ongoing dependency hygiene. The time cost outweighed the likely gains for us.

“Warm crawl” on server start
What it is: A script that visits routes at startup to pre-load modules and caches.
Why we skipped: Extra complexity and inconsistent payoff in real projects.

Pin versions for linked packages
What it is: Lock versions in a mono-repo to reduce Vite’s re-optimization churn.
Why we skipped: Useful in some setups, but adds maintenance overhead; not worth it before a larger rework.

Pros and cons (in our context)

Vite – pros

  • Blazing cold starts and lightweight config.
  • Excellent HMR within already‑touched routes.
  • Strong plugin ecosystem and modern ESM defaults.

Vite – cons

  • Dep optimizer re‑runs can interrupt flow during first‑time navigation across many lazy routes.
  • Requires careful setup in large monorepos and with linked packages.
  • Typed ESLint in‑process can hurt responsiveness on large projects; better out‑of‑process.

Webpack + Fast Refresh – pros

  • Predictable first‑visit latency across many routes via eager vendor chunks.
  • Fine‑grained control over loaders, plugins, and output.
  • Fast Refresh preserves state and has mature error overlays.

Webpack + Fast Refresh – cons

  • Heavier initial load than Vite’s cold start.
  • More configuration surface to maintain.
  • Historical complexity (mitigated by modern config patterns and caching).

Quick performance checks you can run locally on your project

These checks are quick, human-scale benchmarks – no profiler required. Use a stopwatch and DevTools to compare real interaction delays and perceived smoothness.

Cold start

How to test: Restart the dev server and measure the time from running npm run dev to the first interactive page load (a stopwatch works fine).

Observation: Vite typically starts faster on a cold boot. Webpack, when using its filesystem cache, remains acceptable once warmed.

First-time heavy route

How to test: Open a dependency-rich route for the first time. Watch DevTools → Network and Console for optimizer runs or request bursts.

Observation: Vite may occasionally trigger re-optimization and reloads. Webpack’s vendor chunks tend to make first visits steadier.

Cross-route navigation

How to test: Navigate through several untouched routes and note responsiveness until each becomes interactive.

Observation: Vite improves after initial loads (as modules are cached). Webpack stays consistently predictable across routes.

Linting impact

How to test: Compare ESLint running in-process versus in a separate process. Measure typing responsiveness and HMR smoothness.

Observation: Running ESLint out-of-process kept the dev server responsive and maintained smooth HMR in both setups.

Balanced guidance – when we would pick each

Choose Vite if:

  • Cold starts dominate your workflow.
  • Your module graph isn’t huge or fragmented into many lazy routes.
  • Plugins – especially typed ESLint – are light or run out‑of‑process.

Choose webpack + Fast Refresh if:

  • Your app benefits from eager vendor pre‑bundling and predictable first‑visit latency across many routes.
  • You want precise control over loaders/plugins and build output.
  • You like Fast Refresh’s state preservation and overlays.

Conclusions

Both Vite and Webpack are excellent. Given ilert‑ui’s current size and navigation patterns, webpack‑dev‑server + React Fast Refresh delivers the tightest feedback loop for us today – based on qualitative developer experience, not micro‑benchmarks. We’ll keep measuring as our codebase evolves and may revisit Vite or a framework as our constraints change.
