AI-first technology for modern teams with fast response times
ilert is the AI-first incident management platform, designed from the ground up as a single application that covers the entire incident response lifecycle.
Share your scheduling needs in a simple, chat-like interface. Add team members, rotation rules, and timeframes — and get a ready-to-use on-call calendar everyone can access.
Let AI take the call
Introducing the ilert AI Voice Agent – your first responder for calls, gathering key details and informing your on-call engineers.
Status updates in no time
ilert AI analyzes your system and incidents, offering quick updates and managing communications for efficient issue resolution.
ilert Responder – your real-time incident advisor
ilert Responder is an intelligent agent that analyzes incidents in real time. It connects to your observability stack, investigates alerts across systems, and surfaces actionable insights, without taking control away from your team.
Features
Analyze logs, metrics, and recent changes autonomously
Identify root causes and similar past incidents
Suggest responders, rollback paths, or related services
Ask questions in natural language and get direct, evidence-backed answers
Integrations
Get started immediately using our integrations
ilert seamlessly connects with your tools using our pre-built integrations or via email. ilert integrates with monitoring, ticketing, chat, and collaboration tools.
See how industry leaders achieve 99.9% uptime with ilert
Organizations worldwide trust ilert to streamline incident management, enhance reliability, and minimize downtime. Read what our customers have to say about their experience with our platform.
We’re excited to announce that ilert now offers a native integration with Livewatch, unlocking seamless incident escalation from monitoring to response. Starting today, all alerts generated by Livewatch can be automatically ingested, grouped, escalated, and managed from within ilert – closing the loop between detection and resolution.
What is Livewatch?
Livewatch is a German-based server and website monitoring solution that helps you track uptime, detect performance deviations, and get notified the moment something goes wrong.
Key facts about Livewatch:
Founded in 2006, with nearly two decades of experience in monitoring services.
Their infrastructure is globally distributed, enabling checks from multiple regions for improved reliability.
They offer a pay-per-use billing model (with optional subscription enhancements) so you only pay for the checks and alerting you actually use.
Their dashboard supports configuration of multiple “check types” (ping, HTTP, content checks, etc.) and stores historical results for reporting and trend analysis.
Because Livewatch is lightweight, flexible, and cost-efficient, many operations teams in Germany and beyond use it as their go-to uptime monitoring tool.
Why this integration matters
A monitoring alert is only as valuable as the process that follows. With the new Livewatch → ilert integration, you get:
Automated alert escalation: Alerts fired in Livewatch are sent into ilert, so you don't have to manually act on emails or dashboards.
Smart alert grouping & deduplication: You can configure how alerts from Livewatch are grouped in ilert to avoid noise and reduce alert fatigue.
On-call & escalation policies in play: The right people get alerted based on schedules, escalation rules, and fallback rules.
Automatic resolution: When Livewatch sends a “recover” event (i.e., the check transitions back to OK), ilert automatically resolves the corresponding alert.
Centralized incident visibility: Your entire incident response workflow – alerts, acknowledgments, escalations, status updates – lives in ilert, including alerts triggered by Livewatch.
Faster response times: By cutting out manual handoffs, your team can detect, triage, and respond faster.
Together, ilert and Livewatch make operations tighter, faster, and more reliable.
How to set it up
Configuration takes just a few minutes in both tools.
Find Livewatch among the alert sources in ilert and add it. Follow the setup guide, and at the end you will receive a unique integration URL that you will need in the next step.
In Livewatch, navigate to Contacts and select ilert. Paste the integration URL generated in ilert into the corresponding field.
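To make the handoff concrete, here is a minimal sketch of what a monitoring-to-ilert webhook event could look like. This is illustration only: Livewatch sends these events itself once the integration URL is configured, and the payload fields shown here are hypothetical, not the actual Livewatch or ilert schema.

```python
# Illustration only: Livewatch posts events to the integration URL itself once
# configured; the field names below are hypothetical, not the real schema.
import requests

ILERT_INTEGRATION_URL = "https://..."  # the unique URL generated in ilert

def forward_check_event(check_name: str, state: str, details: str) -> None:
    """Forward a check-state transition to the ilert alert source URL.

    Conceptually, a "down" transition opens (or deduplicates into) an alert
    in ilert, while an "ok" transition resolves the corresponding alert.
    """
    payload = {
        "check": check_name,   # hypothetical field
        "state": state,        # e.g. "down" or "ok"
        "details": details,    # hypothetical field
    }
    response = requests.post(ILERT_INTEGRATION_URL, json=payload, timeout=10)
    response.raise_for_status()

# Example: a failing HTTP check, later followed by its recovery
forward_check_event("shop-frontend HTTP", "down", "HTTP 503 from 3 regions")
forward_check_event("shop-frontend HTTP", "ok", "Check passing again")
```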
ilert now offers a native integration with Apica that connects telemetry events to ilert’s alerting, on-call, and incident communication. It helps SRE, DevOps, and IT operations teams turn detection into action faster, reduce alert noise with the aid of AI, and keep stakeholders informed without unnecessary notifications.
Highway to faster development
Mistakes are inevitable when you move fast and expand your product, but they shouldn't slow you down. Apica and ilert make product changes less stressful and IT incidents manageable, so you are prepared for unpredictable events and can meet the unexpected with the right tools at hand.
The fastest way to reduce time from detection to resolution is to shorten the path from signal to the right human with the right context. Apica detects performance and availability issues across websites, apps, APIs, and more; ilert turns those signals into alerts tied to escalation policies and teams, so responders see ownership and next steps immediately. That means fewer handoffs, quicker acknowledgment, and fewer minutes lost before mitigation starts.
Add ilert AI on top to group similar events by content similarity and deduplicate repeats, and your on-call stays focused on the primary incident rather than clearing look-alikes. When a failing Apica check returns to OK, ilert automatically resolves the linked alert, preventing stale notifications from lingering.
Apica capabilities and strengths
Apica Ascent offers a modular approach to telemetry data management. It includes four products: Fleet for agent management, Flow for telemetry pipelines, Lake for storage, and Observe for analytics.
Fleet deploys and manages OpenTelemetry and Fluent Bit collectors at scale, making it straightforward to start streaming logs and metrics.
Flow gives “never block, never drop” pipeline control with InstaStore-backed infinite buffering, real-time transform/enrich/route, elastic Kubernetes-native scaling, and 200+ integrations with existing stacks like Datadog, Elastic, Kafka, and S3.
Lake is a single-tier, object-storage data lake with patented InstaStore for indexed, on-demand access and long-term retention.
Observe correlates logs, metrics, traces, events, and web performance in one view and adds automatic anomaly detection, root-cause analysis, dashboards, alerting, and reporting.
Apica can reduce observability spend by up to 40% by decoupling compute from storage, supporting any object store, and letting teams choose what to index and when. The platform is ISO 27001 and SOC 2 certified and supports SaaS, hybrid, and on-prem deployments.
Integration features
Here are the capabilities of the native Apica and ilert integration that users will benefit from:
Native triggering: Apica issues/outages create alerts in ilert via a dedicated Apica alert source; setup is point-and-click on both sides.
Auto-resolve: When an Apica alert returns to OK, the linked ilert alert is resolved automatically.
On-call routing and escalations: Choose an escalation policy during setup so Apica-originated alerts page the right on-call and follow your escalation rules.
Noise reduction with intelligent grouping: Enable alert grouping and filtering with the help of ilert AI to collapse near-duplicates and focus only on what matters.
Event flows for enrichment and control: Use visual ilert Event flows to branch on conditions (e.g., severity or support hours), route, or suppress Apica events before they page anyone, as sketched below.
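For illustration, here is a rough sketch of the kind of branching an Event flow might express for Apica events. ilert Event flows are configured visually, not in code; the severity field, support-hours window, and outcome names below are assumptions made for the example.

```python
# Conceptual sketch only: ilert Event flows are built visually, not in code.
# Field names, the support-hours window, and outcome labels are assumptions.
from datetime import datetime, time

SUPPORT_HOURS = (time(8, 0), time(18, 0))  # assumed 08:00-18:00 support window

def route_apica_event(event: dict) -> str:
    """Decide what happens to an incoming Apica event before it pages anyone."""
    severity = event.get("severity", "minor")  # hypothetical event field
    now = datetime.now().time()
    in_support_hours = SUPPORT_HOURS[0] <= now <= SUPPORT_HOURS[1]

    if severity == "critical":
        return "escalate-immediately"          # page the on-call right away
    if severity == "minor" and not in_support_hours:
        return "suppress-until-support-hours"  # keep the alert, skip the page
    return "default-escalation-policy"
```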
How to use the integration
To start using Apica and ilert, you need an account for each; both solutions offer free trials and paid plans.
To start sending Apica events to ilert, navigate to the Alert sources menu in ilert and select the Apica tile. The connection is straightforward and takes no more than five minutes. Find the step-by-step guide in the ilert documentation.
We are happy to help
If you have any remaining questions, please don't hesitate to reach out to the ilert or Apica team.
The ilert AI Voice Agent is designed to transform how on-call engineers handle urgent calls. Instead of waking engineers at 3 a.m. with minimal context, the AI Voice Agent collects essential details first and routes calls intelligently based on relevant, up-to-date information.
The agent works hand in hand with ilert’s Call Flow Builder – a visual tool that lets users design custom call flows by connecting configurable nodes. Each node represents a step in the call handling process, and the AI Voice Agent is one such node.
This means you can drop the AI into exactly the right place in your call handling logic, making the process seamless and highly customizable.
In this article, we’ll explore the problem it solves, how it is built, how it delivers natural and context-aware conversations, and how we keep it secure and reliable in production.
Beta Notice: The ilert AI Voice Agent is currently available in Beta. Users with the Call Flow Builder add-on can request early access by contacting support@ilert.com.
The problem we’re solving, and why it matters for on-call engineers
On-call engineers often receive urgent calls with minimal context, forcing them to ask repetitive questions before they can take action. This wastes valuable time in high-pressure situations.
Here is how the ilert AI Voice Agent addresses this:
Saving time: The AI collects key details before an engineer is called, allowing them to start troubleshooting immediately instead of asking basic qualifying questions. It also reduces unnecessary escalations by checking for open incidents and informing callers if the issue is already being handled.
Visual call flow integration: Add AI Voice Agent nodes directly into your call flow with an easy-to-use interface, so it becomes part of your existing logic without manual workarounds.
Customizable information gathering: Define exactly what data is collected, such as caller name, contact number, email, incident description, affected services, or custom fields.
Architecture: How the ilert AI Voice Agent works
Under the hood, the AI Voice Agent is designed for modular, configurable interactions with low latency.
Key components:
WebSockets – Provide a low-latency channel for conversational AI with OpenAI.
Twilio integration – Streams live audio to and from callers.
Visual flow builder – Configure AI Voice Agent nodes directly in the Call Flow Builder.
Modular configuration:
Intents – Pre-built or custom, define how calls are routed based on the caller's purpose.
Gathers – Structured data collection (e.g., contact details, incident descriptions).
Enrichment – Optionally pull data from configured sources such as ilert Status Pages, service states, open incidents, or active maintenance windows.
Audio messages – Fully customizable greetings and prompts.
Fallback handling – A “catch-all” branch for unmatched conversations.
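To make this modularity tangible, here is a sketch of what an AI Voice Agent node configuration could look like. The field names are illustrative assumptions, not the actual Call Flow Builder schema.

```python
# Hypothetical sketch of a voice agent node configuration; the keys below are
# illustrative and do not reflect the actual Call Flow Builder schema.
voice_agent_node = {
    "greeting": "Thanks for calling. How can I help you today?",  # audio message
    "intents": [
        {"name": "report_incident", "next_node": "create_alert"},
        {"name": "ask_status", "next_node": "read_status_page"},
    ],
    "gathers": ["caller_name", "callback_number", "incident_description"],
    "enrichment": {
        "status_pages": True,         # pull current service states
        "open_incidents": True,       # tell callers about known issues
        "maintenance_windows": True,
    },
    "fallback": {"next_node": "route_to_on_call"},  # catch-all branch
}
```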
During the development of the AI Voice Agent, the team faced several complex technical challenges.
One of the first hurdles was tracking who was speaking at any given time. Both Twilio and OpenAI send speaker events, and the system needed to reliably determine whether the bot or the user was speaking in real time. This was essential to avoid interruptions or missed messages during a conversation.
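As a simplified illustration of this speaker tracking, the sketch below assumes speaker events from both streams have already been normalized into “speech started” and “speech stopped” signals; the real Twilio and OpenAI event formats differ and are more granular.

```python
# Simplified sketch: assumes speaker events from both streams are normalized
# into start/stop signals; real Twilio and OpenAI events are more granular.
import time

class SpeakerTracker:
    """Tracks who is speaking so the bot can yield to the caller (barge-in)."""

    def __init__(self) -> None:
        self.current_speaker: str | None = None   # "bot", "caller", or None
        self.last_activity: float = time.monotonic()

    def on_speech_started(self, source: str) -> None:
        # If the caller starts talking while the bot is speaking, the bot
        # should stop its audio output instead of talking over the caller.
        if source == "caller" and self.current_speaker == "bot":
            self.interrupt_bot()
        self.current_speaker = source
        self.last_activity = time.monotonic()

    def on_speech_stopped(self, source: str) -> None:
        if self.current_speaker == source:
            self.current_speaker = None
        self.last_activity = time.monotonic()

    def interrupt_bot(self) -> None:
        # Placeholder: in a real system this would cancel the in-flight AI
        # response and flush any audio already buffered towards the caller.
        pass
```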
Another major challenge was ensuring a natural conversation flow. Creating smooth, human-like interactions required extensive prompt engineering and fine-tuning. The pacing, tone, and responsiveness of the AI had to be carefully controlled to make the experience feel intuitive and engaging for users.
Finally, synchronizing multi-stream connections proved to be a critical task. The system had to maintain accurate state information between Twilio streams, OpenAI responses, and ilert’s backend. This synchronization was vital for preserving context consistency throughout the conversation.
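A rough sketch of that orchestration pattern is shown below: one shared call-state object is updated from several concurrent streams. The async clients and event fields are hypothetical stand-ins for the Twilio, OpenAI, and ilert backend connections.

```python
# Sketch of keeping one shared call state across concurrent streams; the
# stream clients and event fields are hypothetical stand-ins.
import asyncio
from dataclasses import dataclass, field

@dataclass
class CallState:
    call_id: str
    transcript: list[str] = field(default_factory=list)
    resolved_intent: str | None = None

async def run_call(call_id: str, twilio_stream, openai_stream, backend) -> None:
    state = CallState(call_id)
    lock = asyncio.Lock()  # serialize updates arriving from different streams

    async def pump_caller_audio():
        async for chunk in twilio_stream:          # hypothetical async iterator
            await openai_stream.send_audio(chunk)  # hypothetical client method

    async def consume_ai_events():
        async for event in openai_stream:          # hypothetical async iterator
            async with lock:
                if event.type == "transcript":
                    state.transcript.append(event.text)
                elif event.type == "intent":
                    state.resolved_intent = event.name
            await backend.sync(state)              # keep the ilert backend current

    await asyncio.gather(pump_caller_audio(), consume_ai_events())
```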
Making conversations natural, accurate, and context-aware
The Voice Agent goes beyond traditional voice menus by combining intent recognition with optional context enrichment.
With configurable context enrichment, the agent receives its intents, gathers, and potential follow-up nodes, and captures the caller’s number during call initialization. If enrichment is enabled, it can also access additional data, such as open incidents, current service states, and active maintenance windows. This allows the agent to provide more relevant and timely responses.
Through intent-based routing, the system matches the caller’s intent to the appropriate branch of the call flow, enabling faster and more accurate resolution of requests.
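A minimal sketch of this routing idea follows; the intent names and branch identifiers are invented for the example and do not come from the ilert configuration.

```python
# Minimal sketch of intent-based routing: the recognized intent selects the
# next branch of the call flow. Intent and branch names are invented examples.
CALL_FLOW_BRANCHES = {
    "report_incident": "gather_incident_details",
    "ask_status": "read_service_status",
    "reach_on_call": "connect_to_engineer",
}

def next_branch(recognized_intent: str) -> str:
    # Anything unmatched falls through to the catch-all (fallback) branch.
    return CALL_FLOW_BRANCHES.get(recognized_intent, "fallback")

assert next_branch("ask_status") == "read_service_status"
assert next_branch("small_talk") == "fallback"
```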
Security, compliance, and observability in production
Reliability and compliance are built in from the start. Here are three major principles:
Stateless design: No persistent storage of caller data between requests.
System prompts with operational rules: The AI follows strict, pre-defined guidelines to ensure security and consistent responses.
Detailed call logging: Logs all call events for troubleshooting and performance review.
Lessons learned
During development and early Beta testing, we learned a great deal about delivering smooth, reliable AI-powered conversations. Allowing the AI to be interrupted by the user turned out to be a key feature – many callers prefer to skip the rest of a question or add details they forgot earlier.
However, this made it even more important to track who is speaking at any given time. By monitoring speaker activity, we can detect long periods of silence and prevent calls from running indefinitely when no one is talking.
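To make the silence handling concrete, here is a minimal watchdog sketch that builds on the SpeakerTracker sketch above; the 30-second threshold and the end_call callback are assumptions for the example.

```python
# Sketch of a silence watchdog building on the SpeakerTracker sketch above;
# the threshold and the end_call callback are assumptions for the example.
import asyncio
import time

MAX_SILENCE_SECONDS = 30  # assumed cut-off; not an actual product setting

async def watch_for_silence(tracker, end_call) -> None:
    """Periodically check speaker activity and end calls that went quiet."""
    while True:
        await asyncio.sleep(5)
        idle = time.monotonic() - tracker.last_activity
        if tracker.current_speaker is None and idle > MAX_SILENCE_SECONDS:
            await end_call(reason="silence-timeout")
            return
```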
Coordinating multiple live connections (Twilio, OpenAI, ilert backend) still required careful orchestration to ensure the call state stayed synchronized at all times. Prompt engineering proved essential in making conversations sound natural while ensuring the AI followed operational rules and safety guidelines.
What’s next?
The Beta release has already sparked new ideas for improvements. We plan to extend logging capabilities and provide full recordings of conversations for review and compliance purposes.
To improve flexibility, the AI Voice Agent will gain adjustable speaking speed and verbosity settings, allowing teams to fine-tune the interaction style. We are also exploring ways to detect when callers are frustrated and offer them an immediate option to speak with a human operator.
On the transcription side, we aim to enhance the ilert user experience by moving from Twilio’s built-in transcription to AI-powered voice transcription. This will provide more accurate and context-aware briefings for on-call engineers before a call is connected.
Conclusion
The ilert AI Voice Agent bridges the gap between urgent incident calls and the actionable details engineers need to respond quickly. By integrating directly with ilert’s incident management platform, it delivers natural, context-aware, and secure conversations while giving teams the flexibility to adapt the interaction to their workflows.
With upcoming features such as multilingual support, transcripts, and deeper integrations, the Voice Agent will further reduce on-call friction and accelerate incident response.