For 911 centers, Quality Assurance (QA) has never been optional. It’s how leaders protect call quality, reinforce standards, reduce risk, and support the people making life-or-death decisions every day. It’s also what allows centers to move beyond surface-level metrics and understand how calls are actually being handled: where performance is strong, where it’s inconsistent, and where coaching is needed.
That distinction matters more than ever. As one experienced 911 industry professional put it, “speed and volume are critical, but they don’t tell the whole story.” A center can answer calls quickly and still miss deeper issues in protocol adherence, dispatch coordination, or caller care. Leaders may know how many calls came in and how fast they were answered, but that alone does not provide a full picture of performance. Quality Assurance closes this gap.
And yet, in many 911 centers, QA is also one of the first things to fall through the cracks.
The Reality: Critical, But Hard to Sustain
The issue is not a lack of awareness or commitment. Rather, it’s that in its traditional form, QA is difficult to sustain.
In practice, it’s largely manual. Supervisors and evaluators spend hours locating calls, reconstructing incidents, listening to recordings in real time, scoring performance, and documenting results. Even in well-resourced centers, it can take 30 minutes to more than an hour to fully evaluate a single call. When that level of effort is multiplied across an entire operation, the process quickly becomes unsustainable.
The data reinforces this reality. Recent benchmark research shows that more than three-quarters of 911 centers still perform QA manually. That statistic is telling, not because it reflects resistance to change, but because it highlights the operational burden that QA places on already stretched teams. The intent to do more is there, but the capacity simply isn’t.
This is the contradiction at the heart of QA in 911 today: it is critically important, but operationally difficult to maintain at scale.
Where the Model Breaks Down
When QA becomes too manual, something inevitably gives.
Supervisors are already responsible for staffing, real-time operational support, escalations, training oversight, wellness, and retention. Adding hours of QA work often means something else gets deprioritized. In many centers, that “something” is coaching.
That tradeoff has real consequences. Telecommunicators are handling urgency, ambiguity, and emotional intensity all at once. Small differences in how a call is handled can have outsized impacts on outcomes. Small coaching moments can too. But in a manual QA model, coaching is built on a narrow and delayed view of performance.
A telecommunicator may handle hundreds of calls in a month, yet feedback may be based on just a handful. Those calls may not reflect overall performance, and by the time coaching occurs, the moment has passed. Details are less clear, context has faded, and the opportunity to meaningfully influence behavior has diminished. Over time, this dynamic can erode the effectiveness of QA, turning what should be a developmental tool into a compliance exercise.
The Limits of Sampling
The APCO/NENA sampling model provides an important and necessary foundation for QA. It ensures that calls are reviewed consistently and that a baseline level of oversight is maintained.
But it was never intended to be the full picture. As one industry expert explained, those standards are “absolutely appropriate as a baseline,” but they do not capture everything happening beyond that small percentage of calls.
In a manual system, sampling is unavoidable, yet it comes at a cost. When only a fraction of calls are reviewed, critical moments and performance trends can be missed. Strong performance can go unrecognized, while recurring issues may remain hidden until they become more serious.
In high-volume environments, this limitation becomes even more pronounced. The vast majority of interactions are never formally evaluated, creating a gap between what leaders know is necessary and what is realistically possible. That gap is not due to a lack of understanding. It is due to a lack of scalable tools.
What Changes When AI Enters the Picture
This is where AI begins to change the conversation in a meaningful way.
The value of AI in QA is not simply that it reduces workload, although it certainly does. Its real impact is that it removes the constraints that have historically limited what QA can achieve. AI can automatically transcribe calls, generate summaries, surface key moments, and analyze interactions at scale. It can apply QA frameworks consistently, pre-score evaluations, and highlight where key steps were met or missed, all before a supervisor begins their review.
As one practitioner described it, AI is particularly effective at “high-volume, rules-based tasks,” while humans remain essential for judgment, context, and care. That distinction is important, because it reflects the role AI is actually playing—not as a replacement for human expertise, but as a force multiplier.
Instead of starting every evaluation from scratch, supervisors begin with structure and insight already in place. They can focus less on gathering information and more on understanding it. That shift changes not just the speed of QA, but its quality.
From Limited Visibility to Meaningful Insight
The most significant change AI brings is visibility.
Instead of reviewing a small sample of calls, centers can begin to see across a much broader portion of their operation. Instead of asking which calls they have time to review, leaders can focus on which calls most need their attention. High-acuity incidents, performance outliers, repeated call types, and high-stress interactions can all be surfaced more easily and reviewed more consistently.
This aligns with what many in the field are now describing as a more modern approach to QA, one that extends beyond compliance-driven sampling to include incident-driven, performance-driven, and even wellness-focused review. It becomes possible not only to evaluate what happened, but to understand patterns across calls, teams, and time.
That shift allows QA to move from a retrospective process into something far more proactive and operationally relevant.
Turning Insight Into Coaching
However, visibility alone does not improve performance. What really matters is what leaders do with it.

This is where AI has an equally important impact on coaching. Quality Assurance only creates value when it leads to action, and one of the longstanding challenges in 911 has been translating insight into consistent, timely coaching.
AI helps close that gap. It does not simply flag issues; it provides context. It highlights specific behaviors, identifies areas for improvement, and can generate coaching suggestions tied directly to real interactions. Instead of forcing supervisors to build every coaching conversation from scratch, it gives them a structured starting point.
This makes a meaningful difference. Feedback becomes more timely, more representative, and more grounded in actual performance. It also becomes more actionable. Telecommunicators can better understand what needs to change and why, and they can address small issues before they become persistent habits.
Research supports this shift as well. More than half of surveyed centers in a recent Benchmark Report said that AI-generated coaching recommendations are an important capability, reinforcing the idea that the value of QA lies not just in identifying issues, but in enabling better coaching outcomes.
AI Doesn’t Replace People. It Empowers Them
None of this works without the human element.
There is a clear consensus across the industry that AI should assist, not replace. As one expert put it, AI is best suited for data-heavy analysis, while humans bring the judgment, trust, and context required to interpret and act on those insights.
That balance is critical in a high-stakes environment like 911.
AI can process vast amounts of data, identify patterns, and surface insights that would otherwise go unnoticed. But it cannot determine how those insights should be applied. It cannot build trust with a telecommunicator or deliver feedback in a way that supports improvement. That remains the role of the supervisor.
The most effective model is a hybrid one, where AI handles the heavy lifting and humans remain firmly in control of evaluation, coaching, and decision-making. In that model, AI enhances leadership rather than replacing it.
A Better Model for QA, and for Leadership
The real story here is not just about technology. It is about what Quality Assurance can become when it is no longer constrained by manual effort.
Instead of a limited, retrospective process, QA becomes continuous, visible, and actionable. It becomes a tool not just for compliance, but for coaching, performance improvement, and operational awareness. Leaders gain the ability to see more clearly, act more quickly, and support their teams more effectively.
Perhaps most importantly, it gives supervisors back time: time to coach, time to mentor, and time to lead.
In a profession where every second matters, that shift is significant. Because in the end, Quality Assurance was never meant to be a box-checking exercise. It was meant to strengthen performance, reduce risk, and support the people communities rely on when everything is on the line.
AI does not change that purpose.
It makes it possible to achieve it.