Introduction
We take a structured approach to monitoring AI voice calls to ensure high quality and reliability, combining automated analysis with manual review so we can quickly identify and resolve issues.
Automated Monitoring
Every AI voice call undergoes post-call analysis that checks key quality indicators such as customer sentiment, booking errors, and caller intent.
Using this data, alongside Spare system logs, we automatically flag calls that may require further investigation.
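The flagging step above can be sketched as a simple rule over the post-call indicators. The record fields, names, and thresholds below are illustrative assumptions, not the actual system's schema:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical shape of a post-call analysis record; the real
# indicator names and thresholds are illustrative assumptions.
@dataclass
class CallAnalysis:
    call_id: str
    sentiment_score: float      # e.g. -1.0 (negative) .. 1.0 (positive)
    booking_errors: int         # booking errors detected during the call
    unresolved_intents: List[str] = field(default_factory=list)

def should_flag(analysis: CallAnalysis,
                sentiment_threshold: float = -0.3) -> bool:
    """Flag a call for manual review if any quality indicator looks off."""
    return (
        analysis.sentiment_score < sentiment_threshold
        or analysis.booking_errors > 0
        or len(analysis.unresolved_intents) > 0
    )
```

In practice, a rule like this would run after each call and route flagged calls into the targeted manual-review queue described below.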
Manual Monitoring
In addition to automation, our team conducts manual reviews in three key scenarios:
Post-launch monitoring: For each new AI voice launch, we review the first 200 calls to confirm that the system is performing as expected with the client's setup.
Ongoing monitoring: As part of continuous quality control, we manually review a random sample of approximately 3% of calls, in addition to the other reviews described here. This ensures consistency, catches errors, and validates overall AI performance.
Targeted monitoring of flagged calls: When our automated systems detect potential issues, we manually review those specific calls to confirm and diagnose any problems.
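The ongoing 3% random sample can be sketched as follows. The 3% rate comes from the process above; the sampling mechanism and function name are illustrative assumptions:

```python
import random
from typing import List, Optional

def sample_for_review(call_ids: List[str], rate: float = 0.03,
                      seed: Optional[int] = None) -> List[str]:
    """Select a random ~3% sample of recent calls for manual review.

    A seed can be supplied for reproducible audits; at least one call
    is sampled whenever any calls exist.
    """
    rng = random.Random(seed)
    k = max(1, round(len(call_ids) * rate)) if call_ids else 0
    return rng.sample(call_ids, k)
```

Sampling uniformly at random, rather than only reviewing flagged calls, helps validate that the automated flagging itself is not missing issues.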
Post-Monitoring Actions
When issues are identified, we first assess their scope and impact. Based on this assessment, we prioritize fixes to ensure the most critical improvements are addressed quickly. Our dedicated development team continuously works on these enhancements while also incorporating client feedback.
