27 October, 2025

Criteria

You can now refine AutoQA instructions in a more targeted way. Once you have AutoQA corrections, you can select the corrections you'd like to refine the instructions based on, and get a summary and action points explaining what was changed and why.

Reporting

We have multiple updates to the AutoQA reporting.

On the Accuracy page, we've added a new metric called "Uncalibrated scores". This refers to the number of corrected scores where AutoQA and evaluator scoring are not aligned and require calibration. These are cases where:

  • The AutoQA is missing instructions

  • The evaluator contradicts the criterion

  • The evaluator contradicts the KB

  • The evaluator has made mistakes

We've also added a new drop-down which you can use to filter by AutoQA reviewer or the evaluator who did the co-pilot evaluations.

We've also added a new table that shows accuracy by AutoQA reviewer or evaluator, and both tables now support sorting by column. This allows you to easily focus on criteria with poor accuracy, or on members of your team who are not aligned with AutoQA scoring.

On the Evaluation scores page, you can also find a new drop-down to filter scores by AutoQA reviewer or evaluator. We've also added two new columns, "Reviewer of AutoQA" and "Uncalibrated score reason".

Finally, we've updated the Evaluators Assignments page to show activity for all relevant users and fixed minor bugs on the drop-downs.

Training

We've made various fixes and improvements to the call simulation flow, and adjusted the scenario generation process for simulations to allow for more variability and less predictability in the AI customer flow and script.

You can now attach images in the simulation chat simply by copy-pasting them.

Sampling rules

For all date fields, you can now select shorter relative time intervals, including 15, 30, and 60 minutes.

Tickets

All users can now use the "bookmark" functionality to save tickets.
