Best practices

Now that you know what information AutoQA has access to and what output to expect, the last step is to follow a few good practices when setting up your criteria and instructions, so you can make the most of it.

Instructions

AutoQA only uses the fields provided in the AutoQA section of the criterion form. Instructions for evaluators in the Criterion set-up part of the form will be ignored.

Instructions usually consist of 3 parts:

  • Checks: This is a description of the check AutoQA should carry out. If multiple checks are required, they can be listed as bullet points. Below each check, you should outline, in sub-bullet points, any additional context relevant to the check, e.g. exceptions to the rule or definitions of terms.

  • Notes or additional context: If there is additional context that applies to all the checks, it should be outlined after the checks. If it is brief, it can be added directly here. For longer text or tables, we recommend adding it to a knowledge base article linked to the criterion.

  • Scoring methodology: This outlines how reviewee mistakes translate into the final rating. You should use the name or score of the rating that applies in each case. There are no restrictions on the methodology you can apply; however, we typically see two patterns:

    • Each mistake type is mapped to a certain score: A failure on a given check corresponds to a specific score. If there are multiple mistake types, the lowest score is selected. In this case, you can add the scoring methodology under each check as a bullet point.

    • Different mistake types and counts contribute to the score: Here, the final score is decided collectively based on all the mistake types and the number of mistakes. Each mistake type can have the same or a different weight.
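The two scoring patterns above can be sketched in a few lines of code. This is a minimal illustration, not Intryc's implementation; the rating names, scores, and weights are hypothetical examples.

```python
# Hypothetical rating scale; your criterion's names and scores will differ.
RATINGS = {"Great": 100, "Okay": 50, "Poor": 0}

def score_lowest(mistake_types: set[str], mapping: dict[str, int]) -> int:
    """Pattern 1: each mistake type maps to a score; the lowest is selected."""
    if not mistake_types:
        return max(RATINGS.values())          # no mistakes: highest score
    return min(mapping[m] for m in mistake_types)

def score_weighted(mistakes: dict[str, int], weights: dict[str, int]) -> int:
    """Pattern 2: mistake types and counts contribute collectively.

    Each mistake type has a weight; the total penalty is subtracted
    from the highest score, floored at the lowest score.
    """
    penalty = sum(weights[m] * count for m, count in mistakes.items())
    return max(max(RATINGS.values()) - penalty, min(RATINGS.values()))

# Example: a "tone" mistake maps to Okay (50), "wrong_info" to Poor (0).
mapping = {"tone": 50, "wrong_info": 0}
score_lowest({"tone", "wrong_info"}, mapping)  # -> 0 (lowest wins)
```

The point of the sketch is the contrast: pattern 1 ignores how many mistakes occurred and keys only on the worst type, while pattern 2 accumulates a penalty from both the types and their counts.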

Example

  • Checks: The checks include the first and next response time SLAs. Here, they are mentioned together; however, we could have used one bullet point for each.

  • Notes and context that apply to all checks: First and next response time SLAs are ticket-group specific, so we add this information after the checks so AutoQA knows which SLAs to check for. If we had 10 groups, this information could also live in a knowledge base article linked to the criterion.

  • Scoring methodology: If there are no mistakes, or only a few below a certain threshold, the highest score is used. In this case, mistakes for first and next response times have different weights:

    • A mistake for the first response time results in a full markdown.

    • Mistakes for next response times result in either a half or a full markdown, depending on how many there are.
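The example methodology above can be sketched as a small function. This is an assumed illustration: the rating scale (100 / 50 / 0) and the threshold for escalating next-response mistakes from a half to a full markdown are hypothetical, not Intryc defaults.

```python
# Assumed scale: 100 = no markdown, 50 = half markdown, 0 = full markdown.
NONE, HALF, FULL = 100, 50, 0

def sla_score(first_response_missed: bool,
              next_responses_missed: int,
              full_markdown_threshold: int = 2) -> int:
    """Score SLA compliance per the example methodology.

    - Any first response time mistake: full markdown.
    - No mistakes: highest score.
    - A few next response mistakes (below the threshold): half markdown.
    - At or above the threshold: full markdown.
    """
    if first_response_missed:
        return FULL
    if next_responses_missed == 0:
        return NONE
    if next_responses_missed < full_markdown_threshold:
        return HALF
    return FULL
```

Note how the first response time check dominates: it triggers a full markdown regardless of the next response count, which is what "different weights" means in this example.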

Naming conventions

In addition to the above, we recommend the following naming conventions:

  • Refer to the agent you want to evaluate as reviewee instead of "agent", "associate", or "analyst". This ensures AutoQA scores the right user when multiple agents are involved in a ticket.

  • Refer to the end user as customer instead of "user", "merchant", "partner", etc.

  • If you refer to ratings, make sure you use the correct names and scores.

  • Use Intryc-specific terminology:

    • For ticket comments, use either "message" or "internal note".

    • Refer to standard ticket fields, i.e. source or group, as "ticket source" or "ticket group".

    • Refer to ticket tags or custom ticket fields as "ticket tag {{ name }}" or "ticket field {{ name }}".

    • For ticket field events, be more explicit. For example, if you want to check whether the reviewee closed the ticket within five minutes of the last message, you should write: "Check that the reviewee updated the ticket status value to 'closed' within five (5) minutes from the last message."

Knowledge base articles

If you want to evaluate criteria that require much more context (e.g. whether the reviewee carried out the right investigation process for a certain customer query, or provided the correct solution or information for a given customer issue), you will need to link the corresponding Knowledge Base articles that contain this information to the criterion.

This can be done directly in the AutoQA section of the criterion or from the Knowledge Base.
