AutoQA Tools

AutoQA tools let AutoQA fetch extra data during an evaluation. Use them when the ticket alone does not contain the data needed to score a criterion reliably.

Setting up an AutoQA tool requires three parts:

  • Clear tool name

  • Precise AI instructions

  • Precise parameter definition

After setting up the tools, go to each criterion that requires them and:

  • Link the tools in its AutoQA configuration.

  • Update the criterion's AutoQA instructions to explain to the AI when and how to use each linked tool.

To create a new AutoQA tool, go to Tools > AutoQA Tools and click on "Create AutoQA Tool".

Creating AutoQA tools

Use the guidance below to fill in the form to set up your AutoQA tools.

A good tool:

  • Solves one clear job

  • Returns fields that matter to a criterion

  • Uses parameters that can be derived from the ticket or another tool

  • Avoids extra data that does not affect the evaluation

Tool name

Use a clear tool name. We recommend a verb + object [+ qualifier] pattern. The name should tell you exactly what the tool does.

Good examples:

  • Get customer info by email

  • Fetch agent profile by email

  • List open issues by customer ID

Avoid vague names:

  • Customer tool

  • API call

  • User endpoint


If two tools sound interchangeable, the names are not specific enough.

AI instructions

Tool AI instructions should explain more than the action. They should include:

  • When the tool is useful.

  • What input it needs and where that input comes from.

  • What fields the response contains.

  • Which fields are likely to matter in an evaluation and what these fields mean.

Good pattern:

Use this tool to retrieve customer details using the customer's email address from the ticket. The response includes the customer ID, full name, email, role, and subscription status. Use it when a criterion depends on the customer's identity or account standing.

Weak pattern:

Looks up a customer.

The more explicit the response fields are, the easier it is for AutoQA to extract the right data.
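Putting the name and instruction guidance together, a tool definition can be sketched as a name plus AI instructions. The shape below is a hypothetical illustration; the field names ("name", "ai_instructions") are assumptions, not the product's actual schema:

```python
# Hypothetical sketch of an AutoQA tool definition; the field names
# ("name", "ai_instructions") are assumptions, not the product's schema.
customer_lookup_tool = {
    # Verb + object [+ qualifier] pattern: says exactly what the tool does.
    "name": "Get customer info by email",
    # Instructions cover when to use the tool, where the input comes from,
    # and which response fields matter.
    "ai_instructions": (
        "Use this tool to retrieve customer details using the customer's "
        "email address from the ticket. The response includes the customer "
        "ID, full name, email, role, and subscription status. Use it when "
        "a criterion depends on the customer's identity or account standing."
    ),
}
```

Note how the instructions name the response fields explicitly, so an evaluation can point at "subscription status" rather than "the customer data".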

Parameters

Each parameter description should say exactly what the parameter is about and where its value comes from. Common parameter sources:

  • The ticket body

  • Ticket metadata

  • A previous tool response

Examples:

  • email: The customer's email address, as found in the ticket.

  • agent_email: The agent's email address, as found in ticket metadata.

  • user_id: The customer's internal ID, retrieved from the Get customer info by email tool.

This matters most for chained calls. If one tool depends on another tool's output, make that dependency explicit.
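The parameter examples above can be sketched as descriptions that each name their source. The dict structure is an assumption used only for illustration; the point is that every description states where the value comes from, including the cross-tool dependency on user_id:

```python
# Hypothetical sketch of parameter definitions; the structure is an
# assumption, used only to show how each description names its source.
parameters = {
    "email": {
        "type": "string",
        # Source: the ticket body.
        "description": "The customer's email address, as found in the ticket.",
    },
    "agent_email": {
        "type": "string",
        # Source: ticket metadata.
        "description": "The agent's email address, as found in ticket metadata.",
    },
    "user_id": {
        "type": "string",
        # Source: a previous tool response -- the chained dependency
        # is named explicitly.
        "description": (
            "The customer's internal ID, retrieved from the "
            "Get customer info by email tool."
        ),
    },
}
```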

Update criterion to use AutoQA tools

For the AI to use your AutoQA tools, first link the required tools to the relevant criteria. Go to the AutoQA section of each criterion and, under "Linked AutoQA tools", select the relevant tools.

Next, update the AutoQA instructions to describe when the AI should use each tool, in what order (if there are sequential dependencies), what inputs each tool should receive, and how to use the tool responses. The instructions should complement the tools, not repeat their descriptions.

Specifically, treat criterion instructions like a short data-gathering recipe:

  1. List which tools to call.

  2. Specify the order only when there is a dependency.

  3. Say which ticket field supplies each parameter.

  4. Say which response fields matter for the evaluation.

Good criterion instructions answer:

  • What should AutoQA read from the ticket?

  • Which tools should it call?

  • Which fields should it inspect in the response?

  • How should those fields affect the evaluation?

Example

Use case: evaluate whether an agent handled a customer with unresolved issues appropriately.

Configured tools:

  • Get customer info by email returns user_id, name, and subscription_status

  • Get agent profile by email returns role and experience_level

  • List open issues by user ID returns the customer's open and past issues

Criterion name: Agent appropriately acknowledged customer's open issues

Criterion instructions:

  1. Read the customer's email from the ticket.

  2. Call Get customer info by email.

  3. Read user_id from the response.

  4. Call List open issues by user ID with that user_id.

  5. Read the agent's email from ticket metadata.

  6. Call Get agent profile by email.

  7. Use the open issues, agent role, and experience level to evaluate whether the agent acknowledged the customer's unresolved issues and adapted their response appropriately.

In this example:

  • Steps 2 and 4 must happen in sequence

  • Step 6 can run independently once the agent email is available

  • The evaluation depends on explicit fields from each response
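The dependency structure of this example can be sketched with stubbed tool calls. The function names and return values below are placeholders standing in for the configured tools, not a real API:

```python
# Stubbed tool calls illustrating the dependency structure; the function
# names and return values are placeholders, not a real API.
def get_customer_info_by_email(email):
    return {"user_id": "c-123", "name": "Jane Doe", "subscription_status": "active"}

def list_open_issues_by_user_id(user_id):
    return {"open_issues": [{"id": "i-9", "summary": "Billing error"}]}

def get_agent_profile_by_email(agent_email):
    return {"role": "support_agent", "experience_level": "senior"}

# Steps 2 and 4 are sequential: user_id comes from the first response.
customer = get_customer_info_by_email("jane@example.com")
issues = list_open_issues_by_user_id(customer["user_id"])

# Step 6 is independent: it only needs the agent email from ticket metadata,
# so it could run in parallel with the chain above.
agent = get_agent_profile_by_email("sam@example.com")

# The evaluation then relies on explicit fields from each response.
evidence = {
    "open_issues": issues["open_issues"],
    "agent_role": agent["role"],
    "experience_level": agent["experience_level"],
}
```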

Common mistakes

Avoid these setup problems:

  • Adding tools with vague names

  • Writing instructions that describe only the action

  • Omitting where a parameter comes from

  • Repeating tool descriptions inside criterion instructions

  • Chaining tools without naming the dependency field

  • Adding tools whose required parameters cannot be derived

Troubleshooting

If a tool is not working as expected, check these first:

  • Tool is not called: The parameter source is unclear.

  • Wrong tool is chosen: The tool name is too broad.

  • Weak evaluation output: The instructions do not describe the important response fields.

  • Chained call fails: The previous response field is not named explicitly.

Rules of thumb

Keep these rules in mind:

  • If two tools do not depend on each other, AutoQA can call them in parallel.

  • If one tool depends on another, make the dependency explicit.

  • Up to 5 sequential dependency levels are supported.

  • If a required parameter cannot be derived from the ticket or a prior response, the tool may not belong in that criterion.
