Best Practices
Start with the Foundation
Define your quality dimensions: Identify 5-10 key areas you want to evaluate (e.g., compliance, empathy, problem-solving)
Create criteria for each dimension: Start with broad criteria, then refine based on data
Set up custom fields: Identify business metadata you need to track (product areas, issue types, markets)
Build your first scorecard: Combine criteria into a complete evaluation template
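The foundation steps above can be pictured as a simple data model: criteria belong to quality dimensions, and a scorecard combines criteria plus the business metadata you track. This is an illustrative sketch only; the class and field names here are hypothetical, not the product's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative data model only -- not the product's real schema.

@dataclass
class Criterion:
    name: str
    dimension: str          # e.g. "compliance", "empathy", "problem-solving"
    description: str        # document why this criterion exists
    auto_qa: bool = False   # enable AI evaluation once checks are objective

@dataclass
class Scorecard:
    name: str
    criteria: list[Criterion] = field(default_factory=list)
    custom_fields: list[str] = field(default_factory=list)  # e.g. product area, market

    def add(self, criterion: Criterion) -> None:
        # Keep scorecards maintainable: 8-15 criteria is typically enough.
        if len(self.criteria) >= 15:
            raise ValueError("Consider splitting the scorecard instead of adding more criteria")
        self.criteria.append(criterion)

card = Scorecard("Support QA", custom_fields=["product_area", "issue_type", "market"])
card.add(Criterion("Greeting used", "empathy", "Agents should open with a personalized greeting"))
```

Keeping each criterion tied to a named dimension makes it easier to reuse the same criterion across several scorecards later.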
Iterate Based on Data
Run initial evaluations: Manually evaluate 50-100 tickets to establish a baseline
Enable AutoQA: Turn on AI evaluation for criteria with clear, objective checks
Refine instructions: Use the Refine button and manual corrections to improve AutoQA accuracy
Adjust pass rates: Raise or lower pass-rate targets based on actual performance data so they reflect your quality goals
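The baseline from your 50-100 manual evaluations boils down to a per-criterion pass rate. As a sketch, assuming evaluation results can be exported as per-ticket, per-criterion pass/fail records (a hypothetical shape, not the tool's actual export format):

```python
from collections import defaultdict

# Hypothetical export shape: one record per (ticket, criterion) evaluation.
evaluations = [
    {"ticket": "T-1", "criterion": "Greeting used",   "passed": True},
    {"ticket": "T-1", "criterion": "Policy followed", "passed": False},
    {"ticket": "T-2", "criterion": "Greeting used",   "passed": True},
    {"ticket": "T-2", "criterion": "Policy followed", "passed": True},
]

def pass_rates(records):
    """Return the per-criterion pass rate (0.0-1.0) from manual evaluations."""
    totals, passes = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["criterion"]] += 1
        if r["passed"]:
            passes[r["criterion"]] += 1
    return {c: passes[c] / totals[c] for c in totals}

baseline = pass_rates(evaluations)  # e.g. {'Greeting used': 1.0, 'Policy followed': 0.5}
```

A criterion whose baseline sits far below its target is a candidate for either coaching or a lowered target, depending on whether the gap reflects agent behavior or an unrealistic goal.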
Keep It Maintainable
Limit the number of criteria: 8-15 criteria per scorecard are typically sufficient
Reuse criteria across scorecards: Don't duplicate criteria unnecessarily
Document your decisions: Use the description fields to explain why criteria or scorecards exist
Review quarterly: Reassess whether each criterion is still relevant and update scorecards as your processes change
Leverage AutoQA Effectively
Start with manual evaluation: Understand what good looks like before automating
Reference knowledge base articles: Link policies and procedures to improve AI accuracy
Be specific in instructions: The more detailed your AutoQA instructions, the better the results
Monitor and correct: Regularly review AutoQA outputs and correct errors to improve the system
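One concrete way to monitor AutoQA quality is to measure how often its verdicts match human corrections. The record shape below is a hypothetical sketch of paired AI/human verdicts, not the product's data format:

```python
# Hypothetical records pairing an AutoQA verdict with a human reviewer's verdict.
reviews = [
    {"criterion": "Policy followed", "auto": True,  "human": True},
    {"criterion": "Policy followed", "auto": True,  "human": False},
    {"criterion": "Tone",            "auto": False, "human": False},
    {"criterion": "Tone",            "auto": True,  "human": True},
]

def agreement_rate(records):
    """Share of evaluations where AutoQA matched the human reviewer."""
    agreed = sum(1 for r in records if r["auto"] == r["human"])
    return agreed / len(records)

rate = agreement_rate(reviews)  # 0.75 for the sample above
```

A falling agreement rate on a criterion is a signal to refine that criterion's AutoQA instructions or link additional knowledge base articles.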
Support Your Team
Clear evaluator instructions: Even with AutoQA, human evaluators need guidance
Training on criteria: Ensure evaluators understand what each criterion means
Consistent standards: Use root causes and knowledge base articles to maintain consistency
Transparent communication: Share scorecard changes and the reasoning behind them