Genesys Cloud – AI-driven quality management scoring and assessment updates
| Announced on (YYYY-MM-DD) | Effective date (YYYY-MM-DD) | Aha! idea |
|---|---|---|
| 2026-01-12 | - | - |
In a future release, Genesys Cloud will update the large language models (LLMs) that power Genesys Virtual Supervisor’s AI scoring and assessment capabilities.
The update introduces newer, fine-tuned models that change how AI scoring interprets and reasons over quality evaluation questions. In internal testing, the updated models have shown potential for improved consistency and broader question coverage compared with the models used today. As a result, customers may observe differences in how evaluation questions are answered and scored.
What will change
- Updated interpretation of evaluation questions – AI scoring can better interpret evaluation questions, which can lead to more accurate answers.
- Changes in AI-generated answers and scores – Quality analysts may see improved alignment between AI-generated results and expected evaluation outcomes.
What existing AI scoring customers should expect
Because the updated models process evaluation criteria and interaction evidence differently, AI-generated results may change after the update. These differences do not necessarily indicate errors, but rather reflect changes in how the models interpret questions and supporting information.
To help maintain confidence in AI-assisted scoring, Genesys recommends that existing AI scoring customers:
- Review evaluation forms to ensure that question wording, answer options, and scoring logic clearly reflect intended quality criteria.
- Validate AI-generated results following the update, especially for high-impact or compliance-related questions (one possible approach is sketched below).
- Adjust evaluation forms if needed to maintain alignment with business objectives and quality standards.
These steps can help teams understand updated model behavior and ensure that AI-assisted scoring continues to support quality evaluation needs.
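For the validation step above, one possible approach is to pull a window of recent evaluations through the Genesys Cloud Platform API and flag AI-scored evaluations for manual spot-checking. The sketch below is a minimal example, not a definitive implementation: the `/api/v2/quality/evaluations/query` endpoint, its query parameters, and the response fields (`entities`, `answers.totalScore`) are assumptions to verify against the Genesys Cloud Developer Center documentation for your region.

```python
import os
import requests

# Minimal validation sketch: fetch recent evaluations so AI-generated
# scores can be spot-checked after the model update.
# ASSUMPTIONS: the query endpoint path, its parameters, and the response
# fields below should be verified against the Genesys Cloud API docs.

BASE_URL = "https://api.mypurecloud.com"   # adjust for your region
LOGIN_URL = "https://login.mypurecloud.com/oauth/token"


def get_token(client_id: str, client_secret: str) -> str:
    """OAuth client-credentials grant (standard Genesys Cloud auth flow)."""
    resp = requests.post(
        LOGIN_URL,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def fetch_evaluations(token: str, start: str, end: str) -> list:
    """Query evaluations in a time window (assumed endpoint and params)."""
    resp = requests.get(
        f"{BASE_URL}/api/v2/quality/evaluations/query",
        headers={"Authorization": f"Bearer {token}"},
        params={"startTime": start, "endTime": end, "pageSize": 100},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("entities", [])


if __name__ == "__main__":
    token = get_token(os.environ["GC_CLIENT_ID"], os.environ["GC_CLIENT_SECRET"])
    evaluations = fetch_evaluations(
        token, "2026-01-01T00:00:00Z", "2026-01-31T23:59:59Z"
    )
    # Flag low-scoring evaluations for manual review; the score field
    # name and the 80% threshold are illustrative assumptions.
    for ev in evaluations:
        score = ev.get("answers", {}).get("totalScore")
        if score is not None and score < 80:
            print(f"Review evaluation {ev.get('id')}: totalScore={score}")
```

Credentials are read from environment variables so the sketch stays free of hard-coded secrets; substitute your organization's regional login and API hosts as needed.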
Why this change is being made
- Model modernization – Keeps AI scoring aligned with ongoing advancements in large language model technology.
- Improved consistency over time – Fine-tuned models are intended to reduce variability across similar evaluations.
Who benefits
- Supervisors – Continue to receive automated scoring support for reviewing behavioral and compliance criteria.
- Quality analysts – Gain updated AI capabilities that may reduce manual effort, while retaining the ability to review, validate, and adjust results.
- Contact center managers – Maintain access to AI-assisted quality insights for trend analysis and operational decision-making.