How AI Analytics Came Up Short
The main selling point of AI-automated QA programs is their claim to measure 100% of agent recordings. This is sold as unquestionably superior to traditional call center QA programs, which are implied to be old-fashioned because they score only a few calls per agent per month.
And while this makes for perfect marketing messaging (who could argue against 100% scoring?), the premise is entirely wrong. The volume of scoring is the least relevant and least impactful component of QA and training.
While contact centers are often dissatisfied with the results of their internal QA, the number of measurements is not the reason. The number of calls contact centers traditionally score is exactly the right amount.
The problem is the performance parameters.
Performance parameters are what dictate agent behavior; the amount of scoring is incidental. Traditional parameters such as opening, probing questions, politeness, warmth, service quality, rapport building, issue resolution, active listening, and closing are far too ambiguous to effectively teach or manage agent performance.
Ineffective performance parameters are the reason QA so often fails, and no increase in the number of measurements makes any difference.
And these are exactly the types of parameters automated AI QA programs use in their measurements. With such parameters, measuring 100% of calls affects agent behavior no differently than measuring a few calls. The parameters are ineffectual and fail to significantly impact agent performance either way.
Analytics dismisses the traditional QA approach as old-fashioned and ineffective while producing the same traditional program: just an automated version of it.
BCI's proprietary performance curriculum and groundbreaking scoring system combine to produce the most contemporary and effective training product in the call center industry. We teach agents how to deliver flawless customer service and hold them accountable for executing it in every call they handle each day.