Understanding Small Sample Size


Oh, poor small sample size. Defenseless against the big dogs of numerical logic. After all, how could anyone defend small sample sizes versus larger sample sizes? More is always better, right?


Well, actually, small sample sizes are routinely more effective than larger ones, and it is all very logical.


(Note: The term sample size refers to the number of times an organization measures each of their agents' performances per month. Some do 3, some do 10, some use software that "measures" every call).


The purpose of sampling an agent's performance is to provide a training opportunity. Sampling is training, and its purpose is to maintain a standard or effect specific changes.


. . . . .


To illustrate how small sample sizes work, I will use the small dog in the picture above: my 3-year-old Yorkie (who has quite the personality), Mia.


Let's suppose I hire two trainers for Mia. The first, Peter, offers 60 training visits (samples) per month: 2 per day, 30 minutes each. His training curriculum consists of teaching Mia to shake, sit, and come when called. Sure enough, by the 3rd visit Mia has it all down, and by the 60th, Mia can shake, sit, and come when called better than any dog in the world.


Up next is Janice. She offers only 3 visits (or samples) per month, 2 hours each.


Janice's training curriculum consists of teaching Mia the following: shake, sit, come, fetch, roll over, heel, release, bring it, ring a bell, down, stay, leave it, take it, bark, dance, and bring Daddy a bag of potato chips. Sure enough, by the end of the third session, Mia has all 16 commands down perfectly.


Clear winner: 3-sample Janice. Not the winner: 60-sample Peter.


Lesson: The quality and effectiveness of sampling is determined not by the number of samples but entirely by the training content of the samples.


Typically, training or software programs that tout large numbers of samples also provide little or no training content. They sample everything, teach little and have practically no effect on agent performance.


. . . . .


But the obvious question that still lingers is, "Even if the training content is superior, how can you be sure a person will consistently execute it if you sample their performance only 3 times per month?"


. . . . . . .


To answer that question, I will now explain perhaps the most overlooked aspect of how call center agents operate:


Our second fictional case study focuses on Carol, a scheduler for San Mateo Hospital in Jordan Lakes, Florida where she has worked for 8 years.


In a typical day, Carol schedules 23 mammograms, 3 bone densities, 4 EEGs, 3 physical therapies (including vestibular and pelvic floor) and 10 sleep tests.


As with any occupation of continuous repetition, Carol has a routine. She handles every scheduling call (scheduling makes up 95% of her calls) in fundamentally the same way. She has it down, she has her style, and she handles each of her calls in the same manner.


If a manager were to pull either 3 or 50 of Carol's recordings within a month and apply a specific performance measurement to all of them, the outcomes would be the same either way. It is simply unnecessary to sample an agent's performance more than 3 times per month, a conclusion I have drawn from closely studying agent behavior for 25 years.
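To put a number on this intuition, here is a minimal simulation (my own illustration, with hypothetical scores, not data from any real agent). If an agent's per-call quality scores cluster tightly around a personal baseline, a 3-call sample and a 50-call sample land on nearly the same average:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Hypothetical routinized agent: per-call quality scores vary only
# slightly around a personal baseline of 90 (out of 100).
def score_call():
    return random.gauss(90, 2)

# One month of calls (roughly 43 scheduling calls per workday).
month_of_calls = [score_call() for _ in range(900)]

true_mean = sum(month_of_calls) / len(month_of_calls)
small_mean = sum(random.sample(month_of_calls, 3)) / 3
large_mean = sum(random.sample(month_of_calls, 50)) / 50

print(f"full month average: {true_mean:.1f}")
print(f"3-call sample:      {small_mean:.1f}")
print(f"50-call sample:     {large_mean:.1f}")
```

All three averages come out within a point or two of each other, because the call-to-call variation is small to begin with. The extra 47 samples buy almost no additional information about how Carol handles a typical call.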


. . . . .


To summarize: sampling is training, and the quality and effectiveness of any training program is based on its training content, not its number of samples. Furthermore, agents handle each of their calls in the same manner, with only slight and insignificant variation, so a 3-sample survey of their typical calls, spread over the course of a month, offers a very accurate representation of their performance as a whole.


So now that the big dogs of numerical logic have been properly kenneled, it is time for me to take Mia for a walk as I enjoy my potato chips. "Ruff! Ruff!"