Measuring Call Quality: Are you really listening?
When customers get in touch with your contact center, they expect you to listen. But listening goes further than just the agent who takes the call. Co-listening, otherwise known as quality call monitoring or transaction monitoring, helps ensure that customers are truly heard and increases customer satisfaction.
World-class customer service teams like Zappos use co-listening to get a window into the quality of their support and uncover the insights they need to make improvements.
Through co-listening, you can get a more holistic picture of agent performance beyond productivity measures, pinpoint large-scale issues, and highlight best practices from your top agents that can then be spread to the rest of the team. You can also reinforce an organization-wide commitment to excellence because people do better when they know they’re being watched.
In the end, better quality means happier customers and lower support costs. The benefits of quality call monitoring are clear, but for many teams, getting started can seem daunting. Our nine-part framework breaks it down into an easy-to-follow roadmap to quality monitoring success.
Step 1: Assemble the team
A job well done starts with choosing the right tools. Or in this case, the right team. Someone needs to own your quality function, and ideally that person shouldn’t be an existing manager.
Existing managers need to manage. It can also be more difficult for people managers to take a completely dispassionate approach to scoring interactions because they have pre-existing biases based on their relationships with their employees. Having separate quality managers means that managers can focus on other competencies and that the monitoring process is more objective.
Quality managers need to have a certain disposition and be highly attentive to detail. In the DISC profiling system, individuals scoring high on Conscientiousness and Steadiness would be a perfect fit.
Step 2: Define clear processes
After your team has been formed, take some time to make decisions around how quality monitoring will work. In particular, you should consider and have consensus on:
- Monitoring: How will we choose calls? How many calls can we monitor per day?
- Calibrations: How can we keep improving the consistency of our ratings? How often will we assess our own accuracy?
- Quality metrics: How do we pass feedback to agents and managers? What happens when employees make different types of errors (e.g. critical vs. non-critical)?
Processes should be clearly documented before beginning your monitoring program. One great method of process documentation is to create flowcharts with a tool like Lucidchart.
Step 3: Create forms
Design quality monitoring forms to track call attributes consistently. To get the most well-rounded quality form, gather input from different stakeholders across your team, including agents and managers. Making it a collaborative process also helps ensure buy-in from everyone who will be involved.
In developing your form, you can use external benchmarks, but it’s also important to tailor it to your own support philosophy and what’s important to your team and customers. Dig into your key drivers for satisfaction and dissatisfaction. Ask your team and review customer feedback from things like your NPS, Customer Effort, and CSAT surveys.
Break these drivers down into attributes. Attributes are what will be scored, ideally in a binary manner (e.g. pass/fail). We define five different types of attributes:
- Customer Critical - Known: Things that are visible and very important to the customer, for example, providing accurate information.
- Customer Critical - Unknown: Things that are not visible to the customer, but that are important, for example, an agent inputting the correct information into an order management system.
- Business Critical: Attributes that impact other parts of the business, like revenue generation through upselling.
- Compliance Critical: This covers customer privacy, regulation, and other legal requirements.
- Non-critical: Attributes that are important, but will not necessarily ruin the interaction. These should be implemented after critical errors are at an acceptable level.
From there, break your attributes down into sub-attributes that help you determine the score. In other words, sub-attributes are the reasons that an attribute would be categorized as either pass or fail. For example, for the attribute of “efficiency,” a sub-attribute might be “failure to follow documented process.”
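To make this concrete, here is a minimal sketch of how a binary quality form could be represented in code. The attribute names, categories, and sub-attributes below are illustrative examples, not a prescribed form:

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    category: str            # e.g. "Customer Critical - Known", "Non-critical"
    sub_attributes: list     # reasons this attribute would be marked as a fail

# Hypothetical form entries for illustration only
form = [
    Attribute("Accurate information", "Customer Critical - Known",
              ["provided wrong policy details", "quoted wrong price"]),
    Attribute("Efficiency", "Non-critical",
              ["failure to follow documented process"]),
]

def score(attribute: Attribute, observed_failures: list) -> str:
    """Binary scoring: an attribute fails if any sub-attribute failure was observed."""
    return "fail" if observed_failures else "pass"

print(score(form[1], ["failure to follow documented process"]))  # fail
print(score(form[0], []))                                        # pass
```

Keeping each attribute pass/fail, with sub-attributes as the recorded reasons, makes scores easy to aggregate later in reporting.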
Then, map attributes over the different phases of a transaction. We define five phases:
- Welcome: Greeting the customer to create a good first impression
- Discovery: Uncovering the reason for the transaction
- Up And Cross Sales: Addressing any opportunities to upsell or cross-sell based on learnings from the discovery phase
- Summary: Summarizing, presenting a solution, and ensuring there are no other needs
- Closure: Confirming the customer is satisfied
Develop a workflow for each of your customer touchpoints: call flow, email flow, and so on.
Once you’ve defined your attributes, sub-attributes, and workflow phases, you can finalize your form.
Step 4: Calibrate your team
Without a well-calibrated quality team, you’ll face data integrity issues, lack of credibility within the organization, and an inability to use data to drive improvements. Needless to say, calibration is extremely important.
Calibration increases your inter-rater reliability, which is the degree of agreement among your raters. If your team is rating things differently, then either your form needs to be refined or your team needs to be retrained.
To calibrate, team members should each rate the same transaction. Then a third party, like a quality expert or senior manager, should compare scores and highlight differences.
In addition to calibrating amongst your team, it’s also important to calibrate with your customers. If a customer’s issue isn’t resolved, then the interaction should not have passed. Make sure to regularly check scored transactions against customer feedback on those transactions.
Step 5: Train leaders and agents
Before diving into monitoring, make sure your team understands how they’ll be scored and how they can access results. Hold training sessions and provide everyone with the monitoring form. This can be done through an adaptive learning platform to increase efficiency. Show them examples of excellent and poor quality, and take the opportunity to check for organizational alignment.
Agents should understand how they’re being scored. Managers should understand how they’ll be receiving information from quality managers that they can use to coach their agents. Senior management should also work to make sure the quality team and people management teams are set up to work in harmony. Quality managers should be guided on how to effectively and tactfully pass along critical feedback, and team leaders should be primed to receive feedback gracefully.
Step 6: Start monitoring
At this point, your quality team should be ready to start listening to and rating calls, following the standardized and consistent processes you laid out in Step 2. As for how to monitor, we recommend choosing calls randomly, for example, choosing every 10th transaction or using a randomizer.
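Both selection approaches mentioned above are easy to script. This sketch uses made-up transaction IDs to show systematic sampling (every 10th transaction) alongside simple random sampling:

```python
import random

# Illustrative data: one day's worth of transaction IDs
transactions = list(range(1, 201))

# Systematic sampling: take every 10th transaction
systematic_sample = transactions[9::10]

# Simple random sampling: pick 20 transactions at random
random_sample = random.sample(transactions, 20)

print(len(systematic_sample), len(random_sample))  # 20 20
```

Random selection matters because agents who know which calls will be scored may behave differently on those calls.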
Identify the tools an effective quality program requires, and understand the common challenges of quality tools so you can address them when choosing one. You don’t need dedicated co-listening or transaction monitoring software to implement our nine-step framework, but it will help you achieve your goals; at high monitoring volumes, it’s best to avoid too much manual work.
Step 7: Reporting
Reporting is how your quality team will communicate results to agents and managers. It’s also how you’ll identify opportunities for improvement and strengths.
Customer critical error accuracy, business critical error accuracy, and compliance critical error accuracy should be measured by unit, while non-critical error accuracy should be measured by opportunity. This is because it typically only takes one critical error for a transaction to be rated as a failure, while it takes numerous non-critical errors for a transaction to fail.
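The two calculations differ in their denominator. A per-unit metric divides by the number of transactions; a per-opportunity metric divides by every chance to make that class of error. The numbers below are invented purely to illustrate the arithmetic:

```python
# Illustrative data: 100 monitored transactions, each checked against
# 5 non-critical attributes (so 500 non-critical opportunities)
transactions_monitored = 100
transactions_with_critical_error = 8

non_critical_opportunities = transactions_monitored * 5
non_critical_errors = 35

# Critical accuracy is measured per unit (per transaction)...
critical_accuracy = (
    (transactions_monitored - transactions_with_critical_error)
    / transactions_monitored
)

# ...while non-critical accuracy is measured per opportunity.
non_critical_accuracy = (
    (non_critical_opportunities - non_critical_errors)
    / non_critical_opportunities
)

print(f"{critical_accuracy:.0%}")      # 92%
print(f"{non_critical_accuracy:.0%}")  # 93%
```

Measuring non-critical errors per unit instead would overstate failure, since a single slip would count the same as a badly handled call.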
One of our favorite ways to dig into opportunities for improvement is by creating Pareto charts that show how different sub-attributes contribute to the overall percentage of failed attributes. For example, 36.36% of failures to attempt a new sale might be a result of the agent not asking an uncovering question.
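The Pareto breakdown itself is just failure counts per sub-attribute, sorted and expressed as a share of the total. This sketch uses hypothetical counts chosen so one reason accounts for 36.36% of failures, matching the example above:

```python
from collections import Counter

# Hypothetical failure counts per sub-attribute for the
# "attempt a new sale" attribute (illustrative data only)
failures = Counter({
    "did not ask an uncovering question": 4,
    "missed cross-sell cue": 3,
    "skipped offer script": 2,
    "other": 2,
})

total = sum(failures.values())  # 11 failed attributes in total
for reason, count in failures.most_common():
    print(f"{reason}: {count / total:.2%}")
```

Sorting by count (what `most_common()` does) is the essence of a Pareto view: the top one or two reasons usually account for most failures, telling you where to focus first.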
Along with errors, don’t forget to report on the triumphs. Throughout the monitoring process, you’ll see what’s working well for agents. Use these highlights to build out your best practices and reinforce your team’s strengths.
Step 8: Create action plans
Armed with the information uncovered through reporting, your team can start turning insights into action.
Start by figuring out the root cause of errors. One way to do this is using 5 Whys, a powerful exercise for drilling into exactly what’s causing things to go wrong. In a nutshell, keep asking “Why?” until you reach an answer that consists of something that you can fix, like a broken process or a behavior that can be modified.
Then work to create goals and use them in employee performance reviews to help drive improvements. The best goals are SMART:
- Specific: Make it clear and specific.
- Measurable: Make some part of the goal quantifiable.
- Achievable: Although a stretch, the goal should be reasonably within reach.
- Relevant: The goal should make sense alongside other goals.
- Timely: Set a target date for completion.
And while employee development is certainly important, equally important is making improvements in process, for example, updating confusing policies or processes, tightening up communications on updates and changes, or fixing systems and tools. Be sure to address both individual and process related errors when putting together improvement plans.
Step 9: Perform continuous analysis
Quality monitoring is a continuous improvement process. There will always be new agents, new things to learn, and more room to get better. Keep your quality efforts going to make sure your customers continue to receive top-notch support.
Want to learn more?
Download our e-book, Beyond Co-listening - 9 fasttrack steps towards a killer CMX program, for even more in-depth advice about starting a quality program.