It’s been a few months since our previous post about voice of the customer (VoC) programs being inadequate indicators of customer experience in general and customer satisfaction in particular. We recently came across additional data on this topic that we thought was important to share with our readers.
During a recent analysis for one of our large customers, who is adamant about using VoC programs to gauge customer satisfaction, we made some striking discoveries. The first thing we noticed was that their post-call surveys were getting a paltry 0.25% completion rate. Yes, you read that right: out of millions of monthly calls, only one quarter of one percent of the customers presented with the survey actually completed it. How is that a representative sample of the entire customer base? And how much money has been spent gathering this utterly useless information?
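To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. The monthly call volume is our own assumption (the analysis above only established "millions"); the 0.25% completion rate is the figure we actually observed.

```python
# Back-of-the-envelope arithmetic on the survey numbers above.
# The call volume is an illustrative assumption; only the 0.25%
# completion rate comes from the actual analysis.
monthly_calls = 3_000_000
completion_rate = 0.0025  # one quarter of one percent

responses = int(monthly_calls * completion_rate)
silent = monthly_calls - responses

print(f"Completed surveys per month: {responses:,}")        # 7,500
print(f"Customers with no voice in the data: {silent:,}")   # 2,992,500

# Note: 7,500 responses would be a perfectly usable *random* sample.
# The deeper flaw is self-selection: only customers motivated enough
# to answer are counted, so the sample is biased, not merely small.
```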
Even though this was not part of our engagement, we decided to step in and take charge of the situation. We took the survey data and did two things with it:
- We looked at the entire experience leading up to the survey: not just the last touch, which is what VoC programs usually measure, but every interaction across all touchpoints that preceded this final event. Even though the survey questions were geared toward the agent-to-customer interaction, we wanted to see whether any upstream events had led to that interaction in the first place, and we found many breakdowns in the complete experience. The call-center agents were not to blame for the poor satisfaction scores; their performance had already been optimized, they knew the scripts and the products, and they could answer any question. The interactions leading up to the agent experience were usually the culprit. (A sketch of this journey reconstruction follows the list.)
- We took the experience journeys of the few customers who did respond to the survey and matched them against the entire customer base of tens of millions of other customers. And guess what we found? Many more customers had the same kinds of experiences but never filled out a survey, and they were heading down the same path as the unsatisfied customers who did. We even went a step further and analyzed historical data to see how many of these silent customers had churned; many of them had. (A second sketch below shows this matching step.)
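For readers who want to see what the first step looks like in practice, here is a minimal sketch in Python with pandas. The file names, column names (customer_id, timestamp, touchpoint, score), and the 30-day lookback window are all hypothetical stand-ins, not the client's actual schema.

```python
import pandas as pd

# Hypothetical inputs: one row per interaction across all touchpoints
# (web, IVR, email, agent call, ...). Names are illustrative.
events = pd.read_csv("interactions.csv", parse_dates=["timestamp"])
surveys = pd.read_csv("post_call_surveys.csv", parse_dates=["survey_time"])

# For each completed survey, pull every interaction in the 30 days
# leading up to it: the whole journey, not just the final agent call.
LOOKBACK = pd.Timedelta(days=30)
merged = surveys.merge(events, on="customer_id")
in_window = (merged["timestamp"] <= merged["survey_time"]) & (
    merged["timestamp"] >= merged["survey_time"] - LOOKBACK
)
journeys = merged[in_window].sort_values(["customer_id", "timestamp"])

# Collapse each journey into an ordered sequence of touchpoints,
# e.g. ("selfcare_fail", "ivr_misroute", "agent_call").
paths = journeys.groupby("customer_id")["touchpoint"].agg(tuple)

# The most common paths ending in a poor score point at upstream
# breakdowns, not at the agent who handled the last touch.
low_scores = surveys.loc[surveys["score"] <= 2, "customer_id"]
print(paths[paths.index.isin(low_scores)].value_counts().head(10))
```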
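And a continuation of the same sketch for the second step: treat the journeys of the unhappy respondents as signature paths, find every customer in the base who walked the same path without ever answering a survey, and look back at churn among them. The churn_history.csv file and its churned column are, again, hypothetical.

```python
# Continuing the sketch above: journeys of unhappy respondents become
# "signature" paths we can look for across the whole customer base.
signatures = set(paths[paths.index.isin(low_scores)])

# Build journey sequences for every customer in the event log, whether
# they answered a survey or not. (A real pipeline would window these
# the same way as above; full history keeps the sketch short.)
all_paths = (
    events.sort_values(["customer_id", "timestamp"])
          .groupby("customer_id")["touchpoint"]
          .agg(tuple)
)

# Customers on the same broken path who never told us about it.
same_path = all_paths[all_paths.isin(signatures)]
silent = same_path[~same_path.index.isin(surveys["customer_id"])]
print(f"Silent customers on known-bad paths: {len(silent):,}")

# Historical churn among those silent matches (hypothetical file with
# a boolean 'churned' column).
churn = pd.read_csv("churn_history.csv")
matched = churn[churn["customer_id"].isin(silent.index)]
print(f"...of whom {matched['churned'].mean():.0%} later churned")
```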
Presenting these findings to our customer got us the attention of senior executives. For a company that is focused on customer experience, they were completely blinded by the VoC hoax. The promise of better understanding customers by bombarding them with feedback buttons, emails, and post-call surveys was not showing them the true picture. Our findings caused some internal friction, because all the money spent on these technologies and efforts in the past had not generated valuable results. But everyone quickly understood that the only way to truly gain value from VoC programs is to look at all the data from all interactions. The industry is starting to understand this as well; just look at the M&A activity around Enterprise Feedback Management (EFM) in the past few months (which isn’t really going to solve the problem).
How are you using VoC to understand the complete customer experience? We’d love to hear about it in the comments.