SDC interviewed Claude Guay, CEO of iPerceptions, and Duff Anderson, iPerceptions’ VP of research, about their approach to gathering essential data about customers. What data is most important, and what is most reliable? iPerceptions provides customizable survey tools for businesses ranging from small to enterprise level.
SDC: How do you select the questions for your voice of the customer surveys?
Claude: We’ve been doing this for more than 10 years, so the surveys and questions have evolved into what they are today. We tend to look at the research in four different areas, and we have questions for each of them.
The first aspect of the research is segmentation. We first ask questions to identify the segment of the respondent, and that depends on which site we’re running the research from. For example, if we’re running on a hotel site, we will ask visitors whether they are business or leisure visitors; we will ask the purpose or intent of their visit: was it to book, to do research, or for customer service; we will ask whether they succeeded in the purpose of their visit; we may ask demographic questions; and we will ask an overall satisfaction question as it relates to their website experience. That forms the first aspect, which allows us to segment the respondent.
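To make that structure concrete, here is a minimal sketch, in Python, of how such a segmentation block might be represented. It is purely illustrative: the wording follows the hotel example above, and every name in it is hypothetical rather than anything iPerceptions has published.

```python
from dataclasses import dataclass

@dataclass
class Question:
    """One survey question with its fixed answer choices."""
    key: str
    text: str
    choices: list

# Hypothetical segmentation block for a hotel site, mirroring the
# questions described above: visitor type, visit purpose, task
# success, and overall satisfaction.
SEGMENTATION = [
    Question("visitor_type", "Are you a business or a leisure visitor?",
             ["Business", "Leisure"]),
    Question("purpose", "What was the purpose of your visit?",
             ["Book", "Research", "Customer service"]),
    Question("task_success", "Were you able to accomplish that purpose?",
             ["Yes", "No"]),
    Question("satisfaction", "Overall, how satisfied were you with your "
             "website experience?", [str(n) for n in range(11)]),  # 0-10
]
```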
The second aspect is a framework, a methodology we’ve developed, around what we call the “iPerceptions satisfaction index.” We have a series of attribute questions related to navigation, content, depth, and so on, covering the online experience. They’re very specific questions that allow us to better determine the makeup of the online experience. In that section we ask respondents to rate each attribute on a zero-to-10 point scale, so there are 11 choices. We put descriptors, Good, Very Good, and so on, between the points because we’ve found over the years that this gives us a normal distribution, which is something your readers will appreciate: different cultures respond differently to “Is Good a 7 or a 6?”, and if 5 is in the middle, some cultures will tend to choose it all the time.
SDC: Why is the normal distribution important?
Claude: We actually run modeling on this data, a Bayesian averaging model, which allows us to use those attributes to predict an outcome. If the outcome is “likelihood to return to the site,” “likelihood to purchase,” or “likelihood to refer,” we can run a linear regression that predicts how much improving content or navigation by X percent will move the outcome you’re trying to change. For example, if you’re trying to get a 4% improvement in the likelihood to refer the site, we can use the model to see which attribute will most impact that metric. It’s an outcome metric.
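As a rough illustration of the kind of modeling Claude describes, the sketch below fits an ordinary linear regression from attribute ratings to an outcome metric and reads off which attribute moves the outcome most per rating point. It is a stand-in under stated assumptions: the data is synthetic, and iPerceptions’ actual Bayesian averaging model is not described in enough detail here to reproduce.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: each row is a respondent, columns are 0-10
# attribute ratings (navigation, content, depth). The outcome is a
# 0-10 "likelihood to refer" rating. A real model would use survey data.
rng = np.random.default_rng(0)
attributes = rng.integers(0, 11, size=(500, 3)).astype(float)
outcome = (0.5 * attributes[:, 0] + 0.3 * attributes[:, 1]
           + 0.2 * attributes[:, 2] + rng.normal(0.0, 1.0, 500))

model = LinearRegression().fit(attributes, outcome)

# Each coefficient estimates how much the outcome moves per one-point
# improvement in that attribute -- i.e., which lever to pull to lift
# "likelihood to refer" the most.
for name, coef in zip(["navigation", "content", "depth"], model.coef_):
    print(f"{name}: {coef:+.2f} outcome points per rating point")
```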
The third part of the survey – we call it the “business” part – is very specific to our client. Let’s say that we’re trying to investigate navigation for a business traveler. We will ask specific questions of business travelers who rated navigation less than 5. It could be a series of specific questions on a future product, or on an impression of the brand. Or it could be that we identify a specific demographic and want to find out which other sites they use. So that’s very much a custom part of the survey that we work on with our clients.
The fourth part of the survey is open-ended text. We ask, normally, three questions: What did you most like about your experience? What did you dislike the most? What are your suggestions for improvements?
So those are the four areas, in the order I just described. If a client asks us, “What is the best-of-breed way of running a Voice of the Customer survey?”, this is our answer. We can customize these, but if you ask us for best practices after 10 years, and Duff is responsible for a lot of that research, this is what we’ve come up with.
SDC: This survey is contained in the product you call the webValidator. Do your clients tend to use a combination of your different VOC products?
Claude: Most of them, in terms of absolute numbers, use the 4Q product: four basic questions used by more than 8,500 websites around the world. We have about one hundred clients, big brands like Dell, Lenovo, Fairmont, Ford, and Mercedes, that use the enterprise webValidator.
The free product that thousands of websites are using presents four basic questions: rate the overall website experience; what was the purpose of your visit; did you manage to accomplish the main purpose of your visit, which we call the task completion question and is a simple Yes or No; and a fourth, open-ended question: if you said No to the task completion question, why not?
SDC: You serve the travel and hospitality industry. Give me an example of how a small client would deploy your survey tool.
Claude: Say I came to a small bed and breakfast’s site. The first question is, “What was your overall experience?” Let’s say it was a 7. Then we ask, “Did you come to book a room, research rates, make a reservation, or change or cancel?” They pick one of those as the main purpose of the visit. Then we ask if they were able to do it, and they say Yes or No. If they answer No, we ask why: if they came to book a room and did not book a room, why not? If they answer Yes, we ask what was most positive about their experience.
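That branching is simple enough to sketch in code. The function below is a hypothetical rendering of the 4Q-style flow just described, not the product’s actual implementation; the `ask` callable stands in for whatever mechanism collects an answer.

```python
def run_survey(ask):
    """Walk one respondent through the branching flow described above."""
    responses = {
        "experience": ask("What was your overall experience? (0-10)"),
        "purpose": ask("Did you come to book a room, research rates, "
                       "make a reservation, or change or cancel?"),
        "completed": ask("Were you able to do it? (Yes/No)"),
    }
    # Branch on task completion: ask why on failure, or for the most
    # positive part of the experience on success.
    if responses["completed"].strip().lower() == "no":
        responses["why_not"] = ask("Why not?")
    else:
        responses["positive"] = ask("What was most positive about "
                                    "your experience?")
    return responses

# Example usage, answering from the console:
# print(run_survey(input))
```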
Because we’re doing a random sample of all visitors, we capture both the positive and the negative; we want a balanced view of the customer experience on the website.
Your readers do a lot of quantitative research using Google Analytics and WebTrends and all of these tools. They get all of this quantitative information, but they are lacking the qualitative information. They don’t know why the visitor came. They don’t know how the visitor felt about the experience. They could conclude that a visitor had a good experience because he purchased something, when in reality that visitor was very displeased and is not likely to return or to refer the site to somebody else.
SDC: Is customer behavior on the site an indicator of satisfaction?
Claude: To be very specific, we do not track their behavior. We ask why they behave as they do. There are plenty of tools that track behavior, but there are not a lot of tools that ask questions to understand the “why” of behavior. We call this field Attitudinal as opposed to Behavioral. It’s all about the mindset of the visitor.
Duff: It’s important to recognize that within VOC there are many different solutions. Many things that are simply feedback get labeled as VOC. What iPerceptions is trying to do is bring in a rigorous method so the sample you get is representative of the population as a whole. It’s not a feedback card; it’s not there for anybody to trigger any time they want. It’s a random solicitation, fully branded to the experience and presented on arrival, that those who receive the invitation can complete at the end of their visit.
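One way to get the representative sample Duff describes is to invite each arriving visitor with a fixed probability, independent of anything the visitor does on the site. The sketch below assumes that mechanism; the invite rate and the per-visitor seeding are illustrative choices, not details of the iPerceptions product.

```python
import random

def should_invite(visitor_id: str, rate: float = 0.05) -> bool:
    """Randomly solicit a visitor on arrival.

    Because the draw is independent of what the visitor does, the
    invited group is a random sample of all visitors -- unlike a
    feedback button, which over-represents people with problems.
    Seeding on the visitor ID keeps the decision stable across
    page loads for the same visitor.
    """
    return random.Random(visitor_id).random() < rate
```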
It’s all about intent. The only reason you do research like this is that the perspective of somebody in a real situation is different from what you can create in focus groups, or from what you get from feedback mechanisms, where you typically hear only about problems. It gives you a more strategic view, turning the activity you have on your site into an audience that has an intent and that has had successful or unsuccessful interactions with your brand.
When it comes to the social media world, this is really the beginning of social media: people have real needs, companies offer real solutions, and we’re offering, in a structured way, a communication channel that can help quantify that for the organization. It lends itself to organizations moving on to analyze other forms of information, whether behavioral analytics or even social media metrics.
The VOC solution, as we define it, provides an extremely strong signal in the sense that there is a lot of context to that information. The person has self-identified their purpose, self-identified whether or not they were able to complete it, and whatever they provide through open-ended text is specifically around that context.
You can gather building blocks. You can build libraries to better analyze what we’ll call “noisy feeds” in other social media contexts. We have a very important place in the larger social media world, which is answering “Where does the strong signal come from?” and “How can you move on to noisier media once you have a good understanding of what’s actually happening in your principal space?”
SDC: How does Twitter or Facebook figure into the voice of the customer?
Duff: Where it would be immediately useful is this: you’re seeing trends within your VOC feedback; are they appearing within the social media field? Right now our tool doesn’t incorporate social media directly, but certainly clients are using it indirectly.
Claude: Here’s an example of how clients are using our tool: they’re asking their online population, if you use Facebook, why do you use it? Same thing with Twitter: how often are you on Twitter? So they’re actually doing research on their online population to figure out how it uses social media.
Remember I talked about those custom questions we put in the third module of our solution? Many of our clients are using that module to put in questions around social media usage to their online visitors.
Duff: For instance, in the hospitality sector we’ve looked at when people use social media sites at different stages of the purchase consideration process. A lot of people have gone directly to quantifying social media buzz, and as we all know, that can be very misleading. VOC allows you, in a more structured and controlled environment, to start understanding the social media habits of your actual interested visitors, informing your social media strategy in a meaningful way. I do believe the direct-contact full loop will come as well, but at this point this approach is more in line with where most organizations are, as opposed to jumping ahead and reacting to buzz, which is occasionally interesting but usually misleading.
SDC: Aren’t customers getting more sophisticated in understanding how their voices are heard by the people who are selling to them?
Duff: I believe organizations are attempting to structure in a way to support that…
Claude: …but it’s not quite there yet. The whole idea of measuring buzz and doing sentiment analysis is quite a tricky proposition. You can do sentiment analysis, but how do you measure how good or correct you are? I mean, we know how difficult it is to analyze open-ended text even when you have a context. So in the world of social media tools, you have to first define the context and then do the sentiment analysis. That’s a big challenge in itself.
Duff: What’s the relationship between the influencer and your actual visitor base? And is there any at all, right?
Claude: So we see that social media is here to stay. Right now we’re using our tools to gather complementary information, validating information, and text information. We’ll see how we evolve in terms of integrating social media information.
Duff: I think when you jump directly to social media without having VOC in place you’re missing a serious foundation in how to understand social media in the first place.
Claude: You may be reacting without having the context of your online visitors. Doing one without the other is a dangerous proposition.
SDC: You seem to be leveraging the nitty-gritty information, which comes straight from the customer’s mouth…or keyboard…to reduce the amount of “garbage in.”
Claude: Duff is more politically correct; he calls it “noisy.”
Duff: We’re looking for strong signals before other analytics. Kate Niederhoffer is always talking about the importance of a strong signal, and I think we really have the right answer there. The difference is, it’s controlled, and there’s a reason that it’s representative. It’s not the same as a Feedback button, which is also interesting information. It will find you broken links; it will point to tactical issues. But it doesn’t inform at the strategic level in the same way.