
How to “Quantify” Feedback and Avoid Tunnel Vision

by Alex Birkett

While teams that actively collect user feedback are ahead of the game, some want to turn art into science by quantifying the user feedback they’ve collected. 

Quantitative tests are great, no doubt, but they are limited. It’s crucial to ‘quantify’ correctly and recognize potential pitfalls.

This post will walk through a process for quantifying user feedback, as well as some things you should do to avoid tunnel vision as you move forward in your feedback collection.

The Typical Feedback Collection Process

What’s the typical feedback process like? Normally, a company collects customer feedback at set time intervals, ad-hoc, or based on specific interactions or behaviors.

Each of these feedback points can give you different insights and answers. For instance, ad-hoc feedback, the kind your visitor actively gives you at their discretion, will often surface CX frustrations and product considerations.

Releasing feedback requests at specific, spaced-out points in time, however, often results in less user frustration and can give you interesting insight into customer satisfaction over time. For instance, Adobe Photoshop asks you every few months for your NPS score:

Post-purchase and behavioral-based triggers can give you acute feedback on specific processes and actions. For any conversion optimization client, at least in the ecommerce space, it’s worth setting up a post-purchase survey to gather immediate thoughts, reactions, and ideas for improvement.

All of these points are important, but all are slightly different in their methodology and the wisdom they seek to uncover. Some of it is straight qualitative. For instance, if you’re running an on-site poll with visitors who haven’t signed up yet, you may only be interested in Voice of Customer insights so you can better align your landing page messaging. In this case, quantifying feedback may not be a major concern.

However, you may want to bucket responses into a few distinct clusters in order to build a personalization strategy into your website (yes, this is possible). By quantifying and bucketing issues into different categories, you can prioritize which issues to focus on and weigh their relative importance to users. Comparing quantitative and qualitative feedback, Jeff Sauro, Founding Principal of MeasuringU, explains:

“It doesn’t cost more money to quantify or use statistics. It just takes some training and confidence–like any method or skill.”

So, it’s not impossible, and it’s likely easier to add in some quantitative components to your user research than you’d imagine, even using your current feedback collection system.

How to Quantify User Feedback

There are two primary ways to quantify user feedback:

  • Asking a quantitative response-scale question
  • Categorizing (coding) your open-ended responses after collection

1. Asking Response-Scale Questions

The first way is much easier and more straightforward. In fact, it’s one of the primary mechanisms by which we capture both market research and user research today: the survey. Response-scale questions are great, no doubt, but they are simply a good first step on the way to fully understanding the ‘why’ behind the feedback. 

You’ve absolutely seen a quantitative user feedback survey question. Usually, it looks something like this:

This particular survey example is a Net Promoter Score (NPS) survey. NPS is calculated by subtracting the percentage of detractors (customers who would not recommend you) from the percentage of promoters (customers who would recommend you).
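
The arithmetic is easy to sanity-check yourself. Here’s a minimal sketch in Python, using the standard NPS cut-offs (9-10 for promoters, 0-6 for detractors) and invented scores:

```python
# Invented 0-10 responses to the "how likely are you to recommend us?" question
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]

promoters = sum(1 for s in scores if s >= 9)   # 9-10 count as promoters
detractors = sum(1 for s in scores if s <= 6)  # 0-6 count as detractors

# NPS = % promoters - % detractors, reported on a -100 to +100 scale
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS: {nps:.0f}")  # -> NPS: 20
```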

Pascal van Opzeeland, CMO of Userlike, describes the benefits of using NPS here:

“It cuts down to the question of whether the product is good enough to put your own reputation on the line.”

Another example is a simple Likert scale attached to a customer satisfaction question. Something like this:

In this case, you may just average the responses and measure that across time and among cohorts. This allows you to find any anomalies and to course correct if your customer satisfaction drops due to a given product change or experience.
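
As a rough illustration, here’s how that averaging might look in pandas. The months, cohorts, and scores below are all made up:

```python
import pandas as pd

# Made-up 1-5 Likert responses, tagged with when they arrived and
# which signup cohort the respondent belongs to
df = pd.DataFrame({
    "month":  ["2018-01", "2018-01", "2018-02", "2018-02", "2018-02"],
    "cohort": ["free", "paid", "free", "paid", "paid"],
    "score":  [4, 5, 2, 4, 3],
})

# Average satisfaction per month, broken out by cohort; a sudden drop
# in one cell is the anomaly you'd want to investigate
print(df.groupby(["month", "cohort"])["score"].mean().unstack())
```

Charting those cell averages over time makes a dip after a product change easy to spot.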

2. Codifying Open-Ended Responses

The other way to quantify user feedback is by codifying open-ended responses into categorical buckets.

Say you use an online form builder to capture some generic visitor feedback:

You’re going to receive a variety of responses, ranging from praising comments about your wonderful customer support rep, Jim, to extremely angry rants about your frustrating checkout process.

Similarly, a common feedback collection mechanism is a live chat tool. You can gain a lot of insight by looking back at naturally occurring chats on your site. But, again, you’ll likely have a lot of variance in responses, and it’s hard to sift through it all manually. Wouldn’t it be nice to bucket these and quantify their occurrence?

Think like a conversion optimization pro: in almost all instances of user feedback, try to bucket responses. This allows you to prioritize usability issues, build distinct personas and user intent classes, or analyze the sentiment of an experience.

An example from an older CXL blog post describes a hypothetical WordPress theme shop. You read through the response data and arrive at an initial set of persona clusters:

  • A blogger who is starting out and looking for his or her first WP theme (Blogger)
  • Professional web designers purchasing themes for clients’ websites (Designer)
  • Small businesses looking to upgrade their current site (Small-Biz)

Though the survey didn’t explicitly ask for this information, you gleaned it through a question about the visitor’s intent. Classifying and labeling responses this way lets you store them and analyze the other answers in relation to each label. You can start with a basic spreadsheet:

If you’re working at a small scale, you can usually do this manually. Especially when you’re only categorizing one or a few questions, it’s not too difficult.

Jeff Sauro, Founding Principal at MeasuringU, outlines a process for doing so:

“If a user provides a low rating (below a 5), ask them to briefly explain why they gave a low rating. Take these open-ended comments, categorize them and add up the frequency in each group. This process can help you and your stakeholders make more informed decisions about the likely causes of the trouble.”
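
Here’s a minimal sketch of that categorize-and-count step in Python. The comments are invented, and the keyword rules are a crude stand-in for a human reading and coding each response:

```python
from collections import Counter

# Invented open-ended comments from low-rating respondents
comments = [
    "Checkout kept erroring out on the payment step",
    "Couldn't find the pricing page",
    "Payment failed twice before it went through",
    "Navigation is confusing, too many menus",
]

# Crude keyword rules standing in for a human coder; you'd refine
# these buckets as you read through real responses
buckets = {
    "checkout/payment": ("checkout", "payment"),
    "navigation": ("find", "navigation", "menu"),
}

def categorize(comment):
    text = comment.lower()
    for label, keywords in buckets.items():
        if any(word in text for word in keywords):
            return label
    return "other"

# Add up the frequency in each group, most common first
counts = Counter(categorize(c) for c in comments)
print(counts.most_common())  # -> [('checkout/payment', 2), ('navigation', 2)]
```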

Here’s an example image he gave from a recent usability test:

This, however, becomes tedious at scale. At that point, you’ll want to do one of two things. You can either allow visitors to manually tag their feedback, such as with the feature Usabilla offers:

Or, you can run natural language processing (NLP) and try to use machine learning to predict which category the response belongs to.
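
As a toy sketch of that idea (not any particular vendor’s system), here’s a tiny text classifier built with scikit-learn. The responses and labels are invented, and a real training set would need far more examples per category:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of hand-labeled responses to train on
texts = [
    "checkout page crashed at payment", "card was declined for no reason",
    "support rep Jim was fantastic", "quick and friendly help on chat",
    "site is slow on mobile", "pages take forever to load",
]
labels = ["checkout", "checkout", "support", "support", "speed", "speed"]

# Vectorize the text with tf-idf, then fit a simple linear classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict the bucket for a new, unseen piece of feedback
print(model.predict(["my payment failed at checkout"]))  # -> ['checkout']
```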

Stefan Debois, Founder and CEO of Survey Anyplace, summarizes his experience with NLP:

“In our experience, the software did a great job at categorizing text-based customer feedback into sentiment categories – for English texts. For other languages, we found that they were either not supported or the quality of the results was not satisfactory.”

This kind of tooling is increasingly available, but in many cases it can be either pricey or complicated. That said, Wootric seems to have one of the better AI-powered qualitative analysis systems on the market. They use their massive data set of survey responses to automatically cluster and classify feedback into distinct categories. Pretty neat stuff.

Of course, if you’re technically savvy, this type of analysis isn’t out of reach to complete on your own. You can even use a tool like Aylien to help you get it done. They even offer recipes to help you analyze social media and PR sentiment.
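
If you’d rather experiment before committing to a paid API, a quick sentiment pass is doable locally. Here’s a small sketch using NLTK’s VADER analyzer rather than Aylien’s API; the feedback strings are invented:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the VADER lexicon

sia = SentimentIntensityAnalyzer()

feedback = [
    "The checkout process was infuriating",
    "Jim from support was wonderful, thanks!",
]

for text in feedback:
    # 'compound' runs from -1 (most negative) to +1 (most positive)
    score = sia.polarity_scores(text)["compound"]
    print(f"{score:+.2f}  {text}")
```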

Potential Limitations of Quantifying Feedback

Following quantitative feedback can open up a few interesting possibilities, including robust tracking over time and more objective decision making, but there are a few points to consider. 

Relying on imperfect data

Drawbacks are bound to surface when you lean too heavily on the presumed accuracy of quantified user feedback. No data is perfect, and the quality of a response depends on the quality of the question.

Ask the wrong question, or the right question at the wrong time or to the wrong person, and that skews your data. The fact that the data then becomes quantified can hide this inaccuracy.

Unclear data values

It’s also unclear what a quantified user feedback data point actually means. If you’re running an NPS survey, what’s the emotional distance between a 7 and an 8? Quantitative feedback is thus best used in context with other forms of data collection, from your typical web and behavioral analytics to your open-ended qualitative user insights.

Focusing on the wrong things

Furthermore, when you quantify metrics like this, it’s easy to ascribe undue importance and objectivity to them. Like any data point, they need to be taken in context. When you have an objective, numerical data point, it’s sometimes too easy to be led astray by focusing on the wrong thing.

Conclusion

There are limitations in most forms of data collection. No data will ever be perfectly reflective of the truth, so as long as you know that, approach data critically, and use it to guide decisions that you later analyze, it’s hard to go wrong by quantifying some of your user feedback.

Overall, quantifying qualitative user feedback can help you do a few things:

  • Bring objectivity to decision making
  • Track trends over time
  • Segment responses by quantitative variables

However, doing so also suffers from limitations and pitfalls, like:

  • Relying on imperfect data
  • Unclear value denominations
  • Obfuscation of nuance

It’s probably worth quantifying at least part of your user experience feedback, whether you do that with a popular response-scale survey like NPS, or whether you analyze your qualitative responses and bucket them into distinct categories.

The best approach to feedback seems to be to take the best of both worlds, using both quantitative and qualitative insights to make decisions.

 

Article by

Alex Birkett

Alex Birkett is a Growth Marketing Manager at HubSpot. He lives in Austin, Texas, but spends most of his time traveling. Other than growth, data, and conversion optimization, Alex enjoys SUP yoga and a good karaoke session.
