Asking Effective Questions: Four Tips for Getting the Right Answers
It stands to reason that asking bad questions will often lead to low-quality answers. And although we may have had teachers growing up who’d say, ‘there are no bad questions,’ there definitely are, at least when it comes to surveys. To avoid the trap of bad questions and bad answers, here are four tips to help you get the most out of your surveys.
Understand what you want to know
So you ran a survey, and now the customer feedback and insights are coming in. The problem you, like many companies, are likely to run into is that the information doesn’t adequately address the problem you’re trying to solve. The reason: you didn’t have a specific enough goal in mind when you began writing the survey.
If you don’t know exactly what you want to learn, it will be difficult to extract the right information from your visitors. And unfortunately, with surveys, you aren’t there to explain and break down the meaning of your questions as people provide their answers. Interpretation depends entirely on how the user reads each question, which makes it crucial to word your questions correctly.
Develop a thesis to understand what you want to know
Consider this example goal: ‘make sure that online channels deliver 50% of all sales by the end of the year.’ This may sound quite specific, but there are a few problems, namely that at this point, we can’t measure what will have happened by the end of the year. ‘Online channels’ is also vague, and what precisely counts as a ‘sale’? To improve your survey, develop a thesis based on something more specific and tailored to your company’s goal.
If you’re monitoring recognition of a new brand, for instance, visitor numbers alone won’t tell you how people found you. Develop a theory about how customers discovered your website, including channels like offline banners or social media. Test this idea by asking visitors how they found you, and make it actionable by adjusting the way you reach those customers.
Say shopping cart abandonment is an issue. Start figuring out why this is the case. Are you charging high shipping costs? Lacking an option for customers to bookmark or save certain items? Are technical difficulties pushing them off the site? Or are they ordering a group product they need to discuss with others first, like a holiday trip for 10 people? Set up a campaign check to brainstorm scenarios like these and anticipate what could go wrong.
After gathering insights, continue to experiment and fine-tune your thesis until the problem is resolved and you have a better understanding of what the customer is experiencing on your site.
Make sure questions are not ambiguous
We’re all unique and interpret information within our own frame of reference. That makes asking questions that are clear to everyone a challenge. The good news: it can be done.
First, skip leading questions. Anything containing an emotion or judgment might be leading. ‘Did you enjoy,’ ‘What was your issue,’ ‘Why didn’t you like,’ ‘What’s the best,’ etc., are all leading questions.
Next, be careful about including facts in questions, as they might be leading as well. Imagine a car company that wants to know whether its new car feature is popular amongst its website visitors, and asks, ‘What do you think of our innovative car feature?’. The feature may indeed be new, but ‘innovative’ carries a broader meaning than just new; it’s understood as cool or exciting, which makes the question leading. It would be better to ask, ‘What do you think of car feature <NAME>?’. And even then, visitors may give a variety of answers. For instance, ‘I love that I can now talk to my car’ (feature oriented), versus, ‘It makes me remember why I chose company X in the first place’ (brand experience oriented).
Overall, be critical of the language you’re using. When drafting, gather as much feedback from colleagues as you can, and listen closely to their responses. Do they all give you roughly the same answers? Are people much more positive or negative when answering one of two variants of a question? Try not to give any explanation when you take them through the survey, and adjust based on their feedback.
Finally, keep questions clear by asking one at a time. Consider ‘Did you enjoy your shopping experience and the items we offer?’. Not only is this question leading (‘did you enjoy’), it also asks two things at once: the items on offer are only part of the shopping experience, alongside ease of use, search and filtering, and visual appeal. Think through each aspect of your questions before finalizing your survey.
Consider length and ask permission the right way
We all have that sassy colleague who stops by and says, ‘Hey, can I ask you a question about X?’. The sassy answer, of course, is ‘No.’
Similarly, ‘Can we ask you a question?’ in a survey runs a high risk of being answered with a quick close of the screen. With the same effort, your visitor could have given you an actual answer: Yes, No, Maybe, a smiley face or a frowny face. It’s better to ask, ‘Do you like our new website?’ directly, and if you want to thank visitors for their feedback, add a sentence beneath the question or thank them after they submit. Using this foot-in-the-door technique might feel a bit rude offline, but online, where short attention spans dominate, your visitors will appreciate it.
Of course, there are exceptions. As a rule of thumb, when you ask more than three questions, first ask visitors whether they have time to answer them.
In addition, be critical of including information that’s irrelevant or unclear to the visitor, like ‘What do you think of this part of the website?’. The user won’t know which ‘part’ of the website you’re referencing; visitors usually don’t have the same sitemap or overview in their head as you do. Instead, try asking what they think of the overall feel of the site, and when asking about a specific page, make it clear their feedback concerns that page.
Finally, examine the answers
Have a set of great questions? Perfect, but don’t forget about the format and wording of the answer options you provide.
For example, a 5-point mood rating in the form of smileys may or may not be equivalent to a 5-point star rating. In the backend, the neutral smiley (in the middle) might equal the third star. But stars are cumulative (selecting three stars fills the first three), while each smiley is a single discrete choice, so the values of the answers are not directly comparable.
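If you do need to compare the two answer formats in your analysis, the mapping between them has to be an explicit decision. Here is a minimal sketch in Python; the scale labels and the choice that the neutral smiley equals three stars are hypothetical, not part of any particular survey tool:

```python
# Hypothetical 5-point scales; the labels and the decision that the
# neutral smiley maps to 3 stars are assumptions made for illustration.
SMILEY_SCORES = {
    "very_sad": 1,
    "sad": 2,
    "neutral": 3,  # we *choose* to treat this as equal to 3 stars
    "happy": 4,
    "very_happy": 5,
}

def smiley_to_score(smiley: str) -> int:
    """Map a categorical smiley answer onto the shared 1-5 scale."""
    return SMILEY_SCORES[smiley]

def stars_to_score(stars: int) -> int:
    """Star ratings are already numeric; just validate the range."""
    if not 1 <= stars <= 5:
        raise ValueError("expected a star rating between 1 and 5")
    return stars
```

Averages computed from the two question types are now comparable, but only because the mapping was chosen deliberately; if the two scales are presented differently in the UI, treating them as equivalent remains an editorial decision, not a fact about the data.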
The same goes for visualization. Radio buttons feel more definitive to a user, as you must select one distinct option, compared to a slider bar that represents a scale. The backend might process the answers identically, but a slider gives more of a ‘gradient’ feeling than an enclosed answer option in the form of a radio button. It might provoke a different response, influencing the outcome of your survey.
Can you prevent that? No, and you don’t have to. Just be aware that not choosing one option may automatically mean choosing another. Have good reasons for your choice of survey setup, wording, visualization, and even the order of the questions, as they all influence your outcomes. Aim for a complete and nuanced insight into your users’ motivations, not an educated guess.
Remember to state clearly what you want to achieve with your user feedback. Try to test a hypothesis and formulate the right questions. Check if the questions aren’t leading, the wording doesn’t unintentionally influence the outcome and if the answers colleagues give make sense. Need more inspiration? Check out what questions you can ask to get the most out of your surveys.