Ever been at a friend’s for dinner and complained the food was under-cooked or over-salted?
Of course not. People are polite. And while that makes for pleasant dinner conversation, it also makes for misleading research results.
“Politeness” is just one of the factors that can lead to weak survey data. So when we are writing surveys, we need to do everything possible to get honest, objective data—even if it’s “bad news.”
Ask For It
In the survey invitation text and opening screen, tell the survey participant that you value their honest opinions. Let them know you want candid feedback. And remind them that their answers are anonymous (to imply that even if they say negative things about your brand or products, there will be no repercussions).
Here is an example of such text, as written for a magazine publisher’s survey:
“We value your opinion, and would like your candid feedback on our recent redesign. Your responses will be kept strictly anonymous and will only be used in aggregate with those of other subscribers.”
Avoid Over-reliance on Agree Scales
A common approach is to craft a series of statements followed by a scale ranging from:

- Strongly Disagree

to

- Strongly Agree
While this is certainly fine in some cases, it can also be a crutch. And it can lead to overly polite responses.
Consider a case where we want to measure behaviors from people who have recently purchased laptop computers. We might ask:
“Please indicate your agreement with the following statements. Considering your most recent purchase of a laptop computer:

· I asked friends or family members for advice.

· I researched online before making a purchase.

· The brand of laptop was an important selection criterion to me.

· The processor speed was an important selection criterion to me.”
For this example, the agreement scale would be shown for each item, or the items would be presented together in a grid.
Research suggests that people will be inclined to agree with such statements more than is actually true. One can debate whether this is due to politeness or convenience, but it doesn’t really matter: we know it happens. So while it can be fine to use some agreement scale questions, don’t overuse them. Instead, consider “Importance” scales or other options. [If you want more information about this specific issue, read this recent article from Vovici’s Jeffrey Henning: http://blog.vovici.com/blog/bid/21978/Acquiescence-Bias-Agree-Disagree-Scale-Best-Practices]
Avoid Leading Questions
It’s the classic husband’s dilemma: the wife twirls in a new outfit and then asks, “Does this make me look fat?” He knows what answer she expects, and he’s going to give it to her.
Sometimes survey questions are also leading. Maybe not quite so blatantly, but still.
Which of these options do you think is best?
Option A: “Do you think our new web store is easier to use than our previous one?”, followed by an agreement scale.
Option B: “How does our new web store compare to our previous version?”
§ It’s easier to use
§ The ease of use is the same
§ It’s more difficult to use
§ No opinion
The correct answer is that Option B is less leading—not just because it avoids an agreement scale, but because the question itself is worded more neutrally.
Randomize Answer Options
Often in surveys we have a list of answer options. Here’s a simple example:
Which of the following types of stores have you shopped in during the past 7 days?
§ “Dollar” store
§ Clothing (Women’s)
§ Clothing (Men’s)
§ None of these
If every survey taker sees this list in the same order, there is a risk that responses will skew toward the items at the top, because survey takers tend to focus more on the beginning of a list than the end. What’s the solution? Randomize the list. Of course, always anchor logical choices (like “None of these”) at the bottom.
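Most survey platforms can randomize answer order for you. If you are rendering a survey yourself, the idea can be sketched in a few lines of Python (the option labels and the `randomize_options` helper below are just illustrative, using the store list from the example above):

```python
import random

def randomize_options(options, anchored=("None of these",)):
    """Shuffle answer options per respondent, keeping anchored
    choices (like "None of these") fixed at the bottom."""
    shuffled = [o for o in options if o not in anchored]
    random.shuffle(shuffled)  # a fresh order for each respondent
    # Re-append the anchored choices in their original order
    shuffled.extend(o for o in options if o in anchored)
    return shuffled

options = ['"Dollar" store', "Clothing (Women's)",
           "Clothing (Men's)", "None of these"]
print(randomize_options(options))
```

Each call returns the same set of options in a fresh order, with “None of these” always last—so position bias is spread evenly across the substantive choices while the logical anchor stays put.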
Honest Data = Good Data
Our goal is to make it easy and comfortable for survey participants to give us honest, candid answers. Anything we can do to make that happen will help ensure we get quality data from our research surveys.