Ask Your Target Market


Get inside the minds of your target market with AYTM's intuitive online survey tool: the first affordable Internet survey software with a built-in consumer panel. Use the online survey creator to build your survey, define the criteria of your target market, and watch the results begin to pour in immediately.

Monday, April 18, 2011

How to Read Data


The idea of reading survey results seems simple enough. Until you do it. Even a “short” survey yields a lot of data. How can you simplify it so you can efficiently identify the most important results?
Let’s tackle this for two common situations.

1: Data from single choice questions
These questions are effective when you want people to prioritize a choice. Because each respondent can select only one answer, the percentages add up to 100%, giving you an effective “rank order” of the options. In some cases, one item clearly stands out at the top of the results, and you have a clear “winner.” But what if some of the items are close? Or if no single item really rises above the others?

The brutal reality is that there may not be a “winning” answer. Imagine a single choice question about color preferences. The result may be that blue, green, and purple are all within 5% of each other (ranging from 20% to 25% each) at the top of the list. You might have preferred to see a winner, but the data show that’s not reality among the population of interest. In this case, we have “bands”: band 1 colors are the top three, band 2 colors are in the middle, and band 3 colors are at the bottom (those selected by fewer than 10%):
Band 1: Blue 25%, Green 21%, Purple 20%
Band 2: Red 12%, Orange 12%
Band 3: Yellow 6%, Black 4%

Single choice questions don’t always yield a “winner,” and that’s ok. Using the “bands” approach can help you get to the “so what” quickly.

How do you define a band? Think about what is actionable for your topic area. For your topic, will a 10% difference in preferences cause you to make a different choice? To add a different feature? To offer a different color option? In most cases, probably not. A 20% difference is more likely to lead to action. A 30% difference? Almost certainly so. For example, if 60% of your customers like feature A and 30% like feature B, that will likely lead you to focus on feature A. If instead, that had been 40% versus 30%, the data alone would not lead you to choose feature A.
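
To make the banding idea concrete, here is a minimal sketch in Python that splits the color results above into bands wherever the drop between adjacent options exceeds a chosen gap. The 5-point gap and the helper function are illustrative assumptions; in practice you would pick whatever gap is actionable for your topic.

```python
# A minimal sketch of the "bands" idea: walk down the ranked results and
# start a new band whenever the drop to the next option exceeds a chosen gap.
# The 5-point gap is an illustrative assumption, not a rule.
results = {"Blue": 25, "Green": 21, "Purple": 20,
           "Red": 12, "Orange": 12, "Yellow": 6, "Black": 4}

def band_results(results, gap=5):
    """Sort options by percentage (descending) and split into bands at large drops."""
    ordered = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
    bands, current = [], [ordered[0]]
    for prev, item in zip(ordered, ordered[1:]):
        if prev[1] - item[1] > gap:   # big drop: close the current band
            bands.append(current)
            current = []
        current.append(item)
    bands.append(current)
    return bands

for i, band in enumerate(band_results(results), start=1):
    print(f"Band {i}:", ", ".join(f"{name} {pct}%" for name, pct in band))
```

Run on the color data above, this prints the same three bands listed earlier.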

If you want to get really precise, you can import your data into a “crosstab” program and do actual significance testing. But in many cases, that can be overkill.  What most people want is to use the data to make decisions.  Not to know with precision if a 5% difference is attributable to minor variability in the data set.
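
If you do want that extra precision, a two-proportion z-test is one common way to check whether a gap like Blue’s 25% versus Purple’s 20% could just be noise. A hedged sketch using statsmodels; the sample size of 400 respondents is an assumption.

```python
# Hedged sketch: is Blue's 25% really different from Purple's 20%, or could it be noise?
# The sample size (an assumed 400 respondents) drives the answer as much as the gap does.
from statsmodels.stats.proportion import proportions_ztest

n = 400                                        # assumed number of respondents
counts = [round(0.25 * n), round(0.20 * n)]    # Blue vs. Purple selections
stat, p_value = proportions_ztest(counts, [n, n])
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```

With 400 respondents, the p-value comes out around 0.09, above the conventional 0.05 cutoff, which is consistent with treating Blue and Purple as members of the same band rather than declaring a winner.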

2: Data from multiple-choice questions
Multiple-choice questions are great for capturing the realistic situation where more than one attitude, behavior, or preference may apply. As an example, let’s imagine you have a question about favorite online news sources. In this hypothetical case, you want to know “all that apply” because of two key hypotheses:
  • Some customers are online news enthusiasts who visit many such sites.
  • Some news sources attract the same people. For example, you think NYT.com and HuffingtonPost.com have a lot of overlap.


So in this case, a “check all that apply” makes sense.

But how would you interpret the data if you offered five news sources and the percentages add up to over 300%? That simply means the average respondent checked more than three of your five options, which supports hypothesis #1.
In this case, you may also want to look at some subgroups. This is what AYTM’s “filter by answers” option is for. For example, take all of the people who selected NYT.com and see what else they checked. Then take all of the people who selected CNN.com and see what they selected. Comparing subgroups can tell more of a story than looking only at the aggregate data. Maybe the NYT subgroup also visits every other site, but the CNN subgroup tends to focus on CNN alone? That would be fascinating!
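
Outside the survey tool, the same subgroup comparison is straightforward to reproduce on exported responses. A minimal sketch, assuming a CSV export with one 0/1 column per news source; the file name, column names, and layout are all assumptions about how your data is structured.

```python
# Hedged sketch of a "filter by answers" style comparison on exported data.
# Assumes a CSV with one 0/1 column per news source; file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("survey_export.csv")                     # hypothetical export file
sources = ["NYT.com", "HuffingtonPost.com", "CNN.com"]    # plus the rest of your answer options

for anchor in ["NYT.com", "CNN.com"]:
    subgroup = df[df[anchor] == 1]                        # everyone who checked the anchor source
    overlap = (subgroup[sources].mean() * 100).round(1)
    print(f"\n{anchor} readers ({len(subgroup)} people): % who also selected each source")
    print(overlap.drop(anchor).to_string())
```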

But what if you offered five news source answer options, the percentages add up to over 400%, and that strikes you as a bit high? In such cases, a follow-up question could have been a safety net. Question 1 would have been a check-all-that-apply question: “Which of the following news sources do you visit at least once a week?” Question 2 could then have been a single choice follow-up: “Which of the following sources have you visited most recently?” That gives you the best of both worlds.

In Either Case
Whether you are working with single choice or multiple-choice questions, be prepared for unexpected results: percentages that are higher than you expected, or lower. And be open-minded. The data may not tell the story you were hoping for, but it will tell a story.
