Getting Accuracy from Survey Respondents
If you’re like the rest of us in the research community, you have noticed and experienced a substantial degradation in the quality of respondents in both online and in-person research. This difficulty has forced us researchers to create much more rigorous and effective screeners to separate the bad from the good. The burgeoning issue is the result of a couple of key influences. One, the economic environment is pushing people to try everything they can to qualify for research studies, whether they are qualified or not. The other is the oversaturation of survey requests. You can’t go out to dinner, get a cup of coffee, take your car in for maintenance and repair or, and this is my favorite, land on a website for three seconds, without being asked to take a survey.
What’s a harried researcher to do? Over the past several years, we have developed robust criteria for our qualitative and quantitative sessions where consumers come to a facility. We instituted stronger screening questions that must be followed up with ‘prove it’ digital images. Once respondents were asked to provide those images, we saw an extremely high rate of non-qualifiers eliminated. It hasn’t been easy, and it has been an evolving process. We had to deal with people going to a store or designer showroom, taking pictures of a product, and submitting them as their own. Those deceptions have been both extremely easy and extremely difficult to catch. In one case, you could tell that the fruit and vegetables in the refrigerator they supposedly owned were plastic! In another, it was difficult to tell from the image that the respondent had taken the picture at her sister’s house; we caught that deception only when she was rescreened at the facility. That led us to require respondents to appear in the digital image, in their home, next to the product of interest. While there are still deceivers, we have found this approach delivers much higher-quality respondents. The simple fact that we are checking up on them has eliminated those who told us, “I didn’t think you would verify this.”
Now for the pesky large-sample online respondents and how to ferret out the frauds. Unfortunately, that is still a developing process. No one method has been great at netting phony responses. Some of the issues stem from survey instrument design and some from misinterpretation of the survey questions, and those two issues are difficult to untangle. What has become extremely important in our quest for accurate answers is to go beyond the simple capture techniques for speeders and straight-liners by writing questions later in the survey that confirm earlier answers. Those have to be written in a way that does not insult the respondent but appears simply to confirm an earlier response. This approach can introduce errors of its own and raises the question of what to do with a respondent who does not answer consistently.
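For readers who clean this kind of data themselves, here is a minimal sketch (not our actual pipeline) of the three basic quality flags discussed above: speeders, straight-liners, and respondents whose later confirmation answer contradicts an earlier one. The field names, grid questions, and time threshold are all illustrative assumptions.

```python
# Illustrative respondent-quality flags: speeder, straight-liner,
# and early-vs-late answer inconsistency. All field names and the
# duration threshold are hypothetical.

SPEED_FLOOR_SECONDS = 120  # assumed minimum plausible completion time
GRID_QUESTIONS = ["q5_a", "q5_b", "q5_c", "q5_d"]  # an assumed rating grid

def flag_respondent(resp: dict) -> list:
    """Return a list of data-quality flags for one survey response."""
    flags = []

    # Speeder: finished the survey implausibly fast.
    if resp["duration_seconds"] < SPEED_FLOOR_SECONDS:
        flags.append("speeder")

    # Straight-liner: identical answer down an entire rating grid.
    grid_answers = [resp[q] for q in GRID_QUESTIONS]
    if len(set(grid_answers)) == 1:
        flags.append("straight_liner")

    # Inconsistency: a confirmation question late in the survey
    # (q20) restates an early question (q3); the answers should match.
    if resp["q3_brand"].strip().lower() != resp["q20_brand_confirm"].strip().lower():
        flags.append("inconsistent")

    return flags

example = {
    "duration_seconds": 95,
    "q5_a": 4, "q5_b": 4, "q5_c": 4, "q5_d": 4,
    "q3_brand": "Acme",
    "q20_brand_confirm": "acme",
}
print(flag_respondent(example))  # ['speeder', 'straight_liner']
```

A respondent carrying one or more flags would then be reviewed or removed, which is exactly where the judgment call about inconsistent answerers comes in.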
That is why we have moved away from online surveys when respondents must meet very stringent criteria. Online surveys are still extremely useful for ‘what-if’ situations, such as concept tests or “What would you choose the next time you are shopping or considering a purchase?”
We recommend staying away from large online, non-customer panels for the following types of studies, due to high rates of fraudulent answers:
· “What did you do when you shopped or purchased?”
· “What did the salesperson tell you?”
· “How much did you pay?”
· “What brand did you buy?”
· “Where did you shop?”
Our latest initiatives explore approaches from behavioral economics to weed out those who want to deceive us. But that is for another post.