It’s no secret that the entire marketing research industry is concerned about data quality.
Since the introduction of online survey methodologies in the early 2000s, sample and other research providers have struggled to protect clients from bad respondents, who can ruin the quality of the data clients receive. These issues have become even more appalling over the past few years, and the trajectory is scary. As the saying goes, "Bad people give you bad data." And if that is not a saying, it should be!
Marketing researchers should place questions in the screener and survey to help identify (and subsequently remove) low-quality respondents. Red herring questions, flagging respondents who choose too many items in a multiple-choice question, and eliminating respondents who complete the survey too quickly will 'trap' users who are either not paying attention, speeding through the survey or not fitting the respondent profile. These techniques are common attempts to protect data quality on the survey side.
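To make the survey-side traps concrete, here is a minimal sketch of how such checks might run against collected responses. The field names, the red herring answer, and all thresholds are hypothetical, not from any particular survey platform:

```python
# Hypothetical survey-side quality traps; field names and thresholds
# are illustrative assumptions, not a specific vendor's rules.

RED_HERRING_ANSWER = "none of these"   # the only acceptable choice on the trap question
MAX_MULTI_SELECT = 8                   # selecting more suggests "check everything" behavior
MIN_SECONDS = 120                      # completing faster than this suggests speeding

def flag_respondent(resp: dict) -> list:
    """Return a list of quality flags for one respondent record."""
    flags = []
    if resp["red_herring"] != RED_HERRING_ANSWER:
        flags.append("failed_red_herring")
    if len(resp["multi_select"]) > MAX_MULTI_SELECT:
        flags.append("over_selection")
    if resp["duration_seconds"] < MIN_SECONDS:
        flags.append("speeder")
    return flags

respondents = [
    {"id": 1, "red_herring": "none of these", "multi_select": ["a", "b"], "duration_seconds": 300},
    {"id": 2, "red_herring": "brand x", "multi_select": list("abcdefghij"), "duration_seconds": 45},
]

for r in respondents:
    print(r["id"], flag_respondent(r))  # respondent 2 trips all three traps
```

In practice these flags feed the keep/remove decision rather than triggering automatic removal on their own.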
Sample providers also routinely use technology-based precautions:
- Checking the respondent's digital fingerprint can eliminate some offending respondents.
- Tracking and monitoring respondents' digital fingerprints allows sample companies to ensure that each respondent can submit only one survey, preventing multiple submissions from the same machine. This is even more important when researchers use multiple providers to fill a quota.
- Bot prevention systems identify automated respondents.
- Fraud prevention systems verify IP addresses and time zones, assign fraud scores based upon multiple data points, and use technology that evaluates the respondent.
These methods are effective and can result in removing a good percentage of low-quality respondents who may have taken your screener or survey.
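As an illustration of the fingerprint-based deduplication described above, the sketch below hashes a few device signals and rejects repeat submissions. Real systems combine far more signals; the function names and inputs here are assumptions for the example:

```python
# Hypothetical sketch of one-submission-per-machine enforcement via a
# digital fingerprint. Production systems hash many more device signals.

import hashlib

def fingerprint(user_agent: str, screen: str, timezone: str) -> str:
    """Combine a few device signals into a stable fingerprint hash."""
    raw = "|".join([user_agent, screen, timezone])
    return hashlib.sha256(raw.encode()).hexdigest()

seen = set()  # fingerprints that have already submitted a survey

def accept_submission(user_agent: str, screen: str, timezone: str) -> bool:
    """Accept the first submission from each fingerprint, reject repeats."""
    fp = fingerprint(user_agent, screen, timezone)
    if fp in seen:
        return False        # same machine already submitted
    seen.add(fp)
    return True

print(accept_submission("Mozilla/5.0", "1920x1080", "UTC-5"))  # True
print(accept_submission("Mozilla/5.0", "1920x1080", "UTC-5"))  # False (duplicate)
```

A shared fingerprint store is what makes this work across multiple sample providers filling the same study.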
A Good Start, But Not Sufficient
The problem is that many respondents are onto these methods, and they are usually one step ahead, which is why updates are constantly being made to the fraud-checking systems our industry uses. To truly improve data quality, we need to take it a step further. We need to add humans to the data quality process.
Symmetric adds an open-ended question to each survey that asks each respondent to provide details about the survey they just completed. We then have our data quality team review every response and decide whether to keep or eliminate that respondent. If the respondent takes their time and reads and answers
the questions in your screener and survey thoughtfully, they can provide an acceptable open-ended
response. However, if they have not been reading carefully, they cannot give an adequately detailed and
specific response. We remove those respondents, adjust the quotas, and collect better data from
engaged respondents. We have tested using text analytics software to do this review, and compared to
using the human element, we have found people are still superior to machines for this task.
Using a human being to review an open-end response identifies another 5%-10% or more of bad respondents who were able to get through after digital fingerprinting, red herrings and fraud detection software were used. Additionally, this approach adds minimally to the cost and delivery time of the study.
Most researchers find this approach extremely appealing, even when using samples drawn from their trusted sample providers.
If you are concerned about sample quality impacting your research results (and who isn't?), consider using this method. You owe it to yourself... we promise you'll like the results!
Concerned about data quality? Add the Human Touch! Contact Symmetric today!