Four Ways to Win the B2B Data Quality War

June 29, 2023

Market researchers have always had concerns about data quality, especially in B2B research, where many titles and positions are challenging to find and the incentives are higher. The industry really started to take notice around 2015, when the problem began to worsen.

Given the level of concern and the critical need for high-quality data, is it getting better? Mostly yes. On the surface, most in our industry are now fully aware of the problem and working to fix it, with technology leading the way in identifying bad respondents for removal.

On the other hand, data quality is worsening in some corners because others simply accept the situation. That acceptance may come down to price sensitivity, the perceived pain and risk of bad respondents, and data quality that is not yet poor enough to force a change in behavior. After all, you either pay more for higher-quality, better-managed samples on the front end, or you pay on the back end in the time your team spends cleaning the data, identifying bad responses, and negotiating replacements. And, of course, the cheaters’ job is to stay one step ahead of the providers, so the problem continues.

Market researchers don’t have to accept poor-quality respondents and, frankly, should not. A lousy sample leads to invalid results and the additional cost of going to multiple panel providers to replace invalid respondents. And that does not even count the effort and time spent calming irritated end clients because you can’t deliver on time.

We can win the war to improve B2B data quality by improving B2B samples. These are the battles that we need to fight:

Battle #1: Choose a Better B2B Sample Source. Start by using a proven and experienced sample source. Today, anyone and their brother can create a panel (and they do). So, choose a panel provider with an established, reputable panel built specifically for B2B research. Many consumer panels will query their databases to try to fill B2B samples, which simply does not produce a high-quality outcome.

Additionally, consider hybrid designs to supplement your online approach. Hybrid approaches such as side-by-side online and phone interviews (CATI) can effectively improve respondent quality without a significant increase in cost because the interviewers can enter the information directly into your online survey. CATI plus mail surveys can also work, depending on the audience and topic.

Battle #2: Technology. While the industry has been using technology to improve respondent quality, much more needs to be done. Digital fingerprinting and fraud-prevention technology, such as bot identification, are regularly used. IP address checks combined with time zone checks to confirm the respondent is in the right geographic area for the survey are also very helpful, as are checks for duplicate respondents across partner samples and reviews of your providers’ fraud scores. Tracking a respondent’s survey history from provider to provider can identify professional respondents. To be sure, additional technology will be developed in response to fraud, and we also welcome forward-looking technology that stays ahead of cheating and poor-quality data and attracts a new generation of respondents willing to do research.
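To make a couple of these checks concrete, here is a minimal Python sketch of duplicate detection via a device fingerprint and an IP-based time zone check. The field names (fingerprint, ip_timezone) and the data layout are illustrative assumptions, not the output of any particular fraud-prevention platform.

```python
# Minimal sketch of two checks described above: duplicate-respondent
# detection and an IP/time-zone consistency flag. Field names are
# illustrative assumptions only.
from collections import Counter

def flag_duplicates(respondents):
    """Flag respondents whose device fingerprint appears more than once."""
    counts = Counter(r["fingerprint"] for r in respondents)
    return [r["id"] for r in respondents if counts[r["fingerprint"]] > 1]

def flag_geo_mismatches(respondents, allowed_timezones):
    """Flag respondents whose IP-derived time zone falls outside the
    geography the survey is targeting."""
    return [r["id"] for r in respondents
            if r["ip_timezone"] not in allowed_timezones]

sample = [
    {"id": "r1", "fingerprint": "abc123", "ip_timezone": "America/New_York"},
    {"id": "r2", "fingerprint": "abc123", "ip_timezone": "America/Chicago"},
    {"id": "r3", "fingerprint": "def456", "ip_timezone": "Asia/Karachi"},
]

print(flag_duplicates(sample))                          # ['r1', 'r2']
print(flag_geo_mismatches(sample, {"America/New_York",
                                   "America/Chicago"})) # ['r3']
```

In practice, providers run these checks at the platform level and combine many more signals, but the logic is the same: flag, review, and replace.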

Battle #3: Broad Human Review. Technology is great but will never be more than a good start unless providers use human beings to review the quality of the data in surveys, especially in open-end and other-specify responses. Human beings should examine the results of the technology checks as well. In short, your provider should review and clean responses (and clean out bad respondents) before you (or your end client) ever see the data. Only work with sample providers with a dedicated QA department to ensure high-quality responses and the best possible data. Will it cost more? Yes. But can you afford the alternative?

Battle #4: B2B Screeners and Questionnaires. Finally, we all need to do our part to improve sample quality by building every possible trap into our screeners and questionnaires. In screeners, for example, include a question to identify bots, such as a math question, a CAPTCHA, or an instruction to select a specific item from a list. Include open-end questions to evaluate the respondent’s level of expertise and familiarity with your topic (and to catch nonsensical answers and gibberish such as “fsdslk”). Put a multiple-choice question in your B2B screener to see whether respondents select too many options or impossible combinations of answers. ‘Red herring’ questions include a few fake or completely implausible response options that no quality respondent would select; they are another way to identify bad respondents who should be screened out.
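As an illustration, here is a small Python sketch of how screener answers might be scored against these traps. The red-herring options, field names, and the crude gibberish test are hypothetical placeholders; a real screener would tune all of them to the study.

```python
import re

# Hypothetical red-herring options and field names, for illustration only.
RED_HERRINGS = {"Fictional Product X", "Nonexistent Certification Y"}
MAX_PLAUSIBLE_SELECTIONS = 4

def looks_like_gibberish(text):
    """Crude check for keyboard-mash answers such as 'fsdslk':
    very short, or a single 'word' with no vowels."""
    cleaned = text.strip().lower()
    return (len(cleaned) < 3
            or (" " not in cleaned and not re.search(r"[aeiou]", cleaned)))

def screener_flags(answer):
    flags = []
    if looks_like_gibberish(answer["open_end"]):
        flags.append("gibberish_open_end")
    if RED_HERRINGS & set(answer["selections"]):
        flags.append("picked_red_herring")
    if len(answer["selections"]) > MAX_PLAUSIBLE_SELECTIONS:
        flags.append("too_many_selections")
    return flags

print(screener_flags({"open_end": "fsdslk",
                      "selections": ["CRM software", "Fictional Product X"]}))
# ['gibberish_open_end', 'picked_red_herring']
```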

In your questionnaire, you can use all of the above question types, but you can also check for poor-quality respondents in other ways. Set a minimum response time for the entire survey, or for specific matrix questions, to identify respondents who complete the survey too quickly to provide quality data. Likewise, structure matrix-type questions carefully so that giving the same answer to every option would be implausible; that makes “straightliners” easy to identify and remove from your data. Make sure each respondent’s answers are internally realistic and consistent. If not, disqualify and replace them.
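Here is a minimal sketch of what speeder and straightliner checks might look like in code. The 180-second threshold, field names, and matrix layout are assumptions for illustration; set your own thresholds based on pretest timings for your survey.

```python
# Minimal sketch: flag speeders (too-fast completes) and straightliners
# (identical answers across a matrix question). Threshold and field
# names are illustrative assumptions, not industry standards.

MIN_SECONDS = 180  # assumed minimum plausible completion time

def is_speeder(duration_seconds, minimum=MIN_SECONDS):
    return duration_seconds < minimum

def is_straightliner(matrix_answers):
    """True if every item in a matrix question received the same rating."""
    return len(set(matrix_answers)) == 1

respondent = {
    "id": "r42",
    "duration_seconds": 95,
    "q10_matrix": [3, 3, 3, 3, 3, 3],
}

if is_speeder(respondent["duration_seconds"]) or is_straightliner(respondent["q10_matrix"]):
    print(f"Flag {respondent['id']} for review and possible replacement")
```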

Winning the War. Above all, don’t forget that your sample provider is key to sample quality, especially for B2B research. You want a partner that proactively addresses sample quality by investing in new technology, more human review, and an engaged, long-tenured panel, and that collaborates throughout your research project. Honest and open communication between provider and client is critical to enhancing sample quality. In short, look for partners that offer the highest-quality sample, systems, and support.

Want better B2B data? Get the best dedicated B2B panel partner! Contact Symmetric today.