While exploring the Twittersphere recently, I clicked on a political survey out of curiosity. Surprisingly, the question itself wasn't the problem this time; the answer choices were.
1. Yes. I want to protect America.
Like so many political surveys nowadays, this one feels coercive. If I select "no," I seem to be admitting that I don’t want to protect America, and who doesn’t want that? The survey question isn’t biased in this specific example; the response choices are. They are designed to persuade or lead a person into answering in a specific way. This is what is known as a leading question.
Leading questions are one of the many types of biased survey questions. The problem with biased questions, attributes, and response choices is that they create inconsistent, contradictory, and misleading results.
When surveys are created properly, without bias, they are found to be reliable and valid. This means that:
- The survey measures what it intends to measure.
- It measures it consistently over time.
- Relationships between variables in the survey are found in the expected direction (positive, negative, or no relationship).
Biased survey questions wreak havoc on the reliability and validity of the survey, which produces junk data. Biased questions increase respondent confusion, which then increases error in their responses. This, in turn, weakens the relationships between variables, making it much harder to find results when results are expected. In the worst cases, biased questions can return untrue results that favor a specific outcome.
So what can we do to avoid bias in our surveys? Below, I discuss several types of biased questions, how they influence the results, and how to avoid them.
Excessively Long Questions
How would you rate your overall satisfaction (on a scale from "1" meaning "very unsatisfied" to "5" meaning "very satisfied") with how well the customer service support staff utilized their resources to resolve your most recent telecommunications problem with company X?
Very lengthy questions like the one above can cause respondent confusion, fatigue, and boredom, resulting in higher dropout rates and straight-lining (selecting all 5’s as answers, for example). To combat this, keep questions short and to the point. Write clearly and concisely.
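Straight-lining is also easy to screen for after the fact. The sketch below (with hypothetical respondent IDs and answers) flags respondents who gave the identical answer to every item on a 1-to-5 scale, the simplest form of the pattern:

```python
# Hypothetical data: each respondent's answers to five 1-5 satisfaction items.
responses = {
    "r1": [5, 5, 5, 5, 5],   # straight-liner: all 5's
    "r2": [4, 3, 5, 2, 4],
    "r3": [1, 1, 1, 1, 1],   # straight-liner: all 1's
}

def is_straight_liner(answers):
    """Flag a respondent whose answers show zero variation across items."""
    return len(set(answers)) == 1

flagged = [rid for rid, ans in responses.items() if is_straight_liner(ans)]
print(flagged)  # ['r1', 'r3']
```

Flagging is only a diagnostic: a spike in straight-lining is a signal to fix the questions, not just to discard the respondents.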
Better: Please rate your satisfaction with how well your most recent problem was resolved.
Obscure Words, Slang, and Acronyms
Please select the types of items you bought within the last three months based on YOLO.
Using obscure words, slang, acronyms, or technical language can also cause respondent confusion, fatigue, and boredom, again resulting in higher dropout rates and straight-lining. Using everyday language and defining technical language and acronyms will cut down on error-filled responses.
Better: Please select the types of items you bought within the last three months based on the feeling that "you only live once."
Double Negatives
Do you oppose allowing the board to prohibit article 10 on the ballot?
Often, when reading a question that has a double negative, our automatic response is… WHAT???? What is this question even asking? In order to interpret this question, one must conduct mental gymnastics to untangle the word mess. Respondents encountering this type of question are likely to be very confused, not knowing what is being asked of them. They may guess at an answer, select the same answer they chose for the question above it, or drop out of the survey.
Instead of using two negatives such as “oppose” and “prohibit” in the same sentence, the question should be reworded in a way that gets straight to the point and omits the double negative.
Better: Should the board ban article 10 from the ballot?
Double-Barreled Attributes
Hot chocolate is my favorite drink because it keeps me warm in the winter.
Double-barreled attributes combine two ideas into one statement, which the respondent then has to rate. The problem is that double-barreled attributes assume things are true about some people while marginalizing others. The example above assumes that everyone drinks hot chocolate and that they drink it because it keeps them warm in the winter. Perhaps the respondent hates hot chocolate. If they disagree, are they disagreeing that hot chocolate is their favorite drink or that it keeps them warm in the winter? Perhaps the respondent loves to drink hot chocolate year round. Perhaps the respondent lives in a climate that does not experience winter temperatures, but they still favor hot chocolate. What does it mean if they strongly agree with this statement?
The example above marginalizes people: those who dislike hot chocolate, those who like hot chocolate but perhaps can no longer drink it (e.g., diabetics), those who do not experience winter temperatures, those who drink hot chocolate in the summer, and those who drink other beverages to keep them warm in the winter. How should they rate this statement? Double-barreled attributes bias the data because there is no way to interpret the resulting responses. To quell the bias here, we need to split double-barreled attributes into two or more statements.
Better: Hot chocolate is my favorite drink.
AND: I drink hot chocolate to keep me warm in the winter.
Leading Questions
How angry are you with the Democrats for the government shutdown?
Leading questions are the most nefarious of all biased questions. These questions are worded to elicit particular answers. In essence, leading questions coerce a specific response. They are the cousin of falsifying data, and they produce results that lie.
Because the question above directly asks how angry you are with the Democrats, it implies that you should be angry with them, and that you should blame them for the shutdown, even if you disagree. It further assumes that everyone blames the Democrats for the shutdown AND that everyone is angry with the Democrats for it. When this question is given in a survey, those who believe someone else is responsible for the shutdown are still forced to answer how angry they are with the Democrats for causing it. If one were to ask this question in a survey with a "1" (not angry) to "5" (very angry) scale, one could conclude from the results that 100% of the participants blame the Democrats for the government shutdown. They could also conclude that X% of the respondents are very angry about it.
Instead of implying or telling the participant what to think and feel, the survey question should ask them what they think and feel.
Better: How angry are you with the government shutdown?
AND: Who do you believe is responsible for the government shutdown?
As the saying goes, “Garbage in, garbage out.” As researchers, we need to take painstaking care to create surveys with unbiased questions that yield good, clean data. In this way, we can be sure that when we present the results to our clients, our results do not lie.