Should We Care About Non-Response Bias?

Non-response bias occurs when people who participate in a research study are inherently different from people who do not participate. This bias can negatively impact the representativeness of the research sample and lead to skewed outcomes. Basing important business decisions on research conducted using non-representative respondents is potentially disastrous.

Non-Response Bias

Non-response bias does not receive much attention outside the classroom. In grad school, I remember studying this bias along with many others that arise in human subjects research. In the two decades of my career, I have been asked about non-response bias only once. It’s not a popular topic.

We know a lot about our research participants through observation, whether by directly watching behaviors or by examining survey responses. It is far harder to determine the psychographics or behaviors of non-participants (by definition, we have zero information about them). We can hypothesize why they might choose not to participate: too busy, don’t care, didn’t know about the research opportunity, don’t like giving opinions, forgot to respond, concerned about privacy, and so on. But these are only guesses. We suspect that non-respondents may differ attitudinally and behaviorally from research participants, but we truly do not know how those differences might affect research findings.

Researchers are a conscientious lot and seek to obtain a representative sample for their studies (qualitative and quantitative alike). But let’s face it: pretending that non-response bias does not exist is easier than acknowledging and addressing it. Tackling non-response bias is often a non-starter for one very compelling reason: it ranges from difficult to impossible to counteract without a large investment of time and budget. In today’s “fail fast” environment, the prevailing attitude is to ignore this bias and move on. We don’t know what we don’t know, so why does it matter in the grand scheme of things?

This attitude isn’t shared by all institutions. One salient example is the U.S. Census Bureau, which conducts, among many other assessments, a full count of the population every 10 years. Although every household in the nation is required by law to return the survey form mailed to it, about 30% of households do not participate. The Census Bureau takes its mandate seriously and sends workers to administer the survey to non-responders in person, visiting the same household up to six times to obtain a completed form.

The Census Bureau can rely on in-person survey administration to decrease the non-response bias because it operates with a hefty budget (the 2010 budget was over $5 billion USD). Accurate Census results lead to better representation in Congress and more equitable allocation of $100+ billion in federal funds. You could say that the Census Bureau’s attention to non-response bias yields a tremendous return on investment.

Realistically, though, market research budgets of $5+ billion and project timelines spanning several months are difficult to come by. There are, however, strategies that may mitigate the impact of non-response bias without requiring large budgets or extended timelines.

  • Avoid methods that may discourage participation from certain groups. Does the target sample have characteristics that make it difficult for members to respond via particular methods? Online surveys are popular research tools because they are relatively cheap and quick. But an online survey might not be the best choice for, say, lower-income, elderly respondents. Using an online method may yield a sample weighted toward higher-income, younger consumers whose values, beliefs, and opinions differ from those of their older, lower-income counterparts.
  • Remind respondents about the research invitation and let them know the deadline for responding. Consider using both email and text reminders; this increases the likelihood that procrastinators and those who rarely check email (Millennials and Gen Z) will be included in the final sample. It also means leaving the survey open for several days to a week so that respondents who do not check their email daily have time to respond.
  • Keep surveys short and/or conduct pre-testing to ensure that the incentive for participation aligns with the effort expected of respondents. One approach is to field a short-form survey consisting of a few key questions, then offer participants the opportunity to give additional feedback on a longer-form survey. Before the key questions are pooled across the two groups, statistical testing would determine whether those who answered only the short-form survey responded differently than those who completed the longer form (a sketch of such a test follows this list). I suspect that in most cases there would be no difference. Such a methodology would theoretically provide confidence that some of the non-response bias has been accounted for.
  • Use multiple approaches. Mixing methodologies is a strong tactic for reducing non-response bias. Individuals who are reluctant to participate in a group qualitative study may be more comfortable expressing themselves on an online bulletin board, for example. A well-known snack and beverage company uses several methods to promote participation: not only does it employ the usual online and central-location approaches, it also sends mobile research units to inner-city locations to elicit feedback from consumers who would otherwise not be included.
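To make the short-form/long-form comparison concrete, here is a minimal sketch of the kind of statistical test that could flag a difference between the two respondent groups. It is illustrative only: the data are simulated, and the 1-to-5 rating scale, group sizes, and 0.05 threshold are assumptions. A rank-based test is used because survey ratings are ordinal.

```python
# Illustrative sketch (not Decision Analyst's actual procedure): compare
# answers to a shared key question between short-form and long-form
# respondents. All data below are simulated stand-ins.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Hypothetical 1-5 ratings on the same key question from each survey arm.
short_form_ratings = rng.integers(1, 6, size=200)  # short-form respondents
long_form_ratings = rng.integers(1, 6, size=150)   # long-form respondents

# Mann-Whitney U test: do the two response distributions differ?
# (A rank-based test is safer than a t-test for ordinal ratings.)
stat, p_value = mannwhitneyu(short_form_ratings, long_form_ratings,
                             alternative="two-sided")

if p_value < 0.05:
    print(f"p = {p_value:.3f}: distributions differ; analyze the arms separately.")
else:
    print(f"p = {p_value:.3f}: no detectable difference; the arms can be pooled.")
```

If the test finds no difference on the shared key questions, the two groups can be pooled with somewhat more confidence; if it does find one, the short-form responses still preserve input from people who would otherwise have dropped out of the sample entirely.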

Completely removing non-response bias from research is an impossible task. Researchers should nonetheless strive to minimize it. Stakeholders may not fully appreciate the effort required to reduce bias, but I’m certain that all of us appreciate knowing that business decisions based on cleaner, more representative research are more likely to be successful.

Author

Elizabeth Horn

Senior VP, Advanced Analytics

Beth has provided expertise and high-end analytics for Decision Analyst for over 25 years. She is responsible for design, analyses, and insights derived from discrete choice models; MaxDiff analysis; volumetric forecasting; predictive modeling; GIS analysis; and market segmentation. She regularly consults with clients regarding best practices in research methodology. Beth earned a Ph.D. and a Master of Science in Experimental Psychology with emphasis on psychological principles, research methods, and statistics from Texas Christian University in Fort Worth, TX.

Copyright © 2018 by Decision Analyst, Inc.
This posting may not be copied, published, or used in any way without written permission of Decision Analyst.