Questionnaire Writing: Negatively Worded Items

It’s Time to Put Those Negatively Worded Items Behind Us

Researchers commonly sprinkle negatively worded attributes throughout groupings of positively worded attributes to catch cheaters, speeders, and straight-liners.

As an example, see the list below. The negative attribute is marked with an asterisk.

How much do you agree or disagree with each statement below? (1=Strongly Disagree to 5=Strongly Agree)

  • I enjoy the winter
  • I like it when it snows
  • I prefer cold weather
  • I dislike winter weather*
  • I feel the happiest when it’s cold

Negatively worded attributes are believed to act as speed bumps—slowing respondents down to make them think more thoroughly about the questions they are answering—and as a net to catch respondents who are mindlessly scrolling through the survey, pushing buttons.

Although negatively worded attributes do catch undesirable responding, this methodology has several documented downsides. First, researchers cannot discern whether a flagged response is genuine but a product of misunderstanding or misinterpretation, or false, a product of the respondent mindlessly pushing buttons. Second, Ickes et al. (2018) found that negatively worded attributes reduce the reliability of a survey, reduce the cohesiveness of the attributes within a factor, and force context switching. Third, context switching confuses respondents, and that confusion creates error.

What is context switching? It is a cognitive process in which the respondent repeatedly reframes the attributes in an attempt to answer consistently. Ickes et al. (2018) explain that including negatively worded attributes in a survey is less like a speed bump and more akin to “a sudden and unexpected U-turn”: respondents make a metaphorical 180° U-turn every time they switch from interpreting positively worded attributes to negatively worded ones, and vice versa. The resulting confusion produces error-prone, less reliable data. The researchers also find that the more reframings respondents must perform, the lower the survey’s internal consistency; in other words, the more negatively worded attributes a survey contains, the more context switching it demands of the respondent, and the more error it introduces into the data.

Examining data from surveys with negatively worded attributes often reveals that those attributes load lowest on their factor. Many times, where we expect a one-factor solution, factor analysis instead yields a two-factor solution, with the negatively worded attributes loading onto a factor of their own.
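A quick diagnostic for this pattern is the corrected item-total (item-rest) correlation: the negatively worded attribute typically shows the lowest, often negative, value. A minimal sketch with hypothetical data, where column 4 is the negatively worded item:

```python
import numpy as np

# Hypothetical 5-point responses: columns 1-3 positively worded,
# column 4 negatively worded and not reverse-coded.
items = np.array([
    [5, 5, 4, 1],
    [4, 4, 4, 2],
    [2, 2, 1, 5],
    [1, 2, 1, 5],
    [3, 3, 3, 3],
    [5, 4, 5, 1],
], dtype=float)

def item_rest_correlations(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

r = item_rest_correlations(items)
print(np.round(r, 2))  # the negatively worded item stands out with a negative value
```

An analyst seeing one item correlate negatively with the rest of its scale faces exactly the ambiguity described above: a true reversal of meaning, or mindless button-pushing.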

These findings point to a single conclusion: it may be time for researchers to relinquish negatively worded attributes. So how can researchers catch cheaters, speeders, and straight-liners without them? Placing a few cheater questions throughout the survey, such as “Please select strongly agree for this response” or “Please select orange for this response,” solves this problem. Cheater questions are especially important in grids where many attributes are evaluated on the same scale. A cheater question catches respondents who are not paying attention, speeding through the survey, or straight-lining, and their data can be removed, diminishing the need for negatively worded attributes.
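The screening step reduces to a pass/fail flag per respondent. A minimal sketch, assuming a grid answered on a 1-to-5 scale plus one embedded cheater question with a known required answer (all field names and data are hypothetical):

```python
# Flag respondents who fail an embedded cheater question or who
# straight-line (give the identical answer to every grid item).
# All data and field names are hypothetical.

REQUIRED_ANSWER = 5  # "Please select strongly agree for this response"

respondents = [
    {"id": "r1", "grid": [5, 4, 5, 4, 5], "cheater_item": 5},  # attentive
    {"id": "r2", "grid": [3, 3, 3, 3, 3], "cheater_item": 5},  # straight-liner
    {"id": "r3", "grid": [4, 5, 3, 4, 2], "cheater_item": 2},  # failed the check
]

def should_remove(resp: dict) -> bool:
    failed_check = resp["cheater_item"] != REQUIRED_ANSWER
    straight_lined = len(set(resp["grid"])) == 1
    return failed_check or straight_lined

removed = [r["id"] for r in respondents if should_remove(r)]
print(removed)  # ['r2', 'r3']
```

Note that the straight-liner passes the cheater question by chance, which is why both checks are worth running together.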

If you would like to read more about this topic, please see the paper below:

Ickes, W., Babcock, M., Hamby, T., Park, A., Robinson, R., & Taylor, W. (2018). Side streets and U-turns: Effects of context switching, direction switching, and factor switching on interattribute correlations and misresponse rates. Journal of Personality Assessment. doi:10.1080/00223891.2018.1450262

Author

Audrey Guinn

Statistical Consultant, Advanced Analytics Group

Audrey utilizes her knowledge in both inferential and Bayesian statistics to solve real-world marketing problems. She has experience in research design, statistical methods, data analysis, and reporting. As a Statistical Consultant, she specializes in market segmentation, SEM, MaxDiff, GG, TURF, and Key Driver analysis. Audrey earned a Ph.D. and Master of Science in Experimental Psychology with an emphasis on emotional decision-making from The University of Texas at Arlington.

Copyright © 2021 by Decision Analyst, Inc.
This posting may not be copied, published, or used in any way without written permission of Decision Analyst.