
When Good Respondents Go Bad: How Unengaging Surveys Lower Data Quality

White Paper

Overview

The impact of questionable online survey respondents on data quality is well documented. In a previously published paper, TrueSample demonstrated that fake, duplicate, or unengaged respondents compromise data quality. But what about the design of the survey itself, which may affect all respondents, whether their intentions are good or bad? TrueSample has conducted the first-ever comprehensive study of the effect of survey design on data quality. Based on response data from over 800,000 respondents, TrueSample’s latest research provides conclusive results that show:

  • Survey design influences respondent engagement.
  • Lower respondent engagement increases the risk of unreliable survey data.

The implication is clear: To ensure the quality of research data, researchers must not only remove “bad” respondents from their samples but also design surveys that keep the good respondents engaged.

Survey Design, Engagement, Quality: What’s the Connection?

Experienced researchers have long assumed that survey design, respondent engagement, and data quality are interrelated. For example, it seems obvious that long and complex questionnaires will increase the likelihood of “bad behaviors” such as speeding and survey abandonment, and that data quality will suffer if there is a high percentage of unengaged respondents in the survey sample.

But in-depth, quantitative measurement of the relationship between survey design and research data quality was not available—until now.

In early 2008, MarketTools launched TrueSample®, the industry’s first technology for ensuring the quality of online market research. TrueSample software takes a holistic approach to quality that encompasses both survey respondents and survey design:

  • To ensure the quality of survey respondents, TrueSample eliminates fake, duplicate, and blatantly unengaged respondents from panels and survey samples.
  • To ensure the quality of survey design, TrueSample provides a survey engagement scoring technology called TrueSample SurveyScore®, which measures and benchmarks the quality of the survey instrument itself.

As TrueSample sought to understand and quantify the effect of survey design on respondent engagement and data quality, we leveraged TrueSample SurveyScore® measurements from over 1,500 surveys and 800,000 responses to conduct a two-phase research study designed to answer the following questions:

  1. Does survey design really impact respondent engagement?
  2. Does respondent engagement have an effect on response data quality?

Phase 1 of our research evaluated whether survey design influences the way respondents perceive a survey and how they behave while answering survey questions. Phase 2 examined the effect that design variables and engagement measures have on the quality of response data. Simply put, we sought to determine whether poorly designed, complex surveys can drive “good” respondents “bad” and thereby reduce data quality. If so, researchers could optimize their survey designs to improve overall data quality.

Figure 1. This research focuses on the effect that survey design has on respondent engagement and resulting data quality.

Understanding the Connection Requires Quantifying Respondent Engagement

TrueSample SurveyScore is the only comprehensive, objective measure of survey engagement available to the market research industry. It is a function of both experiential variables, such as respondents’ ratings of the survey-taking experience, and behavioral variables, such as survey abandonment and speeding. TrueSample SurveyScore enables researchers to see the impact that survey design has on respondent engagement.
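To make the idea of a composite engagement measure concrete, the sketch below shows one plausible way experiential and behavioral variables could be combined into a single score. The actual SurveyScore formula is proprietary and not disclosed here; the variables, normalization, and weights in this Python sketch are purely hypothetical.

    # Illustrative only: the real TrueSample SurveyScore formula is proprietary.
    # One plausible composite of an experiential variable (mean survey rating)
    # and two behavioral variables (abandonment and speeding rates).

    def composite_engagement_score(mean_rating, abandon_rate, speeding_rate,
                                   weights=(0.5, 0.25, 0.25)):
        """Return a 0-100 engagement score from a 1-5 mean survey rating and
        two behavioral rates expressed as fractions between 0 and 1."""
        w_rating, w_abandon, w_speed = weights
        rating_component = (mean_rating - 1) / 4   # normalize 1-5 rating to 0-1
        score = (w_rating * rating_component
                 + w_abandon * (1 - abandon_rate)  # lower abandonment is better
                 + w_speed * (1 - speeding_rate))  # lower speeding is better
        return 100 * score

    # A well-rated survey with modest abandonment and little speeding
    print(composite_engagement_score(4.2, abandon_rate=0.10, speeding_rate=0.05))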

With the launch of TrueSample, MarketTools began measuring the quality of every survey deployed on its MarketTools.com™ survey platform. To date, MarketTools has collected SurveyScore data for more than 10,000 surveys, with over 2.6 million completes. These surveys span product categories—such as food and beverage, financial, technology, entertainment, health and beauty, healthcare, and travel—and research methods, such as concept screening, line and package optimization, and attitude and usage studies.

Because MarketTools is uniquely able to benchmark engagement through TrueSample SurveyScore, we were able to use this data to quantify the connections among survey design, respondent engagement, and data quality for the purposes of this research.

Figure 2. TrueSample SurveyScore provides a respondent engagement score for each survey.

Methodology: Measuring the Impact of Survey Design on Engagement

The TrueSample research team sought to determine whether certain survey design variables could reliably predict the composite engagement measure of respondent behavior and perception that comprises TrueSample SurveyScore. We built a model to predict engagement using survey design variables and the TrueSample SurveyScore database as inputs. Predictability is an indication that survey design impacts engagement in a consistent way, implying that we could recommend adjustments to the design variables that would minimize adverse effects on engagement.

Specifically, TrueSample modeled the impact of more than 20 survey design variables (independent variables) that are within the control of survey designers—such as survey length and total word count—on several respondent engagement measures (dependent variables) reflecting respondents’ perception of the survey and their behavior during the survey.
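The paper does not disclose the specific modeling technique TrueSample used. As a minimal sketch of this kind of multivariate model, the Python example below fits a tree ensemble (one reasonable choice for capturing interactions among design variables) to synthetic survey-level data; all variable names and the data-generating process are assumptions for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1500  # roughly the number of surveys used in this research

    # Hypothetical design variables (independent variables)
    X = np.column_stack([
        rng.uniform(5, 40, n),      # survey length in minutes
        rng.uniform(200, 5000, n),  # total word count
        rng.integers(5, 60, n),     # number of questions
        rng.uniform(10, 120, n),    # elapsed time per page (seconds)
    ])

    # Synthetic engagement score with an interaction term, standing in for
    # the composite engagement measure (SurveyScore)
    y = (80 - 0.6 * X[:, 0] - 0.002 * X[:, 1]
         - 0.001 * X[:, 0] * X[:, 2] + rng.normal(0, 3, n))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(f"R^2 on held-out surveys: {model.score(X_test, y_test):.2f}")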


Findings: Survey Design Does Impact Engagement, But There Is No Silver-Bullet Design Rule

The TrueSample research revealed that a multivariate model capturing the complex interaction among design variables can predict overall engagement, which comprises both experiential and behavioral variables. The fact that the impact of these variables is predictable is a clear indication that survey design directly influences respondent perception and behavior, i.e., engagement, in a consistent way. This means that survey designers have some degree of control over engagement. It also means that SurveyScore can be predicted before a survey is deployed, helping to guide design modifications.

Figure 3. Engagement can be predicted using survey design parameters.


We uncovered another interesting finding when we examined the influence of particular survey design elements on specific aspects of engagement, such as survey rating or partial rates. While survey length proved to be generally predictive of most respondent engagement measures, there was wide variation in the design variables that were most influential in driving various measures of engagement. For example, for the survey rating measure, one of the most predictive design variables was the elapsed time per page of the survey. For the speeding measure, however, elapsed time per page was not even in the top five most important design variables.
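The Python sketch below illustrates this kind of comparison on synthetic data: the same design variables are ranked by importance for two different engagement measures, and the rankings differ. The variables, targets, and model are illustrative assumptions, not TrueSample's actual analysis.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n = 1500
    names = ["length_min", "word_count", "num_questions", "time_per_page"]
    X = np.column_stack([
        rng.uniform(5, 40, n), rng.uniform(200, 5000, n),
        rng.integers(5, 60, n), rng.uniform(10, 120, n),
    ])

    # Two synthetic engagement measures driven by different variables
    survey_rating = 5 - 0.02 * X[:, 3] + rng.normal(0, 0.3, n)   # time per page
    speeding_rate = 0.002 * X[:, 2] + rng.normal(0, 0.02, n)     # question count

    for target, label in [(survey_rating, "survey rating"),
                          (speeding_rate, "speeding rate")]:
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X, target)
        ranked = sorted(zip(model.feature_importances_, names), reverse=True)
        print(label, "->", [name for _, name in ranked])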

Figure 4. The relative importance of survey design variables on engagement depends on the engagement measure.

No single design variable consistently has the greatest effect on respondents’ perception or behavior; in other words, there is no axiom that applies in all cases, such as “surveys that require more than 20 minutes result in poor respondent engagement.”

It is, therefore, clear that adjusting just one parameter may not be sufficient to elicit desirable behavior from respondents, nor will it singlehandedly improve their perception of the survey-taking experience. Instead, the findings reveal that engagement is driven by a complex interaction among design variables, which means that simple survey design guidelines or rules are inadequate for motivating the desired respondent engagement.

In fact, TrueSample researchers uncovered several examples of long surveys that had a higher-than-normal survey rating as well as a lower-than-normal partial rate, which runs contrary to what one would expect if length alone were the deciding variable. Conversely, we found examples of short surveys that had a lower-than-normal survey rating because of other design variables.

Figure 5. Example surveys showing that survey length is not consistently predictive of a composite engagement measure such as TrueSample SurveyScore.

Methodology: Measuring the Impact of Engagement on Data Quality

With the impact of survey design on respondent engagement established, the TrueSample research team endeavored to determine whether engagement had an effect on data quality. The TrueSample SurveyScore database allowed us to effectively test this hypothesis.

TrueSample fielded three surveys with varying levels of complexity, categorized as “Moderate,” “Medium,” and “High.” We analyzed 1,000 completes for each survey. The experimental surveys asked the same series of questions about demographics, products purchased, and so on, but the question load differed depending on the number of products respondents said they purchased: the more products a respondent chose, the more brand attribute questions were displayed, and the higher the complexity. In the Moderate survey, respondents were asked one question per product. In the Medium complexity survey, respondents received 17 brand attribute questions per product. In the High complexity survey, respondents were asked 17 questions for every product chosen, plus additional open-ended questions.

We computed and compared the SurveyScore for the three surveys. Predictably, the SurveyScore dropped precipitously with the higher complexity levels. The Medium and High complexity surveys received an extremely low SurveyScore, as shown below.


As expected, the more complex surveys in our research had a lower SurveyScore, indicating lower respondent engagement.

Next, we conducted a series of statistical tests to evaluate the effect of respondent engagement on data quality. By conducting different analyses, we were able to examine data quality from various angles for a more comprehensive review. Specifically, we investigated the following.

Will Respondents in Unengaging Surveys:

  • Increase the odds of sample bias?
  • Be apt to answer the same question inconsistently?
  • Be more prone to random answer choices?
  • Be more likely to provide inconsistent answer choices?
  • Tend to select “None” as an answer choice? 

Finding: Higher Abandonment Rates in Unengaging Surveys Increase the Odds of Sample Bias

We examined whether a high abandonment rate could cause bias in completed responses and thereby reduce overall data quality. In other words, as the surveys became more complicated and their SurveyScore dropped, did the makeup of the respondents change and create the potential for biased data?

The answer was yes. As illustrated in Figure 6, the respondents who completed the Medium or High complexity surveys were those who tolerated the increased question load (the more products they selected, the more questions they were asked), creating bias in those groups compared with the group of respondents who completed the Moderate survey. The graph on the left of Figure 6 shows that as the number of products selected increased, thereby increasing the number of questions to be answered, the partial or abandonment rate grew for the more complicated surveys.

Figure 6. Survey complexity led to higher abandonment rates, causing biased results for the more complex surveys.

As shown in the graph on the right of Figure 6, among respondents who did not abandon the survey, the percentage who selected five products was much lower for the Medium and High complexity surveys than for the Moderate survey. So, while the underlying sample had a higher percentage of respondents who had purchased five products, many of them did not make it through the more complex surveys, resulting in sample bias.
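One conventional way to test for this kind of bias is a goodness-of-fit test that compares the product-count distribution among completers with the distribution among all survey starters. The Python sketch below uses hypothetical counts, not the study's data.

    import numpy as np
    from scipy.stats import chisquare

    # Hypothetical counts of completers by number of products selected (1-5)
    completers_high_complexity = np.array([300, 250, 200, 150, 100])

    # Expected proportions if completing were independent of product count,
    # taken from the distribution among all survey starters (hypothetical)
    starter_proportions = np.array([0.20, 0.20, 0.20, 0.20, 0.20])
    expected = starter_proportions * completers_high_complexity.sum()

    stat, p = chisquare(completers_high_complexity, f_exp=expected)
    print(f"chi-square = {stat:.1f}, p = {p:.4f}")
    # A small p-value indicates completers are not representative of starters,
    # i.e., abandonment has biased the completed sample.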

Finding: Respondents Answer the Same Question Inconsistently as Engagement Drops

Our research also tested whether the respondents’ ability to answer the same questions consistently during a single survey was a function of the survey’s complexity. We measured the consistency of the responses to questions that were repeated in separate sections of the survey, and we found that Recall Discrepancies increased as the SurveyScore dropped—a clear indication that more complicated surveys lead to inconsistent and unreliable responses and lower data quality.
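A recall discrepancy measure of this kind can be computed directly from the response data. The Python sketch below, with hypothetical column names and answers, computes the share of respondents whose answers to a question repeated in two sections disagree.

    import pandas as pd

    # Hypothetical responses to the same question asked in two sections
    responses = pd.DataFrame({
        "brand_q_section1": ["A", "B", "A", "C", "B"],
        "brand_q_section2": ["A", "B", "C", "C", "A"],
    })

    discrepancy_rate = (responses["brand_q_section1"]
                        != responses["brand_q_section2"]).mean()
    print(f"Recall discrepancy rate: {discrepancy_rate:.0%}")
    # Comparing this rate across the Moderate, Medium, and High complexity
    # surveys would show whether discrepancies rise as SurveyScore falls.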

Figure 7. Respondents were more prone to recall errors in complex (less engaging) surveys.

Finding: Degree of Randomness in Responses Increases Noticeably as Engagement Declines

TrueSample then measured the consistency of responses across all possible question pairs to develop an inconsistency metric, which enabled us to determine whether a given selection was random or close to the expected response. The more unusual a pairing of answers was (that is, the lower the likelihood of its occurrence given the incidence of all the other options for those questions), the greater the departure from the expected value and the higher the inconsistency metric. Our finding was that inconsistency increased as SurveyScore dropped, contributing to lower overall data quality for the more complex surveys.
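The exact construction of TrueSample's inconsistency metric is not published; the Python sketch below implements one plausible reading of the description above. Each respondent's answer pair is scored by how improbable it is under the observed marginal frequencies of each option, so rare pairings contribute more to the metric.

    import numpy as np
    import pandas as pd

    # Hypothetical answers to a pair of related questions
    df = pd.DataFrame({
        "q1": ["yes", "yes", "no", "yes", "no", "yes"],
        "q2": ["daily", "daily", "never", "never", "never", "daily"],
    })

    # Expected probability of each answer pair if the questions were independent
    p_q1 = df["q1"].value_counts(normalize=True)
    p_q2 = df["q2"].value_counts(normalize=True)
    expected = df.apply(lambda r: p_q1[r["q1"]] * p_q2[r["q2"]], axis=1)

    # Surprisal: -log(expected probability); higher = more unusual pairing
    df["inconsistency"] = -np.log(expected)
    print(df)
    print(f"Mean inconsistency: {df['inconsistency'].mean():.2f}")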

Figure 8. Response inconsistency, an indication of randomness, is substantially higher for the more complicated surveys.

Finding: Less Engaging Surveys Result in Loss of Focus and Inconsistent Responses

Finally, TrueSample sought to determine if surveys with a low SurveyScore caused respondents to lose focus and provide inconsistent or unpredictable responses. To measure the choice predictability of each of the surveys, we used a discrete choice model (DCM) exercise. Specifically, we tried to predict respondents’ product selections on two tasks based on their selections on seven other tasks (DCM sections were identical across all surveys). We asked, for example, that respondents select the one product they would prefer to buy from each page, if any, and based on their answers to previous questions, we tried to predict their response. The respondents could also choose “None” as a response, indicating that they would choose none of the products.
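A full DCM estimates preferences from product attributes; as a heavily simplified stand-in, the Python sketch below predicts each respondent's two holdout choices from their most frequent choice on the seven training tasks and compares hit rates with and without the “None” option. The data and the prediction rule are illustrative assumptions, not the study's model.

    from collections import Counter
    import numpy as np

    rng = np.random.default_rng(2)
    options = ["prod_A", "prod_B", "prod_C", "None"]

    # Synthetic choices: 100 respondents x 9 tasks (7 training, 2 holdout)
    choices = rng.choice(options, size=(100, 9), p=[0.35, 0.25, 0.15, 0.25])
    train, holdout = choices[:, :7], choices[:, 7:]

    def hit_rate(include_none):
        """Predict each respondent's modal training choice; score holdouts."""
        hits = total = 0
        for t_row, h_row in zip(train, holdout):
            pool = list(t_row) if include_none else [c for c in t_row if c != "None"]
            if not pool:
                continue  # respondent chose "None" on every training task
            predicted = Counter(pool).most_common(1)[0][0]
            for actual in h_row:
                if not include_none and actual == "None":
                    continue
                hits += int(actual == predicted)
                total += 1
        return hits / total

    print(f"Hit rate with 'None':    {hit_rate(True):.0%}")
    print(f"Hit rate without 'None': {hit_rate(False):.0%}")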


During this exercise, we noticed that the accuracy of the prediction (when the selection of “None” was included) was 75-79% for all surveys, a relatively high prediction rate. However, the model for the Medium and High complexity surveys gave much greater weight to the “None” selection, meaning that respondents in these surveys tended to select no product rather than one of the available products. Once we removed the “None” option from our model, the prediction accuracy dropped significantly for the High complexity survey. In addition, the lower-scoring surveys had more violations of price selection order, meaning respondents tended to violate the expected preference for a lower unit price over a higher one. The net result: surveys with a low SurveyScore translated to lower predictability and thus to lower data quality.


Conclusion

Researchers Must Take Responsibility for Data Quality by Removing Bad Respondents and Designing Surveys that Keep Good Respondents Engaged

Research professionals now have evidence that survey design not only influences whether respondents abandon a survey but also affects the data provided by those who complete it. The ability to predict the effect of various survey design variables on respondent engagement will help survey designers maximize engagement and increase the reliability of their data. Researchers no longer have to assume that a long survey will jeopardize the quality of the results, since we have shown that it is possible to compensate for the adverse effects of certain design variables by adjusting others. By using engagement measurement and prediction tools like TrueSample SurveyScore, researchers can recognize that survey design affects data quality, measure engagement to help improve survey design, and optimize design to enhance the reliability of results.



