
Increasing Respondent Engagement: Measuring the Impact of Survey Design

White Paper


A new study by TrueSample® shows that simple survey design rules based on a single paradigm (such as “long surveys reduce respondent engagement”) are inadequate. It is the complex interaction among design variables that drives respondent engagement within a survey.

TrueSample has scrupulously studied how survey questionnaire design influences the way respondents perceive a survey and answer questions. By quantitatively measuring the impact and the interplay of design variables, TrueSample provides a more enlightened way to maximize overall engagement and minimize “bad behavior.”

Researchers have long suspected that survey design influences the way respondents perceive a survey—and how they behave while answering it. For example, it seems natural that long and complex questionnaires will result in a negative perception of the survey and increase the likelihood of “bad behaviors” such as speeding and partial completes.

But there has never been any in-depth quantitative measurement to either confirm or refute the many hypotheses that have emerged over the years about the impact of questionnaire design on behavior, or which variables of design are most important. Instead the market research industry has proffered such axioms as “keep the survey length under 15 minutes” or “limit the total number of questions to 25.”

Survey designers need guidance based on data, not assumptions. They need quantitative measurement of the impact of various parameters of survey design on respondent perception and behavior, and how these variables influence each other.

TrueSample is the only company that has meticulously analyzed this issue. Leveraging measurements from TrueSample SurveyScore™, a survey engagement scoring technology that is part of TrueSample's TrueSample™ technology, TrueSample researchers have analyzed respondent engagement measures across a wide range of design variables.
TrueSample's research reveals that many intuitive assumptions are simply untrue or misleading.

There is no “silver bullet” design rule that predicts engagement; in fact, many parameters are at work simultaneously—and these must be examined together. This paper provides an overview of the study’s methodology and findings, and it summarizes the ongoing research that is guiding further enhancement of TrueSample products and services.



Harnessing TrueSample SurveyScore Measurements to Quantify the Impact of Design Variables on Overall Engagement

In early 2008 TrueSample launched TrueSample, the industry’s first technology for ensuring the quality of online market research. TrueSample automatically identifies and excludes from surveys any fake, duplicate, and unengaged respondents. One element of TrueSample is a tool called SurveyScore, which measures the quality of the survey instrument itself and enables researchers to see the impact that survey design has on respondent engagement.

With the launch of TrueSample, TrueSample began measuring the quality of every survey deployed on its TrueSample.com™ survey platform in an effort to understand how the elements of survey design affect respondent engagement. Because the level of engagement is determined by both experiential and behavioral variables, TrueSample SurveyScore uses a formula that benchmarks both respondent perception and behavior within online surveys.

Figure 1. TrueSample’s SurveyScore provides a respondent engagement score for each survey.

A SurveyScore is assigned to every TrueSample-enabled survey. This score can then be used for comparison with other surveys—from one organization or many organizations. It provides a way to consistently evaluate the impact of various variables on respondent engagement across surveys, in varying product categories, and across different research methods.

To date, TrueSample has collected SurveyScore data for more than 3,000 surveys, with over 1.3 million completes. These surveys span product categories—such as food and beverage, financial, technology, entertainment, health and beauty, healthcare, and travel—and research methods, such as concept screening, line and package optimization, and attitude and usage studies.

TrueSample is now using the SurveyScore data to quantify the connections among survey design, respondent engagement, and data quality. Recent research examines the ability of survey design variables to predict respondent perception and behavior, and thereby predict SurveyScore, and to understand which design variables might be harbingers of potential issues.

Key Finding

There Is No Silver Bullet—Design Variables Work in Concert

The TrueSample research sought to determine whether certain survey design variables could reliably predict respondent behavior and perception. Predictability is an indication that the way you design the survey will influence engagement in a consistent way, enabling you to avoid or minimize adverse effects.

Specifically, TrueSample modeled the impact of more than 20 design variables (independent variables) that are within the control of survey designers—such as survey length, and total word count—on several respondent engagement measures (dependent variables) reflecting the respondents’ perception of the survey and behavior during the survey.
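As a rough illustration of this kind of multivariate modeling, several design variables can be fit against an engagement measure simultaneously. The actual TrueSample model is proprietary; the variables, simulated data, and ordinary-least-squares approach below are assumptions for illustration only.

```python
import numpy as np

# Toy example: predict a survey rating from several design variables at once
# (length in minutes, total word count, percent matrix questions). The data
# are simulated; TrueSample's actual model and coefficients are not public.
rng = np.random.default_rng(0)
n = 200
length = rng.uniform(5, 40, n)          # survey length, minutes
words = rng.uniform(500, 5000, n)       # total word count
matrix_pct = rng.uniform(0, 40, n)      # % of questions in matrix form

# Simulated rating: depends on all three variables, plus noise
rating = (5.5 - 0.03 * length - 0.0002 * words - 0.02 * matrix_pct
          + rng.normal(0, 0.2, n))

# Ordinary least squares on all design variables jointly
X = np.column_stack([np.ones(n), length, words, matrix_pct])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
predicted = X @ coef                    # predicted rating for each survey
```

Fitting all variables together, rather than one at a time, is what lets a model of this kind separate the contribution of each design choice to the engagement measure.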


Sample of Independent and Dependent Variables Included in Study

A key finding was that survey design variables had a predictable impact on respondent engagement. The fact that the impact of several of these variables is predictable provides a clear indication that survey design directly influences respondent perception and behavior, i.e., engagement, in a consistent way. This means that survey designers do have some degree of control in maximizing positive engagement and avoiding or minimizing adverse engagement. This also means that the SurveyScore can be predicted prior to deploying a survey.

The fact that the impact of several of these variables is predictable provides a clear indication that survey design directly influences respondent perception and behavior, i.e., engagement, in a consistent way.

Another interesting finding was that whereas survey length proved to be generally predictive of most respondent engagement measures, there was wide variation in the independent design variables that were most influential in driving the prediction. For example, one of the most predictive design variables for the survey rating measure was the elapsed time per page of the survey. For the speeding measure, however, elapsed time per page was not even among the top five most important design variables.

Figure 2. Variation in importance of predictive variables across the different engagement measures.

Figure 2 shows how the importance of specific design variables varies significantly for each engagement measure. While survey length is relatively important for most engagement measures, it is clear that adjusting just one parameter may not be sufficient to elicit desirable behavior from respondents, nor will it singlehandedly improve their perception of the survey-taking experience. Instead the findings reveal that engagement is driven by a complex interaction among design variables.

This means that simple survey design guidelines or rules are inadequate for motivating the desired respondent engagement. This implies that there is no single design variable that consistently affects the respondents’ perception or behavior the most; in other words there is no axiom that applies in all cases, such as “surveys that require more than 20 minutes result in poor respondent engagement.”

In fact, TrueSample researchers uncovered several examples of long surveys that had a higher-than-normal survey rating as well as a lower-than-normal partial rate, which would run contrary to what one would expect if length alone were a deciding variable. Conversely, we found examples of short surveys that had a lower-than-normal survey rating because of the design of other variables.


Figure 3. Example surveys showing that survey length is not consistently predictive of SurveyScore.

A recent survey created and deployed by the ARSgroup, a TrueSample customer, provides another interesting example. The survey contained videos, and the overall survey length was about 40 minutes, much higher than the norm. The survey had a high average survey rating of 4.65 on a scale of 1 to 6, however. This rating placed it in the 95th percentile for surveys longer than 30 minutes.

The 4.65 survey rating also outperformed the estimate of 4.47 predicted by the TrueSample model. And when looking at some of the other predictive variables for this survey, researchers noted that the values of these variables were on the lower end of the spectrum for surveys longer than 30 minutes.

Clearly, there were other factors at play, such as the engaging videos and the varied question types that overcame the long survey length in creating a relatively positive experience for the respondents.

Figure 4. Screenshots from the ARSgroup survey, showing the use of videos and follow-up questions.

Details on Predictions of Specific Aspects of Engagement

As mentioned earlier, understanding overall respondent engagement requires an assessment of both the respondents’ perception of the survey and their behavior during the survey. The TrueSample research revealed that a multivariate model that captures the complex interaction among design variables is able to predict each of the experiential and behavioral variables quite well. The survey design variables of significance in the prediction of each of the engagement measures, however, differ from one engagement measure to the next. To illustrate the dependency of the engagement measures on the design variables, we selected three of the engagement measures for a more detailed exploration. The following sections provide additional detail about how each of these three engagement measures is affected by the survey design variables:

  1. Survey rating
  2. Partial rate
  3. Percentage of pages on which the respondent sped

1. Survey Rating

The survey rating indicates how the respondent perceived the survey experience. Respondents were asked to rate, on a scale of 1 to 6 with 6 being the most favorable, how the survey-taking experience compared with previous surveys that the respondent had taken. The overall distribution is shown in Figure 5.

Figure 5. Distribution of survey rating: Most respondents rate surveys higher than 3.5, which is the midpoint between 1 and 6.

When researchers studied the design variables that affected the survey rating—the number of pages, the number of questions per page, the total word count, and so on—they found some interesting relationships.

Figure 6. Impact of time per page on survey rating.

Respondents tend to favor surveys where, on average, less than 25 seconds (or about four-tenths of a minute) are spent per page.

Figure 7. Impact of survey length on survey rating.

The survey rating drops steadily as survey length increases, falling below the survey rating norm at about the 19- to 20-minute mark.

Figure 8. Impact of matrix question percentage on survey rating.

Surveys in which 10 to 15% of the questions are in matrix form tend to be rated higher than surveys with a greater percentage of questions in matrix form.

2. Partial Rate

The partial rate indicates the number of respondents who abandoned the survey before completing it. Although many factors can increase the partial rate, a rate higher than the norm can point to possible issues with the design, especially if the rate can be predicted to within a reasonable degree by design variables. Please see figures 9 to 11 on the following pages.
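The partial rate described above amounts to the share of starters who did not finish. A minimal sketch, assuming a hypothetical record format with a boolean "complete" flag (not TrueSample's actual data model):

```python
def partial_rate(respondents):
    """Percentage of respondents who abandoned the survey before completing it.

    respondents: list of dicts, each with a boolean 'complete' flag
    (illustrative record format, not TrueSample's actual schema).
    """
    if not respondents:
        return 0.0
    partials = sum(1 for r in respondents if not r["complete"])
    return 100.0 * partials / len(respondents)
```

For example, 8 completes and 2 abandonments yield a partial rate of 20%; the study's point is that a rate well above the norm for comparable surveys flags a likely design issue.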

Figure 9. Impact of time per page on partial rate.

As the average time per page increases, so does the respondents’ tendency to drop out of the survey. Surveys with pages that take more than 20 seconds on average to complete tend to have higher- than-normal partial rates.

Figure 10. Impact of survey length on partial rate.

Longer surveys also cause respondents to drop out more than shorter ones do. Surveys longer than 15 to 20 minutes tend to cause partial rates that are higher than the norm.

Figure 11. Impact of matrix question percentage on partial rate.

Having more than 15 to 18% of the questions in matrix form also increases partial rates, corresponding well with the decreased survey rating at about this mark.

3. Percentage of Pages Sped

A respondent is considered to have sped on a page if he or she completed the page more than two standard deviations faster than the typical time spent on that page. The higher the percentage of speeding, the lower the level of engagement. Please see figures 12 and 13 on the following page.
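The speeding rule above can be sketched as follows. This is an assumption-laden illustration (function names, data shapes, and the choice of mean minus two sample standard deviations as the cutoff are ours, not TrueSample's published implementation):

```python
from statistics import mean, stdev

def percent_pages_sped(respondent_times, page_times_all):
    """Percentage of pages on which a respondent sped.

    respondent_times: seconds this respondent spent on each page.
    page_times_all: per-page lists of times from all respondents, used to
    estimate the typical time and its spread for each page.
    A page counts as sped if the respondent finished it more than two
    standard deviations faster than the typical (mean) time for that page.
    """
    sped = 0
    for page_idx, t in enumerate(respondent_times):
        times = page_times_all[page_idx]
        threshold = mean(times) - 2 * stdev(times)
        if t < threshold:
            sped += 1
    return 100.0 * sped / len(respondent_times)
```

In practice one would want robust estimates of the typical page time (for example, excluding the respondent being scored), but the per-page threshold idea is the same.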

Figure 12. Impact of matrix question percentage on percent of pages sped.

More speeding than normal occurs when the survey has more than 10 to 15% of the questions in matrix form.

Figure 13. Impact of survey length on percent of pages sped.

When the survey length goes beyond 20 minutes, the level of engagement decreases, as respondents tend to speed on a greater percentage of pages.

While not all design variables or engagement measures are shown here, the results of this study indicate that all design variables, in general, have a similar nonlinear relationship with the engagement measures.

Summary of Conclusions and Next Steps

The ability to predict the effect of various survey design variables on respondent engagement has tremendous potential to help survey designers maximize engagement and minimize bad behaviors. The key points that researchers should take away from this analysis are as follows:

  • Simple survey design rules based on a single parameter (such as survey length) are inadequate for improving respondent engagement. It is the effect of multiple design variables acting together that determines respondent engagement.
  • In many cases, it is possible to compensate for the adverse effects of certain design characteristics through adjustment of others. In other words, a longer survey may not always result in lower engagement levels if there are other engaging features in the survey. Conversely, an overall short survey may have a negative impact on the respondent if other features are not engaging, for example if he or she needs to spend more time working on each page or question in the survey than is typical.
  • Although individual survey design variables can be used to predict respondent engagement, an accurate assessment of overall engagement requires a multivariate predictive model that looks at many design variables and many engagement measures to predict overall respondent engagement and data quality.

The ultimate goal for TrueSample is to offer clients deeper insights and guidance for designing better surveys—leading to more-reliable, meaningful results and sharper, more impactful business decisions. To that end TrueSample is conducting additional studies aimed at evaluating the effects that various design variables and engagement measures have on the quality of data. TrueSample is also continuing to enhance the capabilities of SurveyScore to predict respondent engagement prior to deployment of a survey. By quantifying and predicting the true impact of survey design on respondent engagement and data quality, TrueSample will dispense with the myths and the hypotheses surrounding better survey design and give researchers the tools they need to ensure reliable and accurate research results.
