Response Quality uses machine learning to scan your survey results and flag poor-quality responses. For example, if a survey-taker fills a comment box with gibberish, or selects option A for every question, we'll flag those responses so you can focus on high-quality survey data.
To see Response Quality results, a survey must be in English, have more than 50 responses, and those responses must have a status of complete. Responses collected before March 2, 2020 can't be reviewed for quality.
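The eligibility rules above can be summarized in a short check. This is a hedged sketch of the rules as described, not a SurveyMonkey API; the function name and signature are illustrative only:

```python
from datetime import date

# Responses collected before this date can't be reviewed for quality.
CUTOFF = date(2020, 3, 2)

def eligible_for_quality_scan(language, complete_responses, collected_on):
    """Illustrative check: English survey, >50 complete responses,
    collected on or after the cutoff date."""
    return (
        language == "English"
        and complete_responses > 50
        and collected_on >= CUTOFF
    )
```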
To turn Response Quality on:
If your survey is still collecting responses, we’ll automatically rescan new responses for quality about every 24 hours.
Once your responses are scanned, you can scroll through Individual Responses to find responses flagged as Poor Quality.
Our machine learning models are trained to look for a few different types of poor-quality survey responses:
| Flag | Description |
| --- | --- |
| Profanity | The survey response includes a curse word. |
| Straight-lining | Multiple questions were answered quickly with the same answer option or in a pattern. For example, the survey-taker chose option B for every question. |
| Speeding | The survey-taker took significantly less time to complete the survey than other people. |
| Gibberish | The response includes a text answer made up primarily of nonsensical words, like "asdfjkl". |
| Length | The response is significantly shorter than other responses. |
| Not a Full Word | The response includes a 1-character answer that's not a word. |
| Copy-Pasted Answer | The response matches the question text. |
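The machine-learned flags above aren't published as code, but a few of them can be approximated with simple heuristics. This is a hedged sketch that assumes nothing about SurveyMonkey's actual models; the rules and thresholds are illustrative only:

```python
def is_straight_lining(answers):
    """Flag if every multiple-choice answer is the same option."""
    choices = [a for a in answers if a is not None]
    return len(choices) > 1 and len(set(choices)) == 1

def is_gibberish(text, vowel_ratio_threshold=0.2):
    """Crude check: flag text with very few vowels, like 'asdfjkl'.
    The 0.2 threshold is an illustrative assumption."""
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return False
    vowels = sum(c in "aeiou" for c in letters)
    return vowels / len(letters) < vowel_ratio_threshold

def is_not_full_word(text):
    """Flag 1-character answers that aren't words (e.g. 'x', but not 'a' or 'I')."""
    t = text.strip()
    return len(t) == 1 and t.lower() not in {"a", "i"}
```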
When you export your results as a CSV, XLS, or SPSS file, flagged responses appear in a separate Response Quality file that lists the respondent ID and the flag types for each response identified as poor quality.
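If you want to work with the exported Response Quality file programmatically, a short script can tally the flags. This is a hedged sketch: the column names ("Respondent ID", "Flags") and the semicolon delimiter are assumptions, so check them against the header of your actual export:

```python
import csv
from collections import Counter

def summarize_quality_file(path):
    """Read a Response Quality CSV export and return the flagged
    respondent IDs plus a count of each flag type.
    Column names here are assumed, not documented."""
    flag_counts = Counter()
    flagged_ids = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            flagged_ids.append(row["Respondent ID"])
            for flag in row["Flags"].split(";"):
                flag_counts[flag.strip()] += 1
    return flagged_ids, flag_counts
```

A quick summary like this makes it easy to see, for example, whether most of your flagged responses are speeders or gibberish before deciding how to filter them.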