Have you ever suspected that some of your survey responses do not make sense (e.g., two people giving the exact same answer to an open-ended question)? It is quite possible that bots have completed your survey.
Bots are automated scripts that complete a survey multiple times in order to:
- Harvest survey data to sell or use for malicious purposes.
- Disrupt survey operations.
- Skew results and potentially hurt competitors.
- Transmit malware and viruses through links.
- Test security protections.
- Collect survey incentives.
- Gain a commission.
Basing decisions on skewed survey responses can create unnecessary problems. To prevent skewed results, try:

- Using a “Completely Automated Public Turing test to tell Computers and Humans Apart” (CAPTCHA), keeping in mind that your survey software or website might not support it.
- Avoiding posting surveys on social media sites.
- Requiring people to contact you for a survey link.
- Providing a one-time personal link to the survey.
- Creating trap questions, e.g., multiple-choice questions that show survey creators whether respondents actually read the questions.
- Asking the same question multiple times.
- Adding a honeypot field that bots (but not humans) can see, such as a field with red text on a red background.
- Avoiding incentives, since they attract bots.
- Asking survey respondents to verify their e-mail addresses.
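The honeypot idea above can be sketched as a simple server-side check. This is a minimal illustration, not tied to any particular survey platform; the field name `website` and the form-data dictionary are assumptions for the example. Because the field is hidden from humans (for instance, with red text on a red background or CSS), any value in it suggests an automated script.

```python
# Hypothetical server-side honeypot check. The hidden field name
# ("website") and the dict-based form data are illustrative assumptions.

def is_probable_bot(form_data: dict) -> bool:
    """Return True if the hidden honeypot field contains any value.

    Humans never see the field, so only automated scripts fill it in.
    """
    return bool(form_data.get("website", "").strip())

# Example submissions:
human = {"q1": "Great service", "website": ""}
bot = {"q1": "Great service", "website": "http://spam.example"}

print(is_probable_bot(human))  # False
print(is_probable_bot(bot))   # True
```

Responses that trigger the check can be dropped silently, so bot authors get no feedback that their submissions were detected.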
If you are afraid that bots may have completed your survey, you need to review your responses to exclude the bots’ answers. You will never know for sure if a response was created by a bot, but you can determine probable bot activity. Look at:
- Length of survey completion. A very short length could indicate a bot.
- When surveys were completed. Many responses at the same time could mean bots.
- IP addresses. Multiple responses from the same IP address could indicate bots.
- Answers to certain questions:
  - Trap questions – Incorrect answers (e.g., choosing B when the question asked respondents to choose C) suggest a bot.
  - Repeat questions – Different answers to the same question could indicate a bot.
  - Honeypot field questions – Any answer indicates a bot, since humans cannot see the field.
  - Open-ended questions – Look for unexpected duplicate or very similar answers. Nonsensical answers are also telling: if a question about lions at a zoo gets the response “The lion is blue and lives in Kalamazoo,” it most likely came from a bot.
  - Personal questions – Inconsistent answers to these questions could indicate bots.
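The review steps above can be sketched as a small screening script. The record layout, field names, and the 60-second threshold are assumptions for illustration; real survey exports will differ, and flagged responses indicate probable (not certain) bot activity.

```python
from collections import Counter

# Hypothetical response records; field names and values are illustrative.
# Submission timestamps could be clustered the same way as IPs.
responses = [
    {"id": 1, "ip": "203.0.113.5", "seconds_to_complete": 18,
     "open_answer": "The food was great"},
    {"id": 2, "ip": "203.0.113.5", "seconds_to_complete": 17,
     "open_answer": "The food was great"},
    {"id": 3, "ip": "198.51.100.7", "seconds_to_complete": 240,
     "open_answer": "Service was slow but friendly"},
]

MIN_SECONDS = 60  # assumed threshold: faster completions are suspicious

# Count how often each IP address and each open-ended answer appears.
ip_counts = Counter(r["ip"] for r in responses)
answer_counts = Counter(r["open_answer"] for r in responses)

def bot_flags(r: dict) -> list:
    """Return the list of bot indicators triggered by one response."""
    flags = []
    if r["seconds_to_complete"] < MIN_SECONDS:
        flags.append("completed too quickly")
    if ip_counts[r["ip"]] > 1:
        flags.append("shared IP address")
    if answer_counts[r["open_answer"]] > 1:
        flags.append("duplicate open-ended answer")
    return flags

for r in responses:
    print(r["id"], bot_flags(r))
```

Responses 1 and 2 trigger all three flags, while response 3 triggers none; in practice a flagged response still deserves a human look before it is excluded.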
Miriam Edelman, MPA, MSSW, is a Washington, D.C.-based policy professional. Her experience includes policy work for Congress. Miriam’s undergraduate degree is from Barnard College, Columbia University, with majors in political science and urban studies. She has a master’s in public administration from Cornell University, where she was inducted into the national honorary society for public administration. She has a master’s of science in social work (focusing on policy) from Columbia University. She is a commissioner of the DC Commission on Persons with Disabilities. Miriam aims to continue her career in public service. She is especially interested in democracy, civic education, District of Columbia autonomy, diversity, health policy, women’s issues, and disabilities.