
10 Best Practices to Utilize When Designing an Effective Survey

Evaluations are becoming an important part of almost every government activity, especially the administration of grant programs. Many grant program evaluations use surveys to collect information from both benefactors and beneficiaries. For these surveys to be effective, they should follow a few simple best practices outlined below.

1. Brief, Smooth Introduction: This should provide the respondent with background information on the survey, including why it is being conducted and who is distributing the survey.

2. Easy, Nonthreatening Start (and Closed-Ended Questions): This warms the respondent up for the harder questions asked later in the survey. Placing difficult questions at the front often turns the respondent off, leading to a lower response rate.

3. Delay Sensitive Issues until Later: Respondents are more likely to answer these questions if they are later in the survey.

4. Demographics Last: Demographic questions are the most sensitive to ask. Placing them at the end maximizes the sense of anonymity and makes the respondent more comfortable.

5. Short Transitions: When changing the question format or subject, include a short lead-in that carries the respondent into the next set of questions. This reduces confusion and improves the flow of the survey.

6. Consistent Series Answer Format: Continuity in answer types increases the respondent's ability to finish the survey quickly and reduces potential confusion, which in turn improves the overall response rate of your survey.

7. Limited Use of Open-Ended Questions: Open-ended questions can damage the response rate and create coding and interpretation problems, so limit them within surveys. Reserve them for new topics that have not been researched in the past, or for employee venting.

8. Length as Short as Possible: Respondents are far more likely to complete a short survey than a long one.

9. Extensive Testing and Polishing: Pretesting the survey catches issues that could decrease its validity. This is extremely important, because reduced survey validity leads to problems with the entire evaluation.

10. Fair Question Ordering and Framing: Make sure the order of the questions does not reflect your own biases. Review the questions multiple times, and have numerous individuals look over the survey beforehand, to ensure that personal biases are not built into the survey in ways that could distort results.
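One lightweight safeguard against order effects is to randomize the order of questions within a block across respondents, so no single ordering biases the results. A minimal sketch in Python (the question texts are hypothetical placeholders):

```python
import random

# Hypothetical question block; the texts below are illustrative only.
questions = [
    "How satisfied are you with the application process?",
    "How clear were the reporting requirements?",
    "How responsive was program staff to your inquiries?",
]

def randomized_order(items, seed=None):
    """Return a shuffled copy so each respondent sees a different order.

    Passing a seed (e.g., a respondent ID) makes the order reproducible
    for that respondent while still varying across respondents.
    """
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    return shuffled

# Each respondent gets an independently shuffled block.
print(randomized_order(questions, seed=42))
```

In practice, most web-survey platforms offer block-level randomization as a built-in option; the point of the sketch is simply that every respondent sees the same questions, just not in the same fixed order.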

What Do You Think?

What are some other good practices to utilize when constructing surveys?

Mark Hammer

To your list I would append the following:

1) Mimic “the paper experience” as much as you can. With everyone migrating to web-based surveys, we seem to quickly forget what it is that people do with paper surveys and how those surveys reflect what they want and need.

If you receive a paper survey of more than a page or so, you will undoubtedly look ahead to see what it involves, how invasive or interesting it is, and how much of your time it is likely to take up.

You will likely flip back and forth if the survey is of any appreciable length, to see how THIS question is different from THAT one.

Questions related to a given theme or topic will be physically/visually grouped in a way that encourages one to think about them cohesively, so that the responses are properly connected.

What we’ve done in the past is to post a link to a full PDF of the entire survey, so that people can complete it one or two questions per screen with whatever software you’re using, but still have the whole thing to refer to, as they would a paper document. If you’re willing to put your cards face up on the table, people trust you more.

2) Let people tell their story. Quite often, what people know or remember is coded in their own minds as a sort of narrative, and getting it out of them involves nudging them through the steps that make sense to them within the framework of that personal narrative. First I did this, and then there was that part, and then we did that, etc. And much like being interrupted by a third party midway through telling a joke, where you are strongly motivated to finish telling it, once people start to tell you their story, they will want to get to the end and the “punch line”, rather than simply letting things fade out.

3) “Other (please specify)” is a license to not read the question. Unless the list you are providing them is very poorly crafted, what people enter in response to this end-of-list option is generally something that is already on the list…if only you could figure out what it is. Sadly, what people type or scribble in is shorthand for something in their minds, but not necessarily crafted to be crisply delineated in yours. Use an “other” option only if absolutely forced to. When omitting “other”, have the question stem refer to “of the following”, so that respondents know you are well aware there might exist other possibilities but you aren’t exploring them at the moment.

4) Try to avoid “which of these is most important” questions, especially when the list for ranking goes beyond a couple of choices. WAY too much cognitive burden. Far easier and faster to give people separate rating scales and have them indicate how important/relevant each thing in the list is. They’ll zip right through it, and you’ll be able to calculate the average distance between the ratings (i.e., how much less important is B than A?).
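The payoff Mark describes — ratings telling you *how much* less important one item is than another, which a rank order cannot — takes only a few lines to tally. A hedged sketch in Python, with made-up 5-point ratings from four hypothetical respondents:

```python
# Hypothetical importance ratings on a 5-point scale
# (1 = not important, 5 = very important); the data are invented.
ratings = {
    "A": [5, 4, 5, 4],
    "B": [3, 3, 4, 2],
    "C": [2, 1, 2, 3],
}

# Mean rating per item.
means = {item: sum(vals) / len(vals) for item, vals in ratings.items()}

# Unlike a forced ranking, ratings yield an interval: B is not just
# "second", it is 1.5 scale points less important than A on average.
gap = means["A"] - means["B"]

print(means)  # {'A': 4.5, 'B': 3.0, 'C': 2.0}
print(gap)    # 1.5
```

A forced ranking of the same three items would have produced only the order A > B > C, discarding the size of the gaps entirely.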

5) Let people know where you’re going. Sections, and sometimes even individual questions need to be framed and purposed for the respondent to understand what you’re looking for. This is an expansion of Samantha’s point #1, but goes further. People often approach surveys and other questionnaires like a concept-formation task: “what are they interested in knowing? what are they really getting at?”. Sometimes it is important for them NOT to know what you’re getting at, but often, the more they understand the purpose of Question X, the better quality the data derived. It is true that people will not always read all instructions and preambles, but for those who need to know in order to deliver, you need to provide it. Clarity of mission usually yields better performance.

6) Avoid jargon and policy language as much as possible. Phrase questions on the basis of how people think, NOT in terms of what management or the researcher wants to be able to say at the end of it. A decade or so back, during the preliminary stages of development of what would eventually become the Federal Employee Viewpoint Survey, I received a draft for review, and the first question on the survey asked for an opinion about the “human capital strategy” in one’s agency. I wrote back and noted that the person who cuts the grass on the White House lawn or who sorts the mail would be unlikely to think in such terms. I highly doubt that I have that much influence over OPM, so I imagine I was but one of a great many others who had similar qualms about such phrasing.

7) Gather as much contextual information as you can, given the intended length. What people tell you is often, if not always, a reflection of their circumstances. So cogitate a bit about what circumstances might be relevant and ask about them. Every bit as important as demographic info…which is itself a context.

8) Try to include at least one question that shows you “get it”. In other words, you are sensitive to the real-world circumstances they operate under, no matter how awkward to discuss. Kind of a “Hey, it’s ME you’re talking to” item. That question can be prefaced by disclaimers to give the respondent “permission”. The net effect is to increase perceived face validity. Not always called for, but sometimes exactly the right thing to do.

9) Start with a model. Government surveys of citizens or other external stakeholders, or employees, can all too often end up as “string balls” designed by committees that gather up everybody’s pet idea of what ought to be in it, with minimal coherence, and sometimes not enough information about what you really need to know. All good research starts with a hypothesis, framed inside of a model. Work out your circles and arrows model and predictions in advance, so that you can compare the resulting list of items against that model and determine if you have enough info for each of those circles to address the potential connections between them.

10) Try to think in terms of how hard it will be for people to get to their answer. Do the choices you’ve offered, including any scales employed, let the person identify their response quickly and easily? For example, a symmetrical 10pt scale might provide greater precision, but be perceived as burdensome by the respondent who is trying to figure out if they should indicate 7 or 8. Conversely, sometimes Yes/No or something equally simple can be treated as effortful if it demands that a person ignore nuance or shades of grey that they know exist. Again, what resolves your choice is the question of how hard it is to know how to respond. Questions and scales that are effortful to answer can make a one-pager seem interminable, and questions where finding your response is easy can make a 30-pager seem to pass in an eyeblink.
