
#148740

Mark Hammer
Participant

I run into this all the time. Certainly don’t take my recommendation as gospel, but here’s how I look at it. A great many surveys need to have non-opinion alternatives to questions. Respondents want them, and expect them; sometimes because they are not in a position to have an opinion (e.g., an employee asked about an organization they have only recently joined), sometimes because they can imagine a lot of mitigating factors that leave them in the grey zone. And they REALLY don’t like being forced to give an opinion they feel does not reflect their sentiments or understanding. Keep in mind that a basic premise of opinion surveys is that the question will have an element of ambiguity to it.

The non-opinion option could be “Don’t know”, “Not applicable”, “Neither agree nor disagree”, “Neither satisfied nor dissatisfied”, or any of a variety of others. In general, though, the more ways you give people to weasel out of giving an opinion, the greater the likelihood they’ll take one. For some types of questions, you could see a third or more of respondents opt out of giving an opinion. In some respects, that’s data too***, but that may not be how management sees it.

One of the things management typically wants from such a survey is a rolled-up % positive / % negative “score” for each item. Having a middle option tends to get in the way of that.

There are a few compromises to consider. One is to leave the middle option in place, but tabulate results in terms of “% of those who gave an opinion”, so that management still gets the results in the preferred format. So, if 40% gave a positive response, 30% gave a negative response, and 30% sat on the fence, then among those who gave any sort of opinion at all, 57% (40/(40+30)) were positive.
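If it helps to see that arithmetic spelled out, here is a minimal sketch in Python. The 40/30/30 split is just the hypothetical example above, and the function name is mine, not anything standard:

# Re-base the positive share on respondents who actually gave an opinion,
# i.e. drop the neutral/DK share from the denominator.
def pct_positive_of_opinion_givers(pct_positive, pct_negative):
    opinion_givers = pct_positive + pct_negative  # neutral responses excluded
    return 100 * pct_positive / opinion_givers

print(round(pct_positive_of_opinion_givers(40, 30), 1))  # 57.1% positive among opinion-givers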

One of the perks of doing it that way is that you can report only the one number, without having to worry about misrepresentation. If there are three numbers (40% pos, 30% neg, 30% neutral), you just know that somebody is going to presume that if only 40% were positive, then 60% must have been negative. Instead of having to shove the % neutral in their face, just consistently report the data in terms of opinion-givers, and the one number will suffice, since the complementary result can be accurately derived. That also permits you to reasonably compare results from the 5-pt and 4-pt scales. It’s not perfect, but it’s not that far from perfect. I’ve done it with very large datasets, and you’d be pleasantly surprised how little it messes up your data.
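And here is a quick sketch of why dropping the neutral category makes a 5-pt item roughly comparable to a 4-pt item: both collapse to a single “% positive among opinion-givers” figure. The response values are invented purely for illustration, and the cut-points (which codes count as positive or negative) are assumptions you would set to match your own scales:

from collections import Counter

# % positive among respondents who chose either a positive or a negative option.
def pct_positive(responses, positive, negative):
    counts = Counter(responses)
    pos = sum(counts[r] for r in positive)
    neg = sum(counts[r] for r in negative)
    return 100 * pos / (pos + neg)

# Hypothetical 5-pt item: 1-2 negative, 3 neutral, 4-5 positive.
five_pt = [5, 4, 4, 3, 3, 2, 1, 4, 3, 5]
# Hypothetical 4-pt item: 1-2 negative, 3-4 positive, no middle option.
four_pt = [4, 3, 3, 2, 1, 4, 3, 2, 4, 3]

print(round(pct_positive(five_pt, positive={4, 5}, negative={1, 2}), 1))  # neutral 3s excluded
print(round(pct_positive(four_pt, positive={3, 4}, negative={1, 2}), 1))  # directly comparable figure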

Another option is to use visual presentation as your support. When respondents see the scale, the opinion responses are clustered together and any non-opinion alternatives are visually set off to the side. This makes it clearer to the respondent: if you really don’t have an opinion about this item, but still want to show us (and yourself) that you’ve read and answered the question, put your check-mark here. The exec gets their way, by having a 4-pt scale that tacitly declares THESE response choices as more valid than THOSE, and the respondent doesn’t feel boxed in.

The caveat is that neutral options can mean different things to different people, depending on their number, location, and phrasing. If the only non-opinion option is “neither agree nor disagree”, the people who check it off will include those who think the question is not relevant to them, those who don’t feel they have enough information to make an informed choice, and those who can think of reasons to be positive and reasons to be negative but can’t make up their mind. A “neither agree nor disagree” in the middle of the scale, paired with a “Don’t know / N.A.” off to the side, tends to be interpreted by respondents as indicating ONLY equivocal sentiments, with the DK/NA used for everything else.

Make sense?

(*** On a large-scale employee survey I worked on a decade ago, which asked some questions about the fairness of hiring competitions the employee had participated in during the previous 3 years, the proportion who indicated “Not applicable” became a useful oblique indicator of which employee groups were more, and less, actively pursuing other jobs.)