Neutral Options on Perception Surveys

  • #148714

    Benjamin Rebach
    Participant

    We’re running perception surveys for a Federal client. For the past two years, we have used a 5-point Likert scale scored 1-5. An executive has expressed a preference for continuing the surveys with no neutral option.

    I would assume we would handle this by removing the neutral option and scoring the remaining options 1-2 for the negative side and 4-5 for the positive side, so as to maintain the 5-point scoring system. This would allow us to continue with the 5-point scale goals, although the change in survey methodology would need to be noted when comparing against earlier scores.
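
    To make the idea concrete, here is a minimal sketch of the recoding I have in mind (the response labels are my own illustration, not our actual wording):

    ```python
    # Hypothetical response labels; our actual survey wording may differ.
    SCORES = {
        "Strongly disagree": 1,
        "Disagree": 2,
        # The neutral choice is simply not offered; 3 is intentionally unused,
        # so the remaining options keep their original 1-5 values and scores
        # stay loosely comparable with earlier waves.
        "Agree": 4,
        "Strongly agree": 5,
    }

    def score(response: str) -> int:
        return SCORES[response]

    print(score("Agree"))  # 4
    ```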

    Please note this is an assumption on my part; we are now in the early stages of negotiating this request. I am leaning towards advising against adjusting the scoring system, but I would like some feedback so as to give better-informed advice. My primary concerns:

    • Data incompatibility after the scoring change
    • Considering a neutral perception as ‘invalid’
      (This seems to be an example of the client wanting results that best suit the client needs, but are not reflective of the actual business environment.)

    Any thoughts on my situation or the use of ‘neutral’ scoring on surveys would be appreciated.

  • #148740

    Mark Hammer
    Participant

    I run into this all the time. Certainly don’t take my recommendation as gospel, but here’s how I look at it. A great many surveys need to have non-opinion alternatives to questions. Respondents want them, and expect them; sometimes because they are not in a position to have an opinion (e.g., an employee asked about an organization they have only recently joined), sometimes because they can imagine a lot of mitigating factors that leave them in the grey zone. And they REALLY don’t like being forced to give an opinion they feel does not reflect their sentiments or understanding. Keep in mind that a basic premise of opinion surveys is that the question will have an element of ambiguity to it.

    The non-opinion option could be “Don’t know”, “Not applicable”, “Neither agree nor disagree”, “Neither satisfied nor dissatisfied”, or any of a variety of others. In general, though, the more ways you provide people to weasel out of giving an opinion, the greater the likelihood they’ll take one. For some types of questions, you could see a third or more of respondents opt out of giving an opinion. In some respects, that’s data too***, but that may not be how management sees it.

    One of the things management will want from such a survey is a rolled-up %positive / %negative “score” on items. Having a middle option tends to get in the way of that.

    There are a few compromises to consider. One is to leave the middle option as originally used, but tabulate results in terms of “% of those who gave an opinion”, such that management gets the results in the preferred format. So, if 40% gave a positive response, 30% gave a negative response, and 30% were sitting on the fence, of those who gave any sort of opinion at all, 57% (40/(40+30)) were positive.
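
    As a quick sketch of that roll-up, using the same illustrative counts:

    ```python
    # Sketch of the "% of those who gave an opinion" roll-up, using the
    # illustrative counts above (40 positive, 30 negative, 30 neutral).
    def pct_positive_of_opinion(positive: float, negative: float) -> float:
        """Share of positive responses among respondents who took a side."""
        return 100 * positive / (positive + negative)

    print(round(pct_positive_of_opinion(40, 30)))  # 57, neutrals excluded
    ```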

    One of the perks of doing it that way is that you can report only the one number, without having to be concerned about misrepresentation. If there are three numbers (40% pos, 30% neg, 30% neutral), you just know that somebody is going to presume that if only 40% were positive, that must mean 60% were negative. Instead of having to shove the % neutral in their face, just consistently report the data in terms of opinion-givers, and the one number will suffice, since the complementary result can be accurately derived. That also permits you to reasonably compare results from the 5-pt and 4-pt scales. It’s not perfect, but it’s not that far from perfect. I’ve done it with very large datasets, and you’d be pleasantly surprised how little it messes up your data.
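
    For instance, here is a rough sketch of how a 5-pt wave and a 4-pt wave could be put on the same footing (the response vectors are made up purely for illustration):

    ```python
    # Made-up response vectors; 3 is the neutral midpoint on the 5-pt
    # scale, and the 4-pt scale never offers it.
    five_pt_wave = [1, 2, 3, 4, 5, 5, 3, 4]
    four_pt_wave = [1, 2, 4, 5, 5, 4]

    def pct_positive(responses):
        """% positive among opinion-givers; neutral 3s simply drop out."""
        pos = sum(r in (4, 5) for r in responses)
        neg = sum(r in (1, 2) for r in responses)
        return 100 * pos / (pos + neg) if (pos + neg) else float("nan")

    print(pct_positive(five_pt_wave))  # comparable across both formats
    print(pct_positive(four_pt_wave))
    ```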

    Another option is to use the visual presentation to your advantage. When respondents see the scale, the opinion responses are clustered together, and any non-opinion alternatives are visually set off to the side. This makes it clearer to the respondent: if you really don’t have an opinion about this item, but you still want to show us and/or yourself that you’ve read and answered the question, put your check-mark here. The exec gets their way, by having a 4-pt scale that tacitly declares THESE response choices more valid than THOSE, and the respondent doesn’t feel boxed in.

    The caveat is that neutral options can mean different things to different people, depending on their number, location, and phrasing. If the only non-opinion option is “neither agree nor disagree”, the people who check it off will include those who think the question is not relevant to them, those who don’t feel they have enough information to make an informed choice, and those who can think of reasons to be positive and reasons to be negative but can’t make up their mind. A “neither agree nor disagree” in the middle of the scale, paired with a “Don’t know / N.A.” off to the side, will be interpreted by respondents as indicating ONLY equivocal sentiments, with DK/NA to be used for everything else.

    Make sense?

    (*** On a large-scale employee survey I worked on a decade ago, which asked some questions about the fairness of hiring competitions the employee had participated in during the previous 3 years, the proportion who indicated “Not applicable” became a useful oblique indicator of which employee groups were more, and less, actively pursuing other jobs.)

  • #148738

    Benjamin Rebach
    Participant

    Thanks for the detailed answer. I’m not sure if we have a valid need for more than one neutral option in a group separated from the bias options, but I can definitely see how this would be useful in other survey scenarios. I will keep it in mind.

    I really like the “% of those who gave an opinion” alternative – that may be the best compromise option we have thus far. It lets us preserve data compatibility with the previous surveys, while still allowing us to present the executive with a view of the data which suits his needs/desires.

  • #148736

    Mark Hammer
    Participant

    About 11 years ago, I collaborated with a colleague at the Merit Systems Protection Board on a comparison of Canadian and American federal employee survey results. We (the Canadian half) had used a 4-pt/no-middle scale, and the American surveys had used a 5-pt scale with a neutral middle option. We used multiple blind raters to flag survey items that were interchangeable between surveys (unanimity was required for an item to be considered the same across surveys). When the American survey data was converted to %-of-those-with-an-opinion, to be roughly equivalent to our 4-pt scale format, it was pretty remarkable just how close the two federal public services were in their responses; often within 2-3 percentage points of each other for the same items, and sometimes identical.

  • #148734

    John C. Hinkle
    Participant

    Mark Hammer gave a good answer. I’ve floated around my Agency a bit in recent years, and have been amazed at the poor understanding of surveys (e.g., 10-point scales; rating requests like “are we providing better service than you ever expected”). Some of these surveys were not even home-grown, but were provided by consultants who claimed customer survey provision as one of their services. We as government customers don’t know what we don’t know. My suggestion is to first attempt to educate your customer on your concerns and the rationale for using odd-numbered scales. Anticipate his questions and concerns. Armed with some of Mark’s ideas on ways of presenting the results, a reasonable “executive” should accept and understand your recommendations, and appreciate the education.

  • #148732

    David J. Levin
    Participant

    A “Neutral” position’s utility on a Likert scale is context-dependent.

    1) The more divisive the issue, the more respondents will want it as an option, to avoid committing to a side.

    2) The better known the issue, the less a respondent needs to rely on neutral as a safe response.

    In the case of government employee/government customer satisfaction surveys, 4- and 5-point scales will be largely equivalent when you are dealing with low-controversy, well-known topics. As topics move away from this, the difference in results between a 4- and a 5-point scale will increase.

    So taking a hard methodological line may not be necessary, if you know the two will be functionally equivalent for your topic.

  • #148730

    Mark Hammer
    Participant

    Thanks, John.

    It took me years to “reconfigure” (“set aside” is perhaps too harsh) much of what I had learned in grad school about scales and statistics, and focus on providing surveys that: a) first and foremost had respectable (though maybe not sublime) measurement properties, b) made it easy for respondents to “find their answer” (i.e., minimize mental effort for them), and c) yielded quantitative results that were easy for management to think with.

    There is a sort of ethical and pragmatic obligation for (a) and (b) to always trump (c). You can’t make effective decisions on the basis of data with poor measurement properties, and you can’t get decent survey data if people are unwilling to answer or confused. But, that aside, there is a big difference between the way one approaches survey data for the purposes of academia, and how one approaches it for facilitating policy or budgetary decision-making. Sometimes, you just have to suppress your instincts and ask yourself “How is management going to decide on the basis of this information? How can I assist them in not wasting time and energy, and in avoiding bad decisions?”

  • #148728

    John C. Hinkle
    Participant

    Good points/questions. When my wife was working on her PhD in Communications (after decades in the “real” world), I got a lot of insight into what you say. I found it really interesting how what started out as a 20-question survey that really appeared to be asking all the right questions could be whittled down to ~7 questions that reliably ferreted out statistically significant findings once they had been trial-run on a test group. The rest of the questions were unnecessary. Of course, when you do this (get the survey down to the minimum necessary questions), you have to throw out surveys where the respondent didn’t answer every question, because you no longer have a complete picture of that respondent. That happens more often with a 4-point scale, because the respondent may not like any of the choices.
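
    In analysis terms, that amounts to listwise deletion. A minimal sketch, assuming the data sits in a pandas DataFrame (the column names and values are hypothetical):

    ```python
    # Minimal sketch of listwise deletion on a shortened instrument.
    import pandas as pd

    responses = pd.DataFrame({
        "q1": [4, 5, None, 2],
        "q2": [3, None, 4, 4],
        "q3": [5, 4, 4, 1],
    })

    # Keep only respondents who answered every retained item.
    complete = responses.dropna()
    print(complete)  # rows with any missing answer are discarded
    ```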

  • #148726

    Steve Ressler
    Keymaster

    Great answer David. Thanks!

    I agree on the divisiveness issue – when it’s controversial and I don’t have a strong opinion, I’d love to go neutral. The rest of the time (like a course evaluation), I’m just doing it really quickly, and I usually pick the middle or one step north of the middle, just because it’s solid and I hate filling out surveys.

  • #148724

    Bill Bailey
    Participant

    I agree with your premise. Neutral responses are lazy – just plain lazy. There is no real data except that the person giving the answer is somehow disenfranchised. So on second thought, maybe the data is useful after all – just not specifically for the questions asked.

  • #148722

    Mark Hammer
    Participant

    I’ve had the benefit of doing survey work with seniors, and I can tell you that there are plenty of questions where, the longer you’ve lived and the more you’ve been around the block, the easier it is to generate pros and cons about something, rather than just pros or cons. Them folks are “grey” for a reason. So even when the material is not inherently contentious or divisive (to use David’s descriptor), there is a strong desire on the part of the respondent to provide an equivocal “opinion”, and respondents simply resent that you haven’t given them that opportunity.

    But let us make an important distinction between knowing something about respondents, and managerial thinking. If we are expressly interested in the respondents themselves, then neutral responses, and responses just one side or the other of the midpoint on a 9-pt Likert scale, are of interest to us. Management, on the other hand, wants data that allows them to decide “what should I do now (or start planning to do) about this?”. Which means they want survey results that help them flag problems, and identify what doesn’t demand attention at the moment. And to serve that particular master, there has to be clear-cut happy/unhappy, negative/positive data, which is why Ben is getting a “nudge from upstairs” to eliminate the neutral option. So the purpose of the survey data directs one towards a given approach to scale selection. Most of our formal training is really with respect to a different purpose (academic inquiry), so we never receive training in how to serve the other purposes.

    Are neutral responses “lazy”? Well, they can be, in the same way that “Other, please specify” at the end of a list is usually a license for respondents not to read the rest of the list (leaving the investigator with the burden of recoding all those “others” into valid responses on the existing list). But it CAN be the case that a respondent really doesn’t have anything to say about a given area. That’s reality, and as a survey designer you’d like to know that the respondent has seen the question and not inadvertently skipped over it.

    Case in point. When our younger son was in kindergarten, we got a request from the Dept. of Education to complete a parent survey. The survey starts by asking what grade the child is in (we had a kid in grade 9 at the same time, but I elected to answer on behalf of the younger one). It then goes on to ask how satisfied I was with the curriculum my child was receiving. “Wait, they have a curriculum in kindergarten?” I thought. “I didn’t know that.” The survey insisted on an opinion, did not provide a don’t-know or neutral option, and required that I answer that item before proceeding to the next one. Rather than lie, and unable to answer any other questions, I bailed on the survey. You probably want to avoid that sort of scenario.

  • #148720

    Benjamin Rebach
    Participant

    I’d like to thank everyone for responding. I took my question here and to Twitter, and received a much better response here.

    The discussion of neutral opinions has been enlightening, but the issue of data incompatibility if we were to switch response options still seems to outweigh the potential benefits.

    One thing I am trying to keep in mind when considering next steps is the idea of ‘sunk costs,’ from my PMP training. Basically, if using neutral answers does not meet the client’s needs, then the two years of data we have thus far should be treated as a sunk cost, and should not factor into planning the way forward. But if the client’s needs (truly understanding the full range of perception in the surveyed population) don’t match the client’s desires (having less ambiguous data on which to base decisions), then the past survey data is still very valuable.

    In our situation, I believe the client needs to see the unfiltered data as well as the ‘neutral excluded’ view. I believe potential ambivalence and disenfranchisement need to be considered. Removing neutral options seems to be very valid for certain purposes, but I think our best course of action is:

    • Continue the survey as is, but provide neutral-excluded calculations in addition to the current calculations (a rough sketch follows this list). This will provide the client with another view of the data without sacrificing our existing work, and we can use the existing data to produce neutral-excluded calculations for the past two years.
    • Discuss with the client the validity of neutral opinions, possible conclusions which can be drawn from ambivalent opinions, and possible routes for further exploration.
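
    To make the first bullet concrete, here is a rough sketch of producing both views from the same raw responses (the data is made up purely for illustration):

    ```python
    # Made-up 5-pt responses; 3 = neutral midpoint.
    from collections import Counter

    responses = [1, 2, 3, 3, 4, 4, 5, 5, 5, 3]
    counts = Counter(responses)

    # Unfiltered view: the full distribution, neutrals included.
    for value in sorted(counts):
        print(f"{value}: {100 * counts[value] / len(responses):.0f}%")

    # Neutral-excluded view: % positive among opinion-givers only.
    pos, neg = counts[4] + counts[5], counts[1] + counts[2]
    print(f"% positive of opinion: {100 * pos / (pos + neg):.0f}%")
    ```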

    Of course, the decision is not mine alone. I am simply making a recommendation. My thanks again to everyone who helped me form this recommendation.

  • #148718

    John C. Hinkle
    Participant

    But you have to let us know how it went with the client!!! 🙂

  • #148716

    Thomas K. Perri
    Participant

    With my limited experience, this is my take on your quandary. I can understand why your Executive wants to change to a no-neutral scoring scale; I believe the general consensus is that people tend to default to the number 3 too much of the time. At least, that is the general feeling in my office when we talk about executing surveys.

    So what should be done about the behavior of scoring answers as neutral (3s)? Does it need to be addressed in the cultural environment in which the survey is being used?

    Since we are talking about the behaviors of the people taking your surveys, there are some assumptions you have already agreed to. I feel that your Executive is asking you to agree to a new assumption: that the neutral perception is invalid.

    I would ask your Executive:

    • Why do you want to remove the neutral response from your surveys?
    • Does he or she see a discontinuity between the results of the survey and the “feel” of the cultural/business environment?
    • Where did this idea of removing the neutral response come from?
    • Is there any validity to the feeling that the actual business environment is not truly reflected in your survey results?

    (The same 2 questions asked in differing ways)

    Some other useful things to consider could be:

    • Are the questions formulated or crafted well enough to elicit responses other than neutral?
    • Did this Executive have a hand in the creation of the survey or surveys now in use?

    Or should you just accept the data you are currently collecting, with the understanding that, when talking about customers’ perceptions of your company, if someone in the company messed up and a customer had a bad experience, you should (and will) hear about it – even more so with anonymous results. And positive perceptions are usually about “the people” customers interact with, not usually about the company itself. Apply some of the ideas in the great replies you have gotten, along with a bit of common sense, and don’t forget why the survey is being used in the first place: keeping the customer in mind and how you can best serve them.
