Federal Employee Viewpoint Survey


This topic contains 3 replies, has 2 voices, and was last updated by Mark Hammer 1 year, 12 months ago.

  • #173671
    Henry Brown
    Participant

    Press Release from OPM.gov

    In the spring of 2012, OPM asked 1.6 million Federal employees to provide their perspective on the business of government, and to tell us about their experience – what they see working, and what needs to be fixed. Over 687,000 answered the call, more than twice as many as any previous survey.

    For the first time since it began as the Federal Human Capital Survey, the Federal Employee Viewpoint Survey attempted to reach every full- or part-time, permanent, civilian government employee, with very few exceptions. Such a large data collection presents the opportunity to get the views of employees, making this the most inclusive survey to date.

    At the broadest level, employees continue to believe their work is important and are willing to contribute extra effort to get the job done. At the government-wide level, telework opportunities show a clear positive impact, with clearly higher engagement and satisfaction scores among teleworkers at all pay levels. Telework-eligible employees also grew as a population, from one out of four to one out of three Federal employees.

    However, stresses on public servants – including continued tight budgets and pay freezes – are reflected in our Global Satisfaction indicator, even while more than two-thirds of employees recommend their organization as a good place to work.

    At the agency level, the greater volume of responses collected this year will enable a closer look at their results. For the first time, agencies can dive deeper into their data and create customized reports. The real value in the FEVS is how it is used by agencies to improve services for the American people. I encourage managers and leaders at every agency to use the greater granularity offered in this year’s report to identify and learn from successful groups within their agency.

    OPM continues to work to make employee viewpoint survey information more readily available. As always, many results are available at our survey website: http://www.fedview.opm.gov.

    On behalf of President Obama, I want to thank the many participating Federal employees for sharing their insights on the survey, and for their continued dedication and service to America.

    Download the full report (caution: 10.7 MB file), or view the sections individually.

  • #173677
    Mark Hammer
    Participant

    Thanks for the heads-up, Henry. I try to follow the FEVS (formerly the FHCS) and similar surveys, so I’ll be sure to go through the report first chance I get. Having worked on endeavours like that since 1999, I have to say that the real value of such exercises comes not from looking at the percentages, as undoubtedly many will, but from seeing what things go together, and from looking at the various breakouts by demographic variables. Seeing that, say, 74% were positive on item A is not nearly as informative as learning that 86% were positive on item A when item B was also positive, but only 53% were positive on item A when item B was negative. It is fairly basic analysis, but it’s remarkable how many folks never even get to that level. (A rough sketch of that kind of breakout follows at the end of this post.)

    In any event, lots there to cogitate over.
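    A minimal sketch, not from the original thread, of the kind of breakout described above: percent positive on one item, overall and conditional on a second item. The column names, the response data, and the “4 or higher counts as positive” cutoff are all hypothetical, not FEVS conventions.

    ```python
    # Percent positive on item A, overall and conditional on item B.
    # Data and column names are hypothetical; responses are on a 1-5 scale,
    # with 4 ("Agree") and 5 ("Strongly agree") counted as positive.
    import pandas as pd

    df = pd.DataFrame({
        "item_a": [5, 4, 2, 2, 3, 4, 1, 5, 4, 2, 4, 5],
        "item_b": [4, 5, 1, 4, 2, 5, 2, 5, 4, 1, 2, 5],
    })

    df["a_pos"] = df["item_a"] >= 4
    df["b_pos"] = df["item_b"] >= 4

    overall = df["a_pos"].mean() * 100
    by_b = df.groupby("b_pos")["a_pos"].mean() * 100

    print(f"Positive on item A overall:         {overall:.0f}%")
    print(f"Positive on A when B also positive: {by_b[True]:.0f}%")
    print(f"Positive on A when B negative:      {by_b[False]:.0f}%")
    ```

    The same groupby pattern extends to the demographic breakouts (tenure, pay level, agency) mentioned above.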

  • #173675
    Henry Brown
    Participant

    Agreed.

    I would also add that I look at the trends. IMO, a small change in the beta is much more telling than the actual score. That was extremely difficult during the first year in which the Director of OPM dramatically changed the survey.
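    A minimal sketch, not from the original thread, of what comparing trend slopes rather than single-year scores might look like, reading “the beta” as the slope of a simple least-squares fit across survey cycles; the yearly percent-positive values below are hypothetical, not FEVS results.

    ```python
    # Compare the trend slope ("beta") across survey cycles rather than any
    # single year's score. Yearly percent-positive values are hypothetical.
    import numpy as np

    years = np.array([2008, 2010, 2011, 2012])
    pct_positive = np.array([66.0, 67.5, 66.8, 64.9])  # hypothetical scores

    beta, intercept = np.polyfit(years, pct_positive, deg=1)
    print(f"Trend slope: {beta:+.2f} percentage points per year")
    ```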

  • #173673
    Mark Hammer
    Participant

    Trends with an annual survey of this nature are tricky, for a few reasons:

    1) Not much WILL actually change over the space of a year, particularly since it takes a while to get the results back, and for management to develop any sort of strategic response and implement it. Any suitable interventions will have had very little time to take effect before the next survey gets done. That time stagger between survey launch, data analysis, and strategic interventions often gets lost in the mix.

    2) The composition of the survey population can change significantly between cycles, yielding results that look like change but actually reflect WHO is telling you this time, rather than a change in the work circumstances of the same people producing a change in what they tell you.

    To wit: in 1999, the Canadian equivalent included Revenue Canada (our IRS) in the survey population, a large organization that amounted to about 25% of all survey respondents. Later that same year, they became what we term a “separate employer”, and so were no longer considered part of the formal core public administration. In 2002, when we reran the survey, the percentage of federal employees who reported not being able to complete their work during regular working hours went up, along with some comparable items, and senior management was convinced there was a “work-life balance” problem. We reran the analysis of the 1999 data, minus the agency that had not been part of the 2002 survey, and lo and behold, the “problem” disappeared. Why? Because many of the people comprising that 25% of the 1999 survey were in jobs where you work your hours and simply pick up the next morning where you left off. That meant people in jobs where management comes back from a meeting at 4:30 and dumps a request on their desk that they desperately need done for another meeting at 9 AM the next day constituted a larger share of who remained in the survey. The conditions had not changed, only the relative representation of people in one set of conditions or the other.

    Given the substantial jump in number of respondents this year, I’d be very leery of interpreting any observed changes in percentage this or that, until I knew more about who was contributing to the results this time out, compared to last time. That’s not to say there are no real changes, ever. But one needs to consider a wide array of other reasons for apparent change in aggregate results before taking the “change” ball and running with it.

    I will also add that recent hires are nearly always 4-6 points more positive in their responses to a great many questions, in comparison to those with a couple of years or more of tenure. Hiring/recruitment cycles can also change the age demographic composition of the respondents, resulting in apparent changes. We saw such changes in our survey results when the proportion of respondents who were recent hires went from 12% to 19% and back down to 12%. The impact of such fluctuations across an entire public service is measurable, though not huge. But if a specific agency goes through a spate of recruitment, followed by a period of much less recruitment, you can expect the agency's results to fluctuate accordingly. (A small worked example of this mix-shift arithmetic appears at the end of this post.)

    Recruitment spurts can also affect results by producing a lot of folks who offer the tentative or non-opinion response, whether that is Neither Satisfied nor Dissatisfied, Don’t Know, Not Applicable, or something comparable. If they’re too new to know, they often bow out.

    Bowing out with a non-opinion response is also a function of distance from the top. When an organization gets big enough there IS no other response to a question like “How satisfied are you with the policies and practices of your senior leaders?” besides “How the heck should I know?”.

    3) You are absolutely correct that changes to a survey can have a big impact on results. Survey questions often predicate and qualify others down the line, in ways you wouldn’t normally think they do. So inserting new items can have an impact, as can changing the order of the very same questions. For example, if there are skip patterns, one can end up with a different subset of people answering an item across cycles. When one engages in post-hoc analysis of results, it is easy to get too distracted by the results themselves and forget to ask, “Hmm, was this item preceded/set up the same way this time as it was last time? Are the same sorts of folks likely to be answering it?”

    4) Finally, there are “psychological” limits to answering certain kinds of questions that place real limits on how good (or bad) the results can ever be. My favourite example was a question we asked for a number of years on how fairly people thought they were classified, relative to other public servants doing comparable work. People expect to be classified fairly. So they can be severely disappointed in that regard, somewhat disappointed, or have their expectations met. Precious few will have their expectations exceeded, so the percentage positive on a question like that will never go all that high; you’ll hit a ceiling somewhere in the mid-to-high 70s, which can never be exceeded, because that’s the way people think.
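    A minimal sketch, not from the original thread, of the mix-shift arithmetic in point 2: the aggregate percent positive moves even though neither subgroup’s own score changes, simply because the recent-hire share of respondents shifts (12% to 19% and back, with recent hires roughly 5 points more positive, as described above). The 70% baseline for longer-tenured staff is a hypothetical illustration.

    ```python
    # Aggregate percent positive under different recent-hire shares, with each
    # subgroup's own score held fixed. The 12%/19% shares and the ~5-point gap
    # come from the post above; the subgroup scores themselves are hypothetical.
    tenured_score = 70.0       # hypothetical percent positive, longer-tenured staff
    recent_hire_score = 75.0   # ~5 points higher, per the post

    for share_recent in (0.12, 0.19, 0.12):
        aggregate = (share_recent * recent_hire_score
                     + (1 - share_recent) * tenured_score)
        print(f"Recent-hire share {share_recent:.0%}: aggregate {aggregate:.2f}% positive")
    ```

    The small swing in the aggregate comes entirely from who is answering, not from any change in how either group feels about its work.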
