Nationwide Federal Employee Viewpoint Survey


This topic contains 3 replies, has 2 voices, and was last updated by  Mark Hammer 8 years, 8 months ago.

  • #106617

    Henry Brown

    Tremendous amount of useful data which will require a lot of digesting!

    The Federal Employee Viewpoint Survey (FedView survey) is a tool that measures employees’ perceptions of whether, and to what extent, conditions characterizing successful organizations are present in their agencies. Survey results provide valuable insight into the challenges agency leaders face in ensuring the Federal Government has an effective civilian workforce and how well they are responding.

    Sections included
    # Current Survey
    # What is the FedView Survey?
    # Getting Started
    # Analyzing the Data
    # Using Results
    # Agency Rankings
    # Results
    # Published Reports
    # FAQs

    The Current Survey section:

    President Obama has made it clear: the Federal government needs to deliver results for the taxpayers. Our civil servants are the people who deliver those results, and we at the U.S. Office of Personnel Management (OPM) are doing everything we can to make them the best, most productive workers in the world.

    The Federal Employee Viewpoint Survey (formerly the Federal Human Capital Survey) has been updated to gather more useful data that will help us improve our workplaces and increase productivity. The survey, now done annually, helps us listen to our employees and focus on employee perceptions that drive job satisfaction, commitment, engagement, and ultimately contribute to the accomplishment of agency missions.

    Publishing the results of the survey is only the beginning. The next step is to develop a “Blueprint for Results.” In a new initiative, OPM will provide customized support to help agencies use their survey data to drive organizational change.

    Over a quarter-million government employees responded to the survey this year, and there is much good news to report. Employees are more confident in their leaders, and have increased respect for their honesty and integrity. They are proud to work for the Government and feel an increased sense of personal accomplishment. The vast majority believe their agency is accomplishing its missions and would recommend it as a good place to work. We hope to build on these results to increase employee engagement in improving agency operations.

    Identifying and exposing problem areas, while at times uncomfortable, is essential to improving government operations. Performance management, including the management of poor performers, and the promotion process are areas of concern. We’ve added a new section on Work/Life to better understand the impact of these programs. They should be given extra consideration; significant room for improvement is possible.

    In keeping with the Obama administration’s commitment to Open Government, transparency, and accountability, we will make the survey data more readily available to the public. OPM’s goal is to make the Federal government the model employer for the 21st century.

    On behalf of President Obama, I’d like to extend our gratitude to all employees who participated, and our deepest appreciation for the work our people do each day for the American people.


    John Berry

  • #106623

    Mark Hammer

    Is this the Federal Human Capital Survey renamed, or is it a wholly different instrument? I’m confused.

    A few words of advice to those who may end up using this data. I base this on the Canadian experience with a similar (and similarly-intended) omnibus HRM survey we have used since 1999, and which I have been involved with intimately since then. Our PS is not quite as large as yours, but we did collect data from around 100k employees in over 70 occupational groups spread across 78 organizations in the first 3 cycles, and over 160k in the most recent one:

    1) Pay special attention to WHO is responding, and who is telling you their story. Our experience has been that, as the proportion of recent hires (and share of those answering the survey) goes up and down, so do the results. Those with less tenure tend to respond more positively on almost everything, and if a greater (or lesser) share of those responding this time around, or in agency X vs everywhere else, are younger recent hires, you can expect to see differences in survey results. Some things that may seem like “evidence of progress” are nothing of the sort, and merely a reflection of changing internal composition of your agency. At the same time, if your hiring has gone through a lull in recent years, what may initially seem like cause for alarm may simply be exactly what you’d expect with that shift in average tenure.

    Similarly, agencies vary in the typical sorts of jobs and working conditions. Some jobs are the sort where innovation is expected, while others are the sort where it is appropriate for innovation and divergence from protocol to be discouraged (e.g., border guards). Some are more likely to be removed from HQ, where all sorts of career and development opportunities abound, so you can expect them to be a little more negative than those agencies sitting squarely in DC, where the federal jobs fall like apples off the tree. Pay special attention to your organization’s geographic distribution (e.g., % of workforce in major urban centres and at HQ), and job-category composition.

    2) People get distracted by percentages, and percentage differences, and forget how many “heads” a percentage point represents. When I consult with individual organizations, or managers of units/directorates, I ask them “How many people would it require for you to think that something requires a policy change or initiative as the appropriate response, as opposed to simply meeting with folks, one on one?” Whatever number they give me, I translate that into a percentage for them. So, if they have survey data for 160 people (out of a 250-person organization), and they give me 20 people as the critical deciding point, then I set the threshold for treating a percentage difference as meaningful (either positive OR negative) at 12.5% (20/160). So, if the organization overall, or PS-wide average is 68% positive, and your unit is 63% positive, ignore it. Management is more likely to take action when the list of things to grapple with is small and manageable, so they appreciate the focus. And employees feel validated when management has zeroed in on the things that are clearly important to them (the employees, that is).
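    The "heads behind the percentage" rule above is simple arithmetic; here is a minimal sketch of it in Python (function names are mine, not part of any survey toolkit):

```python
def meaningful_threshold(critical_heads, respondents):
    """Convert 'how many people would it take to warrant a policy response'
    into a percentage-point threshold, as described in the post."""
    return 100.0 * critical_heads / respondents

def worth_acting_on(unit_pct, benchmark_pct, threshold):
    """Treat a unit's gap from the benchmark as meaningful (positive OR
    negative) only when it meets or exceeds the agreed threshold."""
    return abs(unit_pct - benchmark_pct) >= threshold

# The post's example: 20 people out of 160 respondents -> 12.5 points.
t = meaningful_threshold(20, 160)
print(t)                              # 12.5
print(worth_acting_on(63, 68, t))     # False: a 5-point gap is below threshold
```

So the 68%-vs-63% gap in the example gets ignored, exactly as advised, because it represents fewer heads than management said would trigger action.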

    3) Consider what a given question or cluster of questions means for your organization. You’d be surprised how differently the same questions can be interpreted across different contexts and mandates. OPM has provided some useful clusters and themes to start with, but those groupings don’t come from Mt. Zion, and the boundaries between them are very soft and fuzzy. They may very well be groupable in other ways and tell a more valid story about your unit/organization when so treated.

    4) If it is technically possible, get the results for whomever you wish to compare yourself to, whether that be everyone else, or everyone else in your organization, MINUS your results, so that your numbers are not included in the “average”.
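    That leave-your-own-unit-out comparison is easy to compute if you have per-unit counts. A hedged sketch (unit names and counts are hypothetical):

```python
def benchmark_excluding(unit, positives, totals):
    """Percent positive across all comparison units EXCEPT `unit`,
    so the unit's own responses don't pull the benchmark toward it."""
    pos = sum(p for u, p in positives.items() if u != unit)
    tot = sum(t for u, t in totals.items() if u != unit)
    return 100.0 * pos / tot

# Hypothetical counts: positive responses and respondents per unit.
positives = {"unit_A": 80, "unit_B": 60, "unit_C": 70}
totals    = {"unit_A": 100, "unit_B": 100, "unit_C": 100}

# unit_A compared against B and C only: (60 + 70) / 200 = 65.0%
print(benchmark_excluding("unit_A", positives, totals))  # 65.0
```

Note the difference from the naive all-units average (70% here): a large or unusually positive unit comparing itself against an average that includes itself will always look closer to "normal" than it really is.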

    There’s more, but I have to catch a bus. If I think of anything else tomorrow, I’ll add it.

  • #106621

    Henry Brown

    Yes it is the renamed and revamped Federal Human Capital Survey.

    Yes you bring very valid points to the table.

    Can’t speak to a whole lot of organizations, but if the agencies are doing this “correctly” then very little emphasis is placed even on the entire organizational responses. Much more emphasis is placed at the lowest level possible, and in the past there hasn’t been a lot of attention paid to those areas that are perceived as being “OK”, but more on areas that can stand improvement. And those areas are going to be subject to at least some “political” pressure: if management’s agenda is to implement some specific initiative, I suspect there will be a way to bias the attention paid to a specific section.

  • #106619

    Mark Hammer

    It’s often that micro-level analysis where the greatest risks lie, since such managers will often not be in possession of the data or technical expertise to help them make appropriate sense of their data.

    People can be worried about results that are completely explicable by “normal” things.

    Case in point. In 1999, our federal survey included our equivalent of IRS (who made up 25% of the overall sample at that time). In 2002, for several legal reasons (moving from being governed under one statute to being governed under another) they did not get included in the survey. The aggregate results for 1999 and 2002 showed a decline in the overall proportion who said they could get their work done during “normal working hours”, prompting concerns about a work-life balance issue amongst senior management. We re-examined the 1999 data, factoring out our revenue agency, and the 1999-2002 differences magically disappeared. Why? Because the sort of work done by folks in that agency is frequently of the sort where you just continue where you left off the next morning, and not the sort of projects-with-deadlines that are more common in other agencies and require extra time. Once you took those folks out of the 1999 data, the proportion of those remaining who DO have “projects-with-deadlines” jobs went up, and the between-cycle differences evaporated.
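    The compositional artifact described above can be reproduced with toy numbers (hypothetical figures, not the actual Canadian data): each subgroup's percent-positive is identical across cycles, yet the aggregate "declines" purely because a large subgroup dropped out of the second cycle.

```python
# Two subgroups with stable percent-positive; the large "revenue agency"
# group (continuous work, fewer deadline pressures) is in cycle 1 only.
cycle1 = {"revenue_agency": (25000, 0.80), "everyone_else": (75000, 0.65)}
cycle2 = {"everyone_else": (75000, 0.65)}

def overall_pct(cycle):
    """Aggregate percent positive, weighted by subgroup headcount."""
    n = sum(count for count, _ in cycle.values())
    pos = sum(count * pct for count, pct in cycle.values())
    return 100.0 * pos / n

print(overall_pct(cycle1))  # 68.75
print(overall_pct(cycle2))  # 65.0 -> an apparent drop with zero real change
```

No individual subgroup got any less positive; only the mix of respondents changed. This is why re-running prior-cycle numbers with the departed group factored out, as described above, made the "decline" disappear.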

    I can’t emphasize enough that interpretation of results from such surveys that leads to appropriate and effective managerial responses MUST factor in what the heck it is that people under them tend to do for a living, and how it might be different than other units. No statistical substitute for that.
