By the time you read this blog post, if you are a federal government employee, you have probably already received an email from your agency asking you to participate in the annual job satisfaction and commitment survey to nowhere known as the Federal Employee Viewpoint Survey (FEVS). Don't participate this year. Why? This survey does not work. It is, in the words attributed to Albert Einstein, the very definition of insanity: "Doing the same thing over and over again and expecting different results."
Ask yourself this simple question: if the FEVS works, why are commitment and satisfaction levels among federal government workers in 2014 at their lowest levels since the inception of the survey in 2003? Is it a coincidence that citizen satisfaction with federal government services for 2014 is at its lowest level since 2007, or that customers ranked Wall Street and insurance companies higher in customer satisfaction than the federal government?
Here is the familiar refrain you probably hear from your agency:
It is that time of the year again.
• Let’s do an engagement survey.
• We need hard data.
• Let’s share the data with employees; they will know what to do.
• After all, they are professionals, they can handle it.
• Our scores were bad but at least we were pretty close to similar agencies.
And here is how employees respond:
• What is in it for me?
• What will be the repercussions?
• Not again, another engagement survey?
• Why should I cooperate this time?
• What happened to the last survey?
• Why didn’t anything change from last year?
Here are three things I hear from frontline managers when they complain about the FEVS:
• They feel disconnected from the entire process.
• They feel inordinate pressure to improve the individual commitment and satisfaction scores of their direct reports while broader organizational concerns go unaddressed.
• They feel like the survey is an add-on activity to an ever-expanding list of things they are required to do.
Do you see a theme in the comments above? The FEVS has such an accusatory tone that managers and employees feel like they are the problem when, in fact, the organization is the problem. The entire process has the weighted feel of trying to control people by getting them straightened out, as opposed to a more transformational approach that allows employees to find their own paths by discovering purpose in their work.
It does not help the FEVS that government is still stuck in the 20th century. According to consultants David Bradford and Allen Cohen, the federal government is too hierarchical, too rigid, and too invested in the status quo. They recommend it move into the 21st century, where organizations are flatter, more transparent, and more innovative.
In the meantime, another year will pass, and the Office of Personnel Management and its beltway partner in crime, the Partnership for Public Service, will recommend that everyone emulate the National Aeronautics and Space Administration (NASA) as the best place to work in the federal government, as they have for the last three years, ignoring the fact that NASA has one of the whitest and most male-dominated workforces among all federal agencies.
So farewell, my friend FEVS; it was nice to know you. You titillated us and brought us to the edge of our seats in great expectation that you could change our fragile lives as federal government employees. Yet in the end, you were like so many other conversations about improving the commitment and satisfaction of federal employees: plenty of people talk about it, but no one can seem to do a damn thing about it.
If an employer has constant turnover, like, say, a fast-food outlet, then it can make sense to conduct regular surveys of employees. What drives the timing of surveys, and the need for more recent information, is anticipated discontinuity of staff.
But when an employer has fairly stable staff, with perhaps some limited internal churn but minimal voluntary departure, apart from a 3-4% retirement rate, then the timing ought to be driven by how long it takes to address whatever the survey highlights. If it is likely to take a year and a half to make sense of a result and implement an intervention, then what is the point of gathering more information right away?
Moreover, as I'm fond of reminding folks, data is like soup. It is not enough to simply boil it until the contents are gummable; you need to let it simmer a while until it acquires a coherent flavour. Similarly, data requires time to analyze before the patterns begin to emerge. In that sense, it's like those old "magic eye" 3D pictures, where you would stare and stare until finally a hidden image emerged. One needs to allow time for insights to emerge from survey data. Simply having a lengthy list of how-are-we-doing percentages does NOT foster useful or actionable insight.
The Canadian equivalent of the FEVS is conducted every three years. Is this the "perfect" spacing? I don't know, and won't make any claims. But I do think that an annual FEVS is simply too frequent, and may undercut the usefulness and value of the exercise.
Case in point: We ask a number of questions in ours about harassment in the workplace. The questions are not perfect, nor sufficient to write a dissertation with, but we did do something I think was smart. We reasoned that, from the first opportunity people had to complete the survey until the point where the tabulated results came back to agencies, it would be about 5 months. It would then take about another 3 months for the agency folks in HR to digest the agency-specific results and discern where the issues were. Add another 3 months for senior management to receive, vet, and launch any proposed initiatives to address the harassment numbers. So, from the point where people might say they were experiencing harassment of some type from some source, to the point where the organization had taken concrete action in response, it would likely be a little under a year. We rounded that up, to be on the safe side, and asked employees about any harassment they had experienced *during the previous two years*. In other words, we gathered the data in a manner that could clearly distinguish between "before" and "after", in the hope that, if we were doing the right things in response, we would be able to see change and *know* that it was change resulting from the intervention.
Assuming that an agency DOES find reason for concern in some area, and does the responsible thing and takes appropriate action, how does resurveying every year permit clearly distinguishing "before" from "after"? And if you can't accurately discern before from after, how can you tell whether your interventions had the desired effect? It's a bit like serving a child a scalding-hot bowl of soup and asking every 30 seconds, "So, how do you like the soup?" You have to wait for it to cool down before the question has any validity or utility.
So, as much value as I think the exercise has in principle, conducting it annually strikes me as wasted effort and cost. A longer interval between surveys would permit greater value to be extracted. Is a boycott of this year's FEVS the appropriate response? I don't know. Doubtless, like any boycott, it would be incomplete, and enough data would still be gathered to lead decision-makers into thinking they had enough to act on, even if the data were misleading. My own gut sense, which you can cordially disagree with, is to get the best data you can on each occasion and advocate separately for a different timing. Unfortunately, since such exercises tend to be in support of accountability, performance pay for management, and the like, the momentum tends to favour, mistakenly I think, an annual exercise.