
How to Use Employee Opinion Surveys to Predict Workforce Outcomes


The Federal Employee Viewpoint Survey (FEVS) is a powerful resource for evaluating federal employees’ opinions on topics that directly affect agency performance. Yet FEVS data is not limited to gauging employee sentiment. HR professionals can also use employee opinions to predict real employment outcomes, directly comparing survey responses with HR data to verify how perceptions manifest in reality.

Such a direct comparison can produce several important results. It can validate employee satisfaction or concerns where FEVS responses are supported by HR data, and it can identify areas where better publicizing workplace opportunities would help, in cases where employee opinions run contrary to observable HR data. FEVS responses can also help agencies predict and proactively address actual outcomes that otherwise may not have been known. For example, a survey analysis indicating that women disproportionately perceive a shortage of promotional opportunities may prompt an agency to compare the actual number of promotions granted to women and men.

Many federal agencies now have access to the anonymized individual-level responses from the 2014 FEVS. Using this raw data, agencies can evaluate significant disparities between demographic groups of employees, including various diversity and EEO demographics as well as position-specific demographics such as supervisory status, location (HQ or field), and education level. Some agencies also have access to sub-agency response data and even field-office-specific responses, providing an additional layer of potential comparisons.

The sheer volume of data in individual-level results can make it challenging to start making meaningful comparisons. With 84 questions and at least 11 demographic groups to compare, it helps to combine questions into a smaller number of like categories, such as grouping questions related to work/life balance, compensation, and promotional opportunities.
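As a rough illustration, the grouping step might look like the following sketch in Python with pandas. The file name, column names, and the question-to-category mapping are all hypothetical; the actual groupings are an analyst's judgment call.

```python
import pandas as pd

# Hypothetical mapping of FEVS question columns to broader categories.
question_categories = {
    "Q34": "diversity",
    "Q42": "work_life_balance",
    "Q63": "compensation",
    "Q67": "promotional_opportunities",
}

# One row per respondent, one column per question,
# ratings coded 1 (least favorable) through 5 (most favorable).
responses = pd.read_csv("fevs_2014_individual.csv")  # hypothetical file name

# Treat ratings of 4 or 5 as positive, then average within each category
# to get each respondent's share of positive answers per category.
positive = responses[list(question_categories)] >= 4
category_scores = positive.T.groupby(question_categories).mean().T
```

The resulting table has one row per respondent and one column per category, which is a convenient shape for the group-level comparisons that follow.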

From there, data visualization software can contrast how different groups responded to each category of questions. One helpful dashboard uses a sliding color scale to depict the percentage of positive responses in each group, providing a rapid indication of groups that differ.
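A minimal sketch of such a view, using seaborn's heatmap as one option among many; the groups, categories, and percentages below are invented for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Illustrative (invented) percent-positive figures: one row per
# demographic group, one column per question category.
pct_positive = pd.DataFrame(
    {
        "work_life_balance": [72, 68],
        "compensation": [55, 49],
        "promotional_opportunities": [61, 44],
        "diversity": [70, 52],
    },
    index=["Male", "Female"],
)

# A diverging color scale makes groups whose responses differ stand out.
sns.heatmap(pct_positive, annot=True, fmt=".0f",
            cmap="RdYlGn", vmin=0, vmax=100,
            cbar_kws={"label": "% positive"})
plt.title("Percent positive responses by group and category")
plt.tight_layout()
plt.show()
```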

To measure whether those differences are meaningful, and to further focus the analysis, a non-parametric statistical test, such as a chi-squared test or Fisher’s exact test, can evaluate whether differences in results between groups are statistically significant. For example, the counts of positive and negative responses for men and women can be placed into a contingency table and compared with the counts one would expect if gender were uncorrelated with the response to a specific question or category of questions.

Example contingency table comparing gender and responses to a diversity question on the FEVS (using hypothetical numbers):

FEVS Question 34: “Policies and programs promote diversity in the workplace (for example, recruiting minorities and women, training in awareness of diversity issues, mentoring).” A rating of 5 is the most favorable; a rating of 1 is the least favorable.

| | 1 | 2 | 3 | 4 | 5 | Total responses |
|---|---|---|---|---|---|---|
| Male employees | 100 (104 fewer than expected) | 150 (106 fewer than expected) | 200 (4 fewer than expected) | 300 (70 more than expected) | 400 (144 more than expected) | 1150 |
| Female employees | 300 (104 more than expected) | 350 (106 more than expected) | 200 (4 more than expected) | 150 (70 fewer than expected) | 100 (144 fewer than expected) | 1100 |
| Total responses | 400 | 500 | 400 | 450 | 500 | 2250 |

The expected value for a cell is obtained by multiplying the column total by the row total and dividing by the total number of responses. A chi-squared test on this table produces a p-value below .01, indicating a strong association between gender and response: men give more positive responses, and women more negative responses, than would be expected given the number of men and women in the sample population.
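For readers who want to reproduce this, scipy is one common choice; the test takes only a few lines, using the hypothetical counts from the table above.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts from the table above
# (rows: male, female; columns: ratings 1 through 5).
observed = np.array([
    [100, 150, 200, 300, 400],   # male employees
    [300, 350, 200, 150, 100],   # female employees
])

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi-squared = {chi2:.1f}, p-value = {p_value:.2g}, dof = {dof}")
# `expected` holds row total * column total / grand total for each cell,
# the same expected counts quoted in the table above.
print(expected.round(1))
```

For small 2x2 tables where chi-squared assumptions break down, scipy.stats.fisher_exact offers the Fisher’s exact test mentioned earlier.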

Finally, significant disparities in responses between groups should trigger an analysis of the agency’s actual HR data to determine whether the perceptions can be verified. Oftentimes, FEVS data can preemptively identify potentially problematic disparate-impact EEO issues and prompt the agency to address them. Conversely, a FEVS analysis can point out strengths, such as revealing that new employees are less likely to leave the agency than in previous years.

By verifying whether these perceptions are borne out in actual HR data, agencies gain a powerful tool that can predict and confirm real outcomes directly affecting mission readiness.

Matthew Albucher is part of the GovLoop Featured Blogger program, where we feature blog posts by government voices from all across the country (and world!). To see more Featured Blogger posts, click here.
