
Analyze the Small Data of Training


A solid training evaluation program collects the right small data to ensure your training program is on the right path to delivering business results.

Analytics over Miami

Last week I tried to escape the cold of the Washington, D.C., area by attending an analytics conference in Miami Beach, Florida. The conference focused on data – big data and small data (more on that later) – and ways to draw meaning out of lots of numbers. Yet as I prepared for the trip, I ignored a very basic data set right at my fingertips: the Weather Channel. I was happy envisioning a warm and sunny Miami Beach with temperatures in the very pleasant 70s, and I packed and dressed accordingly. Instead, the temperature hovered in the high 50s. That's warm for D.C. in February, but not for somebody suited up for beach weather. I was genuinely cold as I walked the streets of Miami Beach and sat in the overly air-conditioned convention center. So don't feel too jealous.

The Small Data of Training

Last week's conference was sponsored by MicroStrategy, a leader in data analytics and visualization. Aside from a few hours one frigid evening spent on the luxurious yacht Usher (which received a cameo in the movie Entourage), the week was all about how to collect, analyze, make sense of, visualize, and report on data to answer business problems. My company has integrated MicroStrategy into our evaluation toolset, so I was interested to see how tools like this are used across other industries. Not only are major industry players around the world using analytics and business intelligence engines like MicroStrategy (Facebook, Netflix, Zurich, Coach, Adobe, State Farm, eBay, Kelley Blue Book, and others), but I was pleased to see several Federal agencies touting their use of the MicroStrategy platform to help make sense of their data (e.g., TSA, USPS, and the State Department).

But are analytics tools like MicroStrategy overkill for us in the training world? We don't all deal with the massive data sets typical of big data programs (like those digging through historical weather data for evidence of global climate change or parsing Facebook traffic for indicators of consumer buying preferences). In the training world, the information we handle about our training programs is often not very large. We tend to work with databases on the megabyte or maybe gigabyte scale, not the truly enormous terabyte-sized (or greater) volumes found in other analysis activities. (By the way, want to impress folks at the next cocktail party you attend? Ask them how long they think it would take to download a 1-yottabyte file from the internet. Answer: 11 trillion years!)
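
Curious whether that number holds up? The arithmetic is easy to check. Here's a quick Python sketch; the link speeds are illustrative assumptions of mine, not figures from the conference:

```python
# Back-of-the-envelope: time to download 1 yottabyte at a given link speed.
# Link speeds below are illustrative assumptions.

YOTTABYTE_BYTES = 10**24                  # decimal definition of 1 YB
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def download_years(num_bytes: float, bits_per_second: float) -> float:
    """Years needed to move num_bytes over a link of the given speed."""
    return (num_bytes * 8 / bits_per_second) / SECONDS_PER_YEAR

# Even a 1 Gbps fiber link needs roughly 254 million years...
print(f"1 Gbps:  {download_years(YOTTABYTE_BYTES, 1e9):,.0f} years")

# ...while a dial-up-era link (~23 kbps) lands near the 11-trillion-year mark.
print(f"23 kbps: {download_years(YOTTABYTE_BYTES, 23e3):,.0f} years")
```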

The "big data" of training is, in fact, small data. And there's nothing wrong with that. Small data can be defined as data that "…connects people with timely, meaningful insights (derived from big data and/or 'local' sources), organized and packaged – often visually – to be accessible, understandable, and actionable for everyday tasks." ("Defining Small Data," smalldatagroup.com, 10/18/2013). In that article, former McKinsey consultant Allen Bonde put it this way: "Big data is about machines and small data is about people." Training is ultimately about people! In the words of the training evaluation models we've previously discussed, we train people (Level 1); assess people's learning (Level 2); help people apply what they learn (Level 3); examine whether and how people, systems, and other organizational factors work together to produce business impact (Level 4); and calculate the Return on Investment, or ROI, of the training (Level 5).
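
Since we'll be returning to these levels throughout this series, here's a minimal Python reference table; the short labels are my own paraphrase of the Kirkpatrick/Phillips terminology, not official model wording:

```python
# The five training evaluation levels discussed above, as a simple
# lookup table. Labels are paraphrased, not official model wording.
EVALUATION_LEVELS = {
    1: "Reaction - how trainees respond to the training",
    2: "Learning - what trainees actually learned",
    3: "Application - transfer of learning to the workplace",
    4: "Impact - business results from people, systems, and training together",
    5: "ROI - return on investment in the training",
}

for level, label in EVALUATION_LEVELS.items():
    print(f"Level {level}: {label}")
```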

Training’s data may be small, but we still need to analyze it, and tools like MicroStrategy and others can help us do that. Over the next few weeks, I’ll take a look at the small data we can use to assess our training programs at each level to ensure we’re on the right path toward success. Let’s start now with the lowest levels.

Level 0: Activity

Wait – Level 0? We never talked about that before! Jack Phillips of the ROI Institute describes Level 0 as "basic data such as how many people are involved in the course and the cost per person." (See here for an interview with Jack Phillips on measuring and achieving ROI in eLearning.) One agency we worked with was proud of the Excel-based dashboards it had created to show the effectiveness of its training programs. When we looked at those dashboards, which were actually static scorecards, we found a robust set of charts and statistics on the number of students trained, the quantity of courses and classes delivered, training development costs, and the costs and earned value of training deliveries against budget. Impressive charts, to be sure, but reported in isolation they left upper management without full confidence that the training was making its mark in the agency. It's true we have to collect and report this data – including the cost data, if we are to calculate an ROI – but we must also link it to the higher evaluation levels to solidify our case that training has an impact on business results.
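
To make the Level 0 roll-up concrete, here's a minimal sketch using Python and pandas. The records and column names are hypothetical, not the agency's actual data:

```python
import pandas as pd

# Hypothetical Level 0 activity records; columns are illustrative only.
deliveries = pd.DataFrame({
    "course":        ["Ethics 101", "Ethics 101", "Data Skills"],
    "students":      [24, 31, 18],
    "delivery_cost": [12_000, 14_500, 22_000],   # dollars per class delivery
})

# Classic Level 0 metrics: classes delivered, students trained, cost per person.
level0 = deliveries.groupby("course").agg(
    classes=("students", "size"),
    students=("students", "sum"),
    total_cost=("delivery_cost", "sum"),
)
level0["cost_per_student"] = level0["total_cost"] / level0["students"]
print(level0)
```

The point isn't the tooling – Excel can do the same math – it's that counts and costs like these tell only a Level 0 story until they're joined to higher-level results.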

Level 1: Reaction

We all know the Level 1 smiley sheet, or student feedback questionnaire. As with Level 0, if this is all we do, there's limited value to our training evaluation program and little ability to show impact. The temptation in the Level 1 survey is to focus exclusively on feedback about the training just delivered. But without making the survey too onerous, a few questions can be integrated to get a sense of how well the just-completed course contributes to the bigger organizational picture. For example, a question or two can be asked about strategic areas such as:

  • How the training was perceived to help alignment to agency mission;
  • How well the training program addressed agency areas of concern in the Federal Employee Viewpoint Survey (FEVS). Several FEVS questions speak directly to training (e.g., Employee Development items like "I am satisfied with training received for present job" and "I am given opportunity to improve my skills"). Agencies concerned about their employee development scores can interject a question or two at the end of the enterprise-wide Level 1 survey to get a pulse check – and possibly a predictor of future FEVS results – on this area of strategic concern;
  • How much the trainee and his/her supervisor discussed the purpose of attending the training program before it occurred (a predictor of transfer of knowledge to the workplace);
  • What the student's expectation is for future application or impact (a data point that can potentially be correlated to actual application or impact later on – see the sketch after this list).
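
As one example of mining these questions for insight, here's a minimal sketch – with made-up ratings and hypothetical field names – correlating a Level 1 expectation question with a later Level 3 application rating:

```python
import pandas as pd

# Hypothetical paired ratings per trainee on 1-5 scales.
responses = pd.DataFrame({
    "expected_application": [5, 4, 3, 5, 2, 4, 3, 5],  # Level 1, end of course
    "observed_application": [4, 4, 2, 5, 2, 3, 3, 4],  # Level 3, 90-day follow-up
})

# Pearson correlation: do end-of-course expectations predict
# actual on-the-job application months later?
r = responses["expected_application"].corr(responses["observed_application"])
print(f"Pearson r = {r:.2f}")
```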

There is recognition throughout the training industry that we have overemphasized Level 1 data. A good evaluation plan, however, will gather enough information at Level 1 to help tell the overall story of program impact, as well as provide the ability to "debug" issues that arise during content development and delivery. But the evaluation plan should not stop there.

Future small data dives

In future blogs, we'll dive deeper into the types of information we can gather for Levels 2–5 and how to analyze them for insights into the performance of our training programs. We'll then talk about reporting our findings. We are starting to build a "chain of evidence" (Kirkpatrick lingo), or "chain of impact" (Phillips ROI lingo), that shows a link between the training and business results.

In the meantime, it might be good to get further educated on what "data analytics" really means. It's not just a matter of collecting data, but of performing the analysis to understand what the data can teach our organization. To that end, you might consider attending the free GovLoop training entitled "How to Understand Government Data Analytics" coming up on Tuesday, March 8. See here for registration information.

Until next week – be sure to keep a lookout for the small data available for analysis in your training programs (and remember to check the Weather Channel before you pack for your trip).

Eugene de Ribeaux is part of the GovLoop Featured Blogger program, where we feature blog posts by government voices from all across the country (and world!). To see more Featured Blogger posts, click here.

