In last week’s post, Why Mandatory Training Rocks, I asked GovLoopers to weigh in on whether a few statements about learning are facts or myths.
Here are the results and some supporting evidence. Thanks to all who participated.
1. Participants have to like training to learn from it (Myth)
@ryanburdick wrote that this one sits in "a weird place between true and false. I've noticed a lot of people go in with a negative perspective of 'this course is going to stink,' and then as it gets started, they get into it. I think there has to be some desire of 'oh, I should learn,' but learners don't necessarily have to 'like it' in order to learn from it."
I agree, Ryan. While liking training can help improve student motivation (and ultimately improve learning outcomes), it’s not essential for learning. Boy, I sure hated math class in high school, but I definitely still learned and performed. According to the experts, “there is in fact a positive correlation between [learner] ratings and learning. But the correlation was very small! In fact, it was too small to have any practical value.” (Ruth Clark, 2010).
2. Content presented is content learned (Myth)
@sressler commented, "I think there are a lot of learning experiences that should count but often don't." Good point, Steve, and one useful distinction to make here is between formal and informal learning. When we talk about training, we're often talking about formal, classroom-style courses with specific learning objectives and an instructor, but there are so many other environments in which we actually learn...like GovLoop blogs, for instance.
@acatania added that “Content USED is content learned. If someone can put their new knowledge to action, then they’re demonstrating some level of understanding of the concepts.” Absolutely! We see the best learning results from practice spaced over time, and the ability of a learner to bridge the gap from a training environment to the real world (aka “training transfer”) is a crucial element of actually changing performance.
And then there's the memory rule of seven: "the typical human brain can hold about seven pieces of information for less than 30 seconds!" (John Medina, 2008). Given that we forget more than we learn, it's no wonder that learners can't remember the third bullet point from slide 53 a week after training.
3. Everyone has a different learning style (Myth)
We've all heard of learning styles, right? You know, you answer a few questions on an assessment, and you're pronounced a visual, auditory, or kinesthetic learner. The result often supports our gut instinct about how we learn most effectively, and the implication is that training professionals can maximize learning by matching instructional interventions to each learner's style.
While it is true that we all have our own learning preferences (e.g., I like to get my hands dirty and just practice a new task), the research does not support the claim that we have learning styles (e.g., "I am a kinesthetic learner, so I actually learn better with this instructional method, independent of content or experience level") that are significant to training design.
Dr. Will Thalheimer of Work-Learning Research even went as far as to offer a reward for anyone who could validate this type of claim: "I will give $1000 (US dollars) to the first person or group who can prove that taking learning styles into account in designing instruction can produce meaningful learning benefits." Since then, the reward has shot up fivefold with pledges from several other experts in the field, but no prize has been awarded. Read more.
4. Learners should be able to “test out” of a training course (Fact)
The easy answer here is “it depends” since there are a variety of issues–think compliance courses–that affect whether or not it makes sense for learners to be able to test out of training. However, I’d make the case for “fact” since a learner’s prior knowledge/experience with training content is so important (Ruth Clark, 2010).
Why should we spend our limited time and money training people on something they already know? The best training programs take into account learner prior knowledge rather than treating all learners as the same, and pre-tests can help improve our efficiency by indicating which learners need the most attention.
@MatthewGarlipp wrote that it also "depends on the type of testing. Please, no more SATs." The real reason that "testing out" is such a tough sell is that we often don't design assessments that actually measure what they're supposed to measure. Don't you just love those "all of the above"-style tests?
5. eLearning Developers like making you miserable (Myth)
It turns out, most of us are pretty decent people... Training is great when it promotes learner participation and engagement to close a valid skill or knowledge gap. If training is boring to develop, then it's definitely going to be boring for learners, and there are some awesome training professionals in government committed to making training fun and engaging.
There are a number of roles important to eLearning development (instructional designers, media developers, programmers, subject matter experts, project managers, QA, etc.), and organizations vary in how well they enable and resource their developers.
Dave Barton is part of the GovLoop Featured Blogger program, where we feature blog posts by government voices from all across the country (and world!). To see more Featured Blogger posts, click here.
So I agree on most of your points, David, but number 4 is my sticking point. You and I have had great meat-space conversations on the complexities of attitudinal training and skill exercise. So let me bring them into cyberspace.
I think you're right for things that are mechanical in nature. But when we move toward soft skills and techniques, I think the test-out option becomes far less practical. As an instructor, I've sat through the same course material focusing on leading discussions with NPS park visitors literally a dozen times. Each time, I feel my "muscles" strengthening and my thought processes gelling more and more.
Can I effectively use a fire extinguisher? That’s easy to test.
Can I effectively deploy the skills to include American citizens in dynamic dialogues with their fellow citizens on weighty topics, offering space for multiple perspectives and opportunities for revelation? That's a training instance where "testing out" would actually be detrimental, IMHO. The muscle memory for that skillset takes a long, long time to build, far longer than many mechanical skills. I'd argue that an active, engaged learner could sit through that course 100 times and still consider themselves a novice with work left to do on their skillset.
John, what if the “learner” had a mentor or coach or (gasp) supervisor that could observe (with or without a checklist)? I would think the “learner” could test out through their performance.
Awesome, John, thanks. There are many different topics to train, and many ways to think about what kind of change we're targeting (e.g., skill, knowledge, or attitude), what kind of content we're working with (fact, concept, principle, procedure, process), and what complexity level is being considered. It sounds like every time you have a chance for "dynamic dialogue," there are some elements of practice and informal assessment, and the feedback makes you better the next time around.
Thanks for bringing this up. I’d love to hear what the community thinks.
Love it – curious your take Dave on my recent post about learning – https://www.govloop.com/helping-government-cross-chasm/