by Patrick Shane Gallagher, Ph.D. (ADL). Shane is the Senior Advisor for Instructional Technology to ADL. He is also a senior consultant to the NSF, supporting NSF research programs as an advisor on education and interoperability issues concerning Cyberlearning, Investing in Innovation (i3), and Race to the Top. Dr. Gallagher's prior research included assessing SCORM for capability gaps in supporting advanced pedagogical models.
If you care about what is being learned, then you should care about what is being assessed. In designing learning interventions, if we value something enough to have it learned, then we should assess it.
Consider what is typically assessed in most e-learning experiences, and it quickly becomes apparent that what is valued is mostly low-level skills, factual knowledge, and the memorization of procedures, typically described as lower-level learning outcomes. Very often, it seems, we value only how the learner feels about the learning experience, captured in "smile sheets" or Level 1 evaluations (which are not assessments at all). Even when assessments are employed as knowledge checks, quizzes, or end-of-unit tests built from multiple-choice, true/false, fill-in-the-blank, or matching items, these rote assessment methods cannot adequately assess higher-order learning outcomes.
Previously, I stated that learning experiences are models that allow higher-order learning outcomes to be realized. If the realization of higher-order outcomes is what we value in a learning intervention, then we must also have a way to determine when they are met; in other words, they must be assessed. Since the now-common assessment methods are not up to the job, the obvious choice is alternative assessments.
Alternative assessments are also known as performance-based assessments or authentic assessments. The academic literature draws nuanced distinctions among these labels, but for most practitioners they amount to the same thing. All are variants of performance assessment: all require students to generate rather than choose a response, and all use open-ended assessment tasks that call upon students to apply their knowledge and skills to create a product or solve a problem.
Authentic assessment in particular mirrors and measures students' performance on "real life," or authentic, tasks and situations. It presents students with realistic tasks that are directly, rather than indirectly, meaningful to their learning goals. Important considerations when designing authentic assessment tasks in learning experiences, however, are "realistic in what context?" and "meaningful to whom?" These considerations relate directly to the adult learner's context in the learning domain or job, as well as to his or her learning goals and needs. In a nutshell, an authentic assessment can assess the learner's performance on a task that is representative of tasks encountered on the job and meaningful to the learner's role in performing that job.
An important aspect of applying authentic assessment is that it is embedded within a learning experience rather than administered as a separate component. This allows a continuous cycle of learning, assessment, learning, and so on. Put in terms of learning experiences, authentic assessment could be a component of a pedagogical model or a stand-alone model that brings together the resources and communication structures needed to assess within the learning experience.
One example of a learning experience using a problem-based pedagogical model is a troubleshooting model instantiated in the domain of basic automobile troubleshooting. In this instance, the model could discover, establish communications with, and make available to the learner a few components: a troubleshooter engine, a case library of previously solved problems, and a conceptual model of an automobile. The troubleshooter engine is the traffic cop managing presentation, inputs, and outputs. It can present one or more simulated interfaces, access the other resources such as the case library and the automobile conceptual knowledge base, and generally manage the learner's experience. As the learner solves problems, such as diagnosing why the car won't start on a cold day, assessment is also taking place. The learner's choices are tracked and constantly compared to the "gold standard" of the knowledge base, that is, the conceptual model of the automobile. As problems are solved, competency levels can be assessed based on factors such as time to solve and alternative solutions used. This method also allows the learner's problem-solving ability to be tracked as it develops from that of a novice to that of an expert. Assessment placed in this cycle is a form of formative assessment, not unlike the formative evaluation process used in ISD but with different goals.
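The comparison-and-scoring step described above can be sketched in a few lines. This is a minimal illustration only: the component names, the gold-standard diagnostic path, and the scoring thresholds are all invented for the example and are not part of any ADL model or specification.

```python
# Hypothetical sketch of embedded (formative) assessment in a
# troubleshooting learning experience. All names, the gold-standard
# path, and the scoring thresholds are illustrative assumptions.

from dataclasses import dataclass

# "Gold standard": the conceptual model's expected diagnostic path
# for one case (the car won't start on a cold day).
GOLD_STANDARD = ["check_battery", "check_starter", "check_fuel_line"]

@dataclass
class LearnerAttempt:
    steps: list      # diagnostic actions the learner actually took
    seconds: float   # time taken to reach a solution

def assess(attempt: LearnerAttempt) -> dict:
    """Compare the learner's choices to the gold standard and
    return a simple formative-assessment record."""
    correct = [s for s in attempt.steps if s in GOLD_STANDARD]
    accuracy = len(correct) / len(GOLD_STANDARD)
    # Crude competency bands based on accuracy and speed; the
    # cutoffs are purely illustrative.
    if accuracy == 1.0 and attempt.seconds < 120:
        level = "expert"
    elif accuracy >= 0.5:
        level = "intermediate"
    else:
        level = "novice"
    return {"accuracy": accuracy, "level": level,
            "extra_steps": len(attempt.steps) - len(correct)}

# A learner who takes one detour but completes the full path quickly.
record = assess(LearnerAttempt(
    steps=["check_battery", "check_alternator",
           "check_starter", "check_fuel_line"],
    seconds=95.0))
print(record)
```

In a real troubleshooter engine the gold standard would come from the conceptual model rather than a hard-coded list, and each record would feed the learner's novice-to-expert trajectory over many cases.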
Another application of authentic assessment could occur in science or math. Borrowing a little from the situated learning model of the Adventures of Jasper Woodbury series from Vanderbilt, mathematical solutions can and should be placed in the field. Using actors and video production, the series presented 15- to 20-minute video stories about interesting situations the characters would get into but needed math to get out of. The actors were in their teens, and the story lines were exciting, interesting, and realistic.
For example, due to an emergency (someone getting hurt while hiking), and with limited transportation choices, limited gas, and so on, the characters would need to work out how to get from point A to point B quickly without running out of gas. As these problems arose, students watching the videos had an opportunity to solve realistic tasks in meaningful situations.
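The underlying math in such a task is simple rate-and-distance reasoning, which is exactly why it works embedded in a story. The sketch below shows the kind of calculation a student would perform; all of the numbers are made up for illustration and do not come from the Jasper series.

```python
# Illustrative sketch of the applied-math reasoning in the scenario
# above: can the group reach help before running out of gas, and how
# long will it take? All figures are invented for the example.

def can_make_trip(distance_miles: float,
                  gallons_in_tank: float,
                  miles_per_gallon: float) -> bool:
    """Return True if the fuel on hand covers the distance."""
    range_miles = gallons_in_tank * miles_per_gallon
    return range_miles >= distance_miles

def time_required(distance_miles: float, speed_mph: float) -> float:
    """Travel time in hours at a constant speed."""
    return distance_miles / speed_mph

# Say it is 18 miles to the ranger station, with 1.5 gallons left,
# a vehicle that gets 15 mpg, and a safe speed of 30 mph:
print(can_make_trip(18, 1.5, 15))       # fuel range is 22.5 miles
print(time_required(18, 30))            # hours of travel
```

Authentic assessment here would not grade the arithmetic in isolation; it would observe whether the student chooses the right quantities, checks feasibility before departing, and can defend the plan.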
It doesn’t take much imagination to see how technology now allows students to actually be in the field gathering data, solving problems, and defending their solutions, a natural extension of the Vanderbilt model. These models can be extended to most knowledge domains requiring actual data use and problem solving; they are by no means limited to K-12 education and should be part of the repeatable, reusable models comprising technology-based learning experiences. If we care about it, we should assess it, no matter what the outcome type.
As we move to applying learning experiences as models that allow higher-order learning outcomes, such as problem solving, to be reached, we need to employ the types of assessments that can actually assess them. As we move into the design, development, and implementation of technologies, specifications, and standards (ADL Learning Experiences) that will increase not only the efficiency but also the quality of learning, we can’t afford to leave out assessment.
We can’t afford to continue determining how much someone weighs by using a ruler.