
Why learning and measurement must go hand in hand

The number of learning products on the market seems to be increasing daily. My sons, Josh and Andrew, who will be entering 4th and 2nd grade in the fall, respectively, can attest to that fact.

Each new school year brings on a new wave of educational apps that supplement their classroom instruction. There’s one for math. One for reading. Some they like. And some they find frustrating. Some seem to adapt based on their performance. Some use gamification.

Josh and Andrew clearly have their own opinions about which apps they like and which apps they don’t. But what about what works?

As a mother (and a researcher), I want to know whether these apps actually help students improve their knowledge and skills. I also care about whether students enjoy using these apps; more on that point below.

Intersection of Learning and Measurement


Every learning product claims, either explicitly or implicitly, that using it will improve one or more intended learner outcomes. Yet evidence supporting these claims is often lacking.

One reason is that efficacy research is extremely hard to do, or rather, to do well. To evaluate whether a learning tool is efficacious, its impact on the intended learner outcome must be measured. The ability to collect efficacy evidence for a learning tool is therefore constrained by the availability and appropriateness of a measurement tool (e.g., a test) to estimate the amount of learning that has occurred. This is precisely why learning and measurement must go hand in hand.

But let’s step back. What do I mean by the “appropriateness” of a test to estimate learning? For one, a test that is used to measure the degree of learning should assess the same knowledge and skills that the learning product aims to improve—that is, there should be content alignment.

For example, if a learning tool were designed to improve students’ mastery of algebra, their performance on a geometry test is probably not the best indicator of that product’s effectiveness. This example oversimplifies the issue, but it underscores the need for an efficacy framework that integrates both a validity argument for the measurement of learning and an efficacy argument for the impact on learning.

These ideas, along with other topics, are explored in a new ACT report, ACT’s Efficacy Framework: Combining Learning, Measurement, and Navigation to Improve Learner Outcomes. The Framework was developed as a blueprint for researchers evaluating learning products, to ensure that efficacy claims are supported by evidence.


Users’ Reactions: A necessary but insufficient criterion of efficacy?


Now let’s return to Josh’s and Andrew’s preferences for certain apps and explore the notion of users’ reactions as a necessary but insufficient criterion of efficacy.

Most learning products ultimately aim to improve specific knowledge, skills, abilities, and/or other characteristics (KSAOs), and that aim is grounded in research showing that those KSAOs are needed for future educational and workplace success. Even so, it is also extremely important to consider more immediate learner outcomes, such as users’ reactions (Kirkpatrick, 1959 [1], 1976 [2]).

In particular, if students have a negative reaction to a learning tool, they are less likely to use it. If they don’t use it, they are unlikely to learn the content being taught. And if they haven’t learned the new content, it is unlikely that they can apply those concepts to new situations. This chain of inaction is not likely to yield supporting evidence of the learning tool’s effectiveness at achieving the intended learner outcome (i.e., some operationalization of educational and workplace success).

I witnessed this firsthand with both of my boys, who got frustrated with a particular math app being used in their classroom. The frustration led to disengagement. One son stopped using the app, while the other employed a random responding technique to finish the exercises as quickly as possible. Clearly, neither strategy was useful for improving math knowledge, nor for accurately measuring learning.

A lack of efficacy evidence may indicate that the product has poor learning content, lacks high-quality instructional design, or has a user interface that is difficult to navigate. Knowing not only if but why the product is not achieving the desired outcome is necessary to drive product improvements and enhancements, as well as to inform best practices around product usage.

ACT’s Efficacy Framework outlines seven sources of efficacy evidence—including evidence based on user experience and evidence based on use and implementation fidelity—as well as related research activities to evaluate not only if but why a learning product is working. Read the full report to learn more, and share it with fellow parents, educators and researchers in your networks to ensure that measurement and learning go hand in hand.

[1] Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13, 3–9.

[2] Kirkpatrick, D. L. (1976). Evaluation of training. In R. L. Craig (Ed.), Training and development handbook: A guide to human resource development (2nd ed., pp. 301–319). New York, NY: McGraw-Hill.

