Curiosity and Critical Thinking: Unveiling the Science of Measuring Learning Impact

Too often, as learning and development (L&D) professionals, we struggle to prove the efficacy of our learning programs. Sometimes we don’t know quite what to measure. Even more often, we don’t know how to measure our impact. Why is that? To show credible impact, we need to approach measurement like scientists do.

Let’s say your sales department wants to improve sales volume. After a thorough needs analysis, you develop and deliver a rigorous sales skills training program. Now you want to know if it’s working. You look at the sales of your trainees and are thrilled to report that the trained salespeople are selling, on average, ten more units the month after training. Are these results credible? (Spoiler alert: probably not.)

Embrace the Science of Measurement

What are we really trying to do when we measure the impact of our learning programs? In essence, we are conducting a science experiment. And like any good science experiment, we must start with a hypothesis.

In our sales training example, our hypothesis could be, “Employees attending the new sales skills program will sell more units.”

But we need to be careful. Simply showing an increase in unit sales might not be the most credible metric to share with your stakeholders. What about the employees who did not attend? Did they improve? Was there a new product launch or more enticing customer incentives happening at the same time?

Measure More Accurately with an Observational Design Approach

In a science experiment, we need to account for other plausible explanations. One of the best ways to do this is to use test and control groups: the test group gets the new training, and the control group does not.

This scientific approach is the same general method used in other disciplines. Researchers conducting clinical drug trials use it, and so do marketers evaluating an advertising campaign’s effectiveness. Does Drug A yield better effects than the placebo? Did Campaign A increase website traffic more than Campaign B? We compare the results between groups.

When comparing test and control groups, we should expand our hypothesis to reflect them, like so: “Employees attending the new sales skills program show greater sales improvement than those not attending.”

By framing the hypothesis this way, we set up our experiment. We identify how we will show success (or failure) by comparing the improvement in sales performance between our two groups.

In the L&D world, we rarely get the opportunity to run a true clinical trial, however tempting that may be. Instead, we follow an observational design approach. This means we study what’s already happening in our environment. We do not handpick the group that gets trained (the test group) versus the untrained group (the control group). We simply analyze the metrics from each group and compare them.

By truly testing our hypothesis and considering prior performance, we make our measurement more credible, as the results below (and the short sketch that follows them) show:

  • Original Results: On average, trained salespeople increased sales by ten units the month after training.
  • Revised Results: On average, trained salespeople increased sales by ten units the month after training. Untrained salespeople averaged a sales increase of six units during the same period.
  • Further Refined Results: On average, trained salespeople showed an incremental sales increase of four units over the untrained salespeople the month after training.
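To make that last comparison concrete, here is a minimal sketch of the calculation, assuming you have per-salesperson unit sales for the month before and the month after training. The DataFrame columns and the toy numbers are invented for illustration; they are simply chosen to reproduce the ten-, six-, and four-unit figures above.

```python
import pandas as pd

# Hypothetical data: one row per salesperson, with unit sales for the
# month before and the month after the training window.
sales = pd.DataFrame({
    "rep_id":       [1, 2, 3, 4, 5, 6],
    "trained":      [True, True, True, False, False, False],
    "units_before": [40, 35, 50, 42, 38, 45],
    "units_after":  [52, 44, 59, 48, 43, 52],
})

# Improvement per salesperson, then the average improvement per group.
sales["uplift"] = sales["units_after"] - sales["units_before"]
group_uplift = sales.groupby("trained")["uplift"].mean()

# Incremental impact: how much more the trained group improved than the
# untrained (control) group over the same period.
incremental = group_uplift.loc[True] - group_uplift.loc[False]
print(group_uplift)                                    # trained: 10.0, untrained: 6.0
print(f"Incremental uplift: {incremental:.1f} units")  # 4.0
```

The point is not the tooling; it is that the control group turns a raw ten-unit increase into a defensible four-unit incremental claim.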

Get Curious About What You Can Uncover

Taking prior performance into account is critical, but often it is not enough. The observational study approach presents challenges not found in a clinical trial, where participants are carefully selected for specific attributes (such as demographics or health history). With an observational study, we need to rule out plausible explanations, other than our training, for why someone’s performance may have improved. If we don’t address these other factors, a skeptical stakeholder will certainly challenge our findings!

Consider questions like these:

  • Did newer salespeople benefit more from training?
  • Did results vary by region?
  • Did results vary by type of customer (e.g., size or industry)?

Get curious and ask questions to uncover whether our new training program contributed to the increase in sales or whether something else did. Exercising curiosity and critical thinking goes a long way toward identifying and ruling out plausible alternatives. In fact, each of the above questions becomes another hypothesis to prove or disprove.

As we test these alternatives, we are likely to discover some very interesting things, and we will better understand what factors (in addition to training) are driving performance.

3 Steps to Improve Your Learning Measurement

1. Start with a Solid Hypothesis

Think about what you want to know and frame it as a hypothesis. Remember, a hypothesis is a specific, testable proposition. For instance, we don’t simply hypothesize that “Our new onboarding program will be useful.” We hypothesize that “Newly hired employees completing our new onboarding program will have a lower 90-day turnover rate than new hires completing our old program.” The latter is much easier to test.

2. Put Your Hypothesis to the Test

Now it’s time to put that hypothesis to the test by pulling and analyzing data. Let’s assume this onboarding program posted good results: it showed a reduction in 90-day new-hire turnover from 21% down to 12%. Can the onboarding program take all the credit? Now is the time to account for those other plausible explanations in your analysis. Get curious, exercise critical thinking, and dive in. Ask yourself: what else is going on that could affect my outcomes?
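As one hedged illustration of what putting the hypothesis to the test might look like, the sketch below compares the 90-day turnover rates of the two cohorts and asks whether the difference is larger than chance alone would explain. Only the 21% and 12% rates come from the example; the cohort sizes are invented, and a chi-square test is just one of several reasonable choices.

```python
from scipy.stats import chi2_contingency

# Hypothetical cohort sizes; the counts are chosen so the turnover rates
# match the example: 21% under the old program, 12% under the new one.
old_hires, old_left = 200, 42   # 42 / 200 = 21% 90-day turnover
new_hires, new_left = 200, 24   # 24 / 200 = 12% 90-day turnover

# 2x2 contingency table: rows = program (old, new), columns = (left, stayed)
table = [
    [old_left, old_hires - old_left],
    [new_left, new_hires - new_left],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Old-program turnover: {old_left / old_hires:.0%}")
print(f"New-program turnover: {new_left / new_hires:.0%}")
print(f"p-value for the difference: {p_value:.3f}")
# A small p-value says the drop is unlikely to be chance alone; it does not
# say the onboarding program caused it, which is why the alternative
# explanations discussed next still matter.
```

Whether you use a formal test or a simple side-by-side comparison, the key is comparing cohorts rather than reporting the new rate in isolation.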

Let’s say that your organization also launched a mentorship program shortly after starting the new onboarding program. You will want to explore whether the onboarding or the mentoring (or a combination) led to the reduction. This requires adding another hypothesis or two and digging deeper.

Might some demographic variable also influence the reduction in turnover? Take your analysis a level deeper by segmenting the data by variables such as region, age of new hire, and level of education. Expanding your hypotheses to include factors like these will enrich both the insights you gain and the credibility of your results.
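Here is a minimal sketch of what that segmentation might look like, assuming a hypothetical new_hires.csv file with a flag for who completed the new onboarding program, a 0/1 flag for leaving within 90 days, and the segmenting variables. Every column name here is invented for illustration.

```python
import pandas as pd

# Hypothetical file with one row per new hire. Assumed columns:
#   new_onboarding (bool), left_90d (0/1), region, age_band, education
hires = pd.read_csv("new_hires.csv")

# 90-day turnover rate by onboarding cohort, split by each segmenting variable.
for segment in ["region", "age_band", "education"]:
    rates = (
        hires.groupby(["new_onboarding", segment])["left_90d"]
             .mean()              # mean of a 0/1 flag = turnover rate
             .unstack(segment)    # cohorts as rows, segments as columns
    )
    print(f"\n90-day turnover rate by {segment}:")
    print(rates.round(2))
```

Each slice where the gap between cohorts shrinks or disappears is effectively a new hypothesis to investigate.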

3. Plan to Take Action

At the outset of a measurement project, clarify your intent. Why are you measuring? How will you use your findings? Ask yourself:

  • Am I measuring to prove it worked and to show we experienced a return on investment?
  • Am I measuring because I genuinely want to know how to improve the program?
  • What will I do if the results are unfavorable?
  • What actions am I prepared to take based on the results?

Imagine discovering that new-hire turnover was influenced by both the onboarding and mentoring programs. What would you do? What if your sales training program only worked for salespeople selling into Customer Group B? How would you respond?

Asking these questions provides you with insights that are catalysts for action. Approaching measurement like a science experiment encourages deeper thinking about the cause and effect of workplace performance and prompts candid discussions about how to improve your offerings.

The Journey Toward Continuous Improvement

Measuring the impact of learning is a journey. Embrace it with a scientific mindset. Your measurement doesn’t have to be perfect—even modest measurement efforts will provide more information and insights than you had before you started! Sometimes, a simple and directionally positive shift is more than enough to initiate change.

Get started! We know you’ll be hooked.

Check out our Measurement Academy and learn how we help organizations uncover valuable L&D insights every day.

About the Author

Bonnie Beresford
Bonnie Beresford, PhD, is a Senior Director of Performance and Learning Analytics at GP Strategies. Bonnie is widely known and respected as a leading authority on measuring the business impact of human capital programs. She is a popular conference speaker and co-author of the book Developing Human Capital: Using Analytics to Plan and Optimize Your Learning and Development Investments. With over 20 years of experience in the field, her hallmark is linking investments in people to measurable business outcomes.
