Centering the Learner in Assessment

By Zoe Goodman

If you’re in a people operations role, chances are you’re constantly encountering opportunities for assessment. Engagement surveys, performance reviews, and evaluations of which feedback platform might fit your company are just a few that come to mind.

But why evaluate in the first place? For most people, it’s to gather data on how something is working (or isn’t), but assessments in learning and development can actually serve a totally different, and underutilized, purpose.

According to Dr. Adam Goodman, Director of Northwestern University’s Center for Leadership, a critical (and often overlooked) goal of assessment in learning and development programs is to engage the learner. Ideally, data for program organizers comes as a secondary benefit. What does that mean in practice? It means you’re asking people to continue learning even during the assessment phase of a program. Assessment should focus primarily on reflection and next steps that serve the learner, not the program organizer.

Take LifeLabs Learning’s level 1-2 assessment, for example. After each workshop, we ask:

  • From 1-5, how useful was this workshop?
  • From 1-5, how knowledgeable was the facilitator?
  • From 1-5, how engaging was the content?
  • What did you learn today that you will apply?
  • When will you apply it?
  • What improvements can we make for future sessions?

You’ll notice a few key features of this assessment. It’s short, taking about two minutes to complete. It also incorporates a mix of qualitative and quantitative data. With a large enough sample size, the quantitative data can illuminate important patterns. The quick scaled questions up front also prime a participant’s brain to start thinking about what they learned.

However, the qualitative questions are where the real storytelling and “aha” moments happen. Participants are asked to name their next steps and put a timeline on them. They’re sharing what was useful (which is helpful for you as the program evaluator), but in a way that helps them retain the learning and serves as a call to action. These questions are also open-ended, allowing participants to lead the conversation. Finally, the last question is the one that’s useful for LifeLabs but not so much for the learner. People with a strong opinion will answer it, and if participants skip a question, it’s not one that benefits them.

LifeLabs worked with data scientists at Stanford to arrive at these questions (among many others the team considered). One important takeaway: survey fatigue is real. Research shows, for example, that for each additional question you add (name, contact info, team, etc.), the response rate drops by about 2%.

Call to action:

2020 offers a fresh start for thinking about how you’re evaluating your learning and development programs. Below are some sample questions to ponder as you evaluate your past programming and look ahead to the rest of 2020:

  • Does the evaluation of your learning and development programming center on reflection and skill application or does it center on logistics and information that benefits the organizer?
  • Is your assessment concise, asking only what’s absolutely necessary for you, and focused primarily on what’s helpful for the learner?
  • Did you create your assessment before the program started (so that if you’d like to baseline, you can do so)?
  • Are there a mix of qualitative and quantitative questions?

By aligning your assessment with the learner, you get the data you need to decide where you might need to pivot, while giving employees another shot at learning stickiness: a win for both you and the learner.