Empowering the Weak Knowledge Check with Learner Confidence

What’s wrong with a knowledge check?

As an e-learning developer, I’ll admit it: creating multiple-choice question (MCQ) knowledge checks isn’t my favourite task. Often, they miss the mark on how people truly learn. Yet, because they’re quick and easy to develop, they’re seen as cost-effective, and they yield metrics that are easy to measure. But here’s the catch—knowledge checks can create the illusion of learning.

The biggest misconception? That simply viewing content once means the learner has absorbed it all.

That said, knowledge checks can be powerful learning tools—if they’re planned well. A key element is feedback. Sadly, feedback is often an afterthought, if it’s included at all. Sometimes, it’s nothing more than a checkmark or a cross. While this shows the learner if they’re right or wrong, it doesn’t help them understand why.
Good feedback goes beyond this. It should guide the learner, helping them not only recognize mistakes but also prepare to apply the knowledge in real-life situations.

How can we make knowledge checks meaningful assessment tools?

The most important thing is relevance. Knowledge checks should connect directly to the learner’s role and daily tasks.
An e-learning developer is unlikely to have deep expertise in every subject area, so it’s crucial to collaborate with a Subject Matter Expert (SME). And while AI is becoming more sophisticated, it’s not quite ready to take the reins here.
Here’s my six-step approach to improving knowledge checks:

1. Situation

Work with your SME to ensure questions are grounded in real-world contexts. Training often aims to bring about a behaviour change, so the questions should reflect situations where the learner would apply their knowledge.

Think of scenarios the learner might encounter:

  • “This happened—how would you react?”

Situational questions make the knowledge more tangible and meaningful.

2. Avoid Obviously Wrong Options

Make learners think. Avoid answers that are too easy to dismiss. Ideally, include three options: one correct answer and two plausible distractors that reflect common misunderstandings.

Crafting these takes time—it’s an art! In a perfect setup, all three answers could seem valid at first glance, but only one aligns fully with the concept.

3. Choose a Reason

This is where the magic happens. After selecting their answer, the learner is presented with two reasons to explain their choice. These reasons should:

  • Highlight why the answer is correct.
  • Clarify misunderstandings if the answer is wrong.

This approach creates a moment of reflection, prompting the learner to think critically about their choice. It’s also a chance to make the process engaging—feel free to add a touch of creativity or humour, as long as it aligns with the content.

The goal is to create a slight ‘conflict’ for the learner. If they’re thinking, “Both reasons seem right—how do I decide?”, you’re on the right track.

4. Confidence Ratings

This step adds depth to the process. After choosing an answer, ask learners how confident they feel about their choice. Confidence ratings (an idea long established as confidence-based marking) are not new, but they’re a powerful tool to encourage serious engagement.
I recommend a four-level scale:

  • Very Confident
  • Confident
  • Unconfident
  • Very Unconfident

Why four levels? It forces learners to take a stand, avoiding the middle ground of a five-point scale.
Confidence ratings also feed into scoring, rewarding learners for being both correct and confident.

5. Generating a score

Scoring should reflect not just the correctness of the answer but also the learner’s confidence. For example:

  • Correct and Very Confident: 10 points
  • Correct but Very Unconfident: 2 points
  • Incorrect and Very Confident: -5 points

This system encourages learners to be thoughtful. Confidence is rewarded, but overconfidence without understanding is penalized. Here’s an example scale I’ve used: [10, 7, 5, 2, 1, -1, -2, -5].
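As a minimal sketch of how this scoring scheme could be implemented (the function name and confidence labels are my own; the point values come from the example scale above, read as correct answers from most to least confident, then incorrect answers from least to most confident):

```python
# Confidence-weighted scoring: one point value per (correctness, confidence) pair.
# Correct answers reward confidence; incorrect answers penalize it.
SCORES = {
    True:  {"very_confident": 10, "confident": 7, "unconfident": 5, "very_unconfident": 2},
    False: {"very_unconfident": 1, "unconfident": -1, "confident": -2, "very_confident": -5},
}

def score(correct: bool, confidence: str) -> int:
    """Return the points for an answer, given its correctness and the stated confidence."""
    return SCORES[correct][confidence]

print(score(True, "very_confident"))   # 10
print(score(False, "very_confident"))  # -5
```

Note how the penalties mirror the rewards: being wrong while very confident costs the most, which is exactly the behaviour the scheme is meant to discourage.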

6. Feedback

Take the time to explain each option — why it’s correct or incorrect — and do this for all six combinations of answer and reason (three options, each paired with two reasons). Even the correct answer deserves an explanation of why it’s the best choice.

While this can be time-intensive, it’s essential for helping learners truly grasp the material. Keep explanations concise but thorough enough to add value.
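One simple way to keep those six explanations organized is a lookup table keyed by the (answer, reason) pair. This is a hypothetical structure of my own, not part of the article’s method — the placeholder texts would be replaced with real feedback written with the SME:

```python
# Hypothetical feedback table: one entry per (answer option, reason) combination.
# Three options x two reasons each = six explanations to author.
FEEDBACK = {
    ("A", 1): "Correct, and this reason captures exactly why: ...",
    ("A", 2): "Right answer, but this reasoning reflects a common mix-up: ...",
    ("B", 1): "Not quite: this option overlooks ...",
    ("B", 2): "Not quite, and this reasoning is the usual trap: ...",
    ("C", 1): "This option seems plausible, but ...",
    ("C", 2): "This option and reasoning together suggest ...",
}

def get_feedback(option: str, reason: int) -> str:
    """Return the authored feedback for a chosen option and reason."""
    return FEEDBACK[(option, reason)]
```

Writing the table out like this also doubles as a completeness check: if an entry is missing, you know a combination still needs feedback.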

Additional thoughts and variations

Flexibility in early stages: At the start of a course, allow learners to go back and forth between answers and reasons. This helps them familiarize themselves with the process. Later in the course, you can lock in their choices to emphasize decision-making consequences.

Encourage improvement: Let learners replay activities to improve their score. This adds an element of gamification without making it competitive. Avoid leaderboards; instead, focus on personal bests or comparisons to previous attempts.

Data tracking: If you’re collecting data, there’s a wealth of insight to be gained. You can track:

  • First-choice answers and reasons.
  • Confidence levels.
  • Final scores after retries.
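The tracked fields above could be gathered into a small per-question record like the following sketch (field and class names are my own assumptions, not a prescribed data model):

```python
from dataclasses import dataclass, field

@dataclass
class AttemptRecord:
    """Hypothetical per-question tracking record for one learner."""
    question_id: str
    first_answer: str   # option chosen on the first attempt
    first_reason: int   # reason chosen on the first attempt
    confidence: str     # stated confidence level on the first attempt
    scores: list = field(default_factory=list)  # one score per attempt; retries append

    @property
    def best_score(self) -> int:
        """The personal best across all attempts."""
        return max(self.scores)

record = AttemptRecord(question_id="q1", first_answer="B", first_reason=2,
                       confidence="confident")
record.scores.append(-2)  # first attempt: incorrect and confident
record.scores.append(7)   # retry: correct and confident
print(record.best_score)  # 7
```

Keeping the first-attempt fields separate from the retry scores preserves the most diagnostic signal (what learners believed before any feedback) while still supporting the personal-best comparison.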

Make it clear to learners that their data will be recorded — it can boost motivation. Metrics can also inform future workshops, highlighting areas where learners struggled: “The data shows many people had trouble with this question. Let’s explore it further.”

Final tip: Keep it simple

Don’t overcomplicate the scenarios or wording. Refine everything—descriptions, questions, options, reasons, and feedback—to a minimum viable product. Clear and concise content is always more effective.