The scenario is familiar. An important exam session has just ended. Results start coming in, and a question quickly follows:
Should passing really start at 60%? And if the failure rate is higher than expected, should the cutoff be lowered? On the other hand, if the stakes are high, should it be raised to be more cautious?
These reactions are understandable. But this is often the exact moment when mistakes begin.
Setting a passing score is not a simple administrative decision. It is a decision with direct consequences for academic paths, careers, admissions, or certifications. Above all, it is a decision the organization must be able to explain clearly and defend seriously.
Is 60% a good passing score?
In many settings, the passing score is still set at 60% or 70% out of habit. The problem is that a number like that, on its own, says nothing about the true difficulty of the exam.
An easy exam with a passing score of 60% may be too lenient. A much harder exam with the same passing score may instead screen out candidates who are genuinely competent. In both cases, the cutoff gives an impression of objectivity, but it does not necessarily rest on a solid rationale.
So the real question is not: what percentage should we choose? The real question is: at what point can we say that a person has demonstrated what needed to be demonstrated?
That is the difference between a passing score set by habit and a passing score designed to support a fair decision.
The real risk: penalizing strong candidates for the wrong reasons
When the passing score is poorly set, the risk is not just theoretical.
On one side, the organization may allow through candidates who are not ready. On the other, it may fail competent people because of a poorly chosen cutoff, an exam that is too difficult, or a few flawed questions.
It is often this second risk that creates the most problems. A strong candidate may fail not because they lack the required competence, but because the line between pass and fail was set without a clear method. Sometimes, just a few tenths of a point are enough to tip a decision with major consequences.
In a hiring, certification, or admissions context, this can lead to frustration, complaints, review requests, and above all, a loss of confidence in the assessment itself.
Can you rely on exam results alone to set a passing score?
When faced with an unusually high failure rate, it is tempting to act quickly. Some organizations lower the cutoff to avoid a crisis. Others raise it to protect themselves more. In both cases, the danger is the same: the final decision is changed before anyone has understood what the results actually mean.
Before touching the passing score, a few simple questions need to be asked.
- Did the exam really measure the right competence?
- Were some questions ambiguous or misleading?
- Was the level of difficulty consistent with expectations?
- Was the scoring stable and consistent?
In other words, an overall result is never enough on its own to justify a passing threshold. You first need to look at the quality of the exam and the quality of the questions that make it up.
How do you set a fair passing score for an exam?
A fair passing score starts with a clear definition of the expected level. You need to know, in concrete terms, what a person who just meets the minimum required level to pass looks like.
This reflection should not be done by one person alone. It requires structured judgment, supported by a group of experts who know the field, the real requirements, and the expected entry-level standard for the profession, program, or role.
The goal is not to decide how many people should pass. The goal is to determine the point at which the decision becomes reasonable and defensible.
That judgment must then be supported by a post-exam review. If some questions did not perform well, caused confusion, or failed to distinguish stronger candidates from weaker ones, they need to be reviewed seriously before final results are confirmed.
A fair passing score is therefore not an isolated number. It is the result of a method.
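The structured expert judgment described above is often formalized as a modified Angoff procedure: each expert estimates, for every question, the probability that a borderline (minimally competent) candidate would answer it correctly, and the cutoff is derived from the average of those estimates. The article does not name a specific method, so this is one common formalization, sketched with illustrative numbers:

```python
def angoff_cutoff(ratings):
    """Modified Angoff standard setting (a common formalization).

    ratings: one list per expert; each entry is that expert's estimated
             probability that a *borderline* candidate answers the
             corresponding question correctly.
    Returns the recommended cutoff as a percentage of the maximum score.
    """
    n_items = len(ratings[0])
    # Average the experts' estimates question by question,
    # then sum those averages to get the borderline candidate's
    # expected raw score.
    expected_score = sum(
        sum(expert[i] for expert in ratings) / len(ratings)
        for i in range(n_items)
    )
    return 100 * expected_score / n_items

# Illustrative panel: three experts rating a four-question exam.
ratings = [
    [0.80, 0.60, 0.50, 0.70],
    [0.70, 0.65, 0.55, 0.75],
    [0.75, 0.55, 0.45, 0.80],
]
# angoff_cutoff(ratings) → 65.0 (percent)
```

Note that the cutoff emerges from judgments about the questions themselves, not from a round number chosen in advance, which is what makes it defensible afterwards.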
How do you know if your passing score is unreliable?
In practice, some warning signs should raise concerns quickly.
- No one can clearly explain why the passing score is 60% rather than 58% or 65%.
- The cutoff has been kept simply because it has been used for a long time.
- The threshold is changed after the fact to compensate for a poorly calibrated exam.
These situations do not necessarily mean the whole process is flawed. But they do show that the decision rests on a weaker foundation than it may appear.
And the more important the consequences of the exam, the harder that fragility becomes to defend.
Why this step quickly becomes difficult to manage manually
On paper, the process seems simple: define the expected level, review the exam, analyze problematic questions, confirm the passing score. In reality, this step quickly becomes demanding.
You need to keep track of the right versions of the exam, follow committee decisions, document adjustments, keep a record of removed questions, recalculate results when needed, and then be able to explain each decision weeks or months later.
When all of this is handled through separate files, emails, and spreadsheets, the risk of error increases quickly. And even when the work is done seriously, it becomes harder to prove.
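Rescoring alone is easy to get wrong by hand: when a flawed question is removed, each candidate's raw score and the exam's maximum must change together, or the percentages drift. A minimal sketch of that recalculation, with hypothetical names and a 0/1 scoring assumption:

```python
def rescore(responses, flagged):
    """Recompute percentage scores after removing flagged questions.

    responses: dict mapping candidate id -> list of 0/1 item scores
    flagged:   set of question indices removed after the post-exam review
    """
    scores = {}
    for candidate, items in responses.items():
        # Drop the flagged questions from both the numerator (points
        # earned) and the denominator (points available).
        kept = [s for i, s in enumerate(items) if i not in flagged]
        scores[candidate] = 100 * sum(kept) / len(kept)
    return scores
```

Even a three-line calculation like this becomes hard to audit when it lives in scattered spreadsheets across several exam versions, which is precisely the documentation problem described above.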
Better documentation leads to better decisions
This is often where a good tool makes the difference. Not by replacing expert judgment, but by supporting it.
When exam statistics, decisions about questions, answer key versions, and the record of adjustments are all centralized in one place, it becomes much easier to support a fair and consistent decision.
It also helps turn a step that is often vague into a clear, repeatable, and defensible process.
Going further
A well-designed exam is not just clear or well structured. It must accurately measure the intended competence and support the right decision.
If you want to assess this in a practical way, this guide offers a clear starting point. It outlines five essential checks to help you design exams that are fairer, more consistent, and more reliable.
Download the guide to identify potential gaps in your assessments and strengthen their quality, step by step.
