Model Calibration | Towards Data Science

How to Measure Real Model Accuracy When Labels Are Noisy (Thu, 10 Apr 2025)
The math behind “true” accuracy and error correlation

The post How to Measure Real Model Accuracy When Labels Are Noisy appeared first on Towards Data Science.

Ground truth is never perfect. From scientific measurements to human annotations used to train deep learning models, ground truth always contains some errors. ImageNet, arguably the most well-curated image dataset, has 0.3% errors in its human annotations. So how can we evaluate predictive models using such imperfect labels?

In this article, we explore how to account for errors in test data labels and estimate a model’s “true” accuracy.

Example: image classification

Let’s say there are 100 images, each containing either a cat or a dog. The images are labeled by human annotators who are known to have 96% accuracy (Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ). If we train an image classifier on some of this data and find that it has 90% accuracy on a hold-out set (Aᵐᵒᵈᵉˡ), what is the “true” accuracy of the model (Aᵗʳᵘᵉ)? A couple of observations first:

  1. Within the 90% of predictions that the model got “right,” some examples may have been incorrectly labeled, meaning both the model and the ground truth are wrong. This artificially inflates the measured accuracy.
  2. Conversely, within the 10% of “incorrect” predictions, some may actually be cases where the model is right and the ground truth label is wrong. This artificially deflates the measured accuracy.

Given these complications, how much can the true accuracy vary?

Range of true accuracy

True accuracy of model for perfectly correlated and perfectly uncorrelated errors of model and label. Figure by author.

The true accuracy of our model depends on how its errors correlate with the errors in the ground truth labels. If our model’s errors perfectly overlap with the ground truth errors (i.e., the model is wrong in exactly the same way as human labelers), its true accuracy is:

Aᵗʳᵘᵉ = 0.90 − (1 − 0.96) = 86%

Alternatively, if our model is wrong in exactly the opposite way as human labelers (perfect negative correlation), its true accuracy is:

Aᵗʳᵘᵉ = 0.90 + (1 − 0.96) = 94%

Or more generally:

Aᵗʳᵘᵉ = Aᵐᵒᵈᵉˡ ± (1 − Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ)

It’s important to note that the model’s true accuracy can be both lower and higher than its reported accuracy, depending on the correlation between model errors and ground truth errors.
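These bounds are easy to compute. Below is a minimal sketch (the function name is my own, not from the article) that returns the worst- and best-case true accuracy for a given measured accuracy and ground-truth label accuracy:

```python
def true_accuracy_bounds(a_model, a_groundtruth):
    """Worst- and best-case true accuracy, given the measured accuracy
    and the accuracy of the ground-truth labels."""
    label_error = 1 - a_groundtruth
    lower = a_model - label_error  # model errors perfectly overlap label errors
    upper = a_model + label_error  # model errors perfectly anti-correlated
    # True accuracy is also constrained to [0, 1]
    return max(0.0, lower), min(1.0, upper)

low, high = true_accuracy_bounds(0.90, 0.96)
print(round(low, 2), round(high, 2))  # 0.86 0.94
```

Note the clipping to [0, 1]: a model measured at 99% accuracy against 96%-accurate labels cannot have a true accuracy above 100%.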

Probabilistic estimate of true accuracy

In some cases, inaccuracies among labels are randomly spread among the examples and not systematically biased toward certain labels or regions of the feature space. If the model’s inaccuracies are independent of the inaccuracies in the labels, we can derive a more precise estimate of its true accuracy.

When we measure Aᵐᵒᵈᵉˡ (90%), we’re counting cases where the model’s prediction matches the ground truth label. This can happen in two scenarios:

  1. Both model and ground truth are correct. This happens with probability Aᵗʳᵘᵉ × Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ.
  2. Both model and ground truth are wrong (in the same way). This happens with probability (1 − Aᵗʳᵘᵉ) × (1 − Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ).

Under independence, we can express this as:

Aᵐᵒᵈᵉˡ = Aᵗʳᵘᵉ × Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ + (1 − Aᵗʳᵘᵉ) × (1 − Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ)

Rearranging the terms, we get:

Aᵗʳᵘᵉ = (Aᵐᵒᵈᵉˡ + Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ − 1) / (2 × Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ − 1)

In our example, that equals (0.90 + 0.96 − 1) / (2 × 0.96 − 1) ≈ 93.5%, which is within the range of 86% to 94% that we derived above.
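The rearranged formula translates directly into code. Here is a small sketch (function name mine) of the independence-based point estimate:

```python
def true_accuracy_independent(a_model, a_groundtruth):
    """Point estimate of true accuracy, assuming model errors and
    ground-truth label errors are statistically independent."""
    return (a_model + a_groundtruth - 1) / (2 * a_groundtruth - 1)

print(round(true_accuracy_independent(0.90, 0.96), 4))  # 0.9348
```

Note the estimate is only meaningful when the labels are better than random (Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ > 0.5), otherwise the denominator vanishes or flips sign.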

The independence paradox

Plugging in Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ as 0.96 from our example, we get

Aᵗʳᵘᵉ = (Aᵐᵒᵈᵉˡ − 0.04) / 0.92. Let’s plot this below.

True accuracy as a function of model’s reported accuracy when ground truth accuracy = 96%. Figure by author.

Strange, isn’t it? If we assume that the model’s errors are uncorrelated with the ground truth errors, then its true accuracy Aᵗʳᵘᵉ is always above the 1:1 line whenever the reported accuracy is > 0.5. This holds true even if we vary Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ:

Model’s “true” accuracy as a function of its reported accuracy and ground truth accuracy. Figure by author.

Error correlation: why models often struggle where humans do

The independence assumption is crucial but often doesn’t hold in practice. If some images of cats are very blurry, or some small dogs look like cats, then both the ground truth and model errors are likely to be correlated. This causes Aᵗʳᵘᵉ to be closer to the lower bound (Aᵐᵒᵈᵉˡ − (1 − Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ)) than the upper bound.

More generally, model errors tend to be correlated with ground truth errors when:

  1. Both humans and models struggle with the same “difficult” examples (e.g., ambiguous images, edge cases)
  2. The model has learned the same biases present in the human labeling process
  3. Certain classes or examples are inherently ambiguous or challenging for any classifier, human or machine
  4. The labels themselves are generated from another model
  5. There are too many classes (and thus too many different ways of being wrong)

Best practices

The true accuracy of a model can differ significantly from its measured accuracy. Understanding this difference is crucial for proper model evaluation, especially in domains where obtaining perfect ground truth is impossible or prohibitively expensive.

When evaluating model performance with imperfect ground truth:

  1. Conduct targeted error analysis: Examine examples where the model disagrees with ground truth to identify potential ground truth errors.
  2. Consider the correlation between errors: If you suspect correlation between model and ground truth errors, the true accuracy is likely closer to the lower bound (Aᵐᵒᵈᵉˡ − (1 − Aᵍʳᵒᵘⁿᵈᵗʳᵘᵗʰ)).
  3. Obtain multiple independent annotations: Having multiple annotators can help estimate ground truth accuracy more reliably.

Conclusion

In summary, we learned that:

  1. The range of possible true accuracy depends on the error rate in the ground truth
  2. When errors are independent, the true accuracy is often higher than the measured accuracy for models that perform better than random chance
  3. In real-world scenarios, errors are rarely independent, and the true accuracy is likely closer to the lower bound

Understanding Model Calibration: A Gentle Introduction & Visual Exploration (Tue, 11 Feb 2025)

The post Understanding Model Calibration: A Gentle Introduction & Visual Exploration appeared first on Towards Data Science.

How Reliable Are Your Predictions?

About

To be considered reliable, a model must be calibrated so that its confidence in each decision closely reflects its true outcome. In this blog post we’ll take a look at the most commonly used definition for calibration and then dive into a frequently used evaluation measure for model calibration. We’ll then cover some of the drawbacks of this measure and how these surfaced the need for additional notions of calibration, which require their own new evaluation measures. This post is not intended to be an in-depth dissection of all works on calibration, nor does it focus on how to calibrate models. Instead, it is meant to provide a gentle introduction to the different notions and their evaluation measures as well as to re-highlight some issues with a measure that is still widely used to evaluate calibration.


What is Calibration?

Calibration makes sure that a model’s estimated probabilities match real-world outcomes. For example, if a weather forecasting model predicts a 70% chance of rain on several days, then roughly 70% of those days should actually be rainy for the model to be considered well calibrated. This makes model predictions more reliable and trustworthy, which makes calibration relevant for many applications across various domains.

Reliability Diagram —  image by author

Now, what calibration means more precisely depends on the specific definition being considered. We will have a look at the most common notion in machine learning (ML) formalised by Guo and termed confidence calibration by Kull. But first, let’s define a bit of formal notation for this blog. 

In this blog post we consider a classification task with K possible classes, with labels Y ∈ {1, …, K} and a classification model p̂ : 𝕏 → Δᴷ, that takes inputs in 𝕏 (e.g. an image or text) and returns a probability vector as its output. Δᴷ refers to the K-simplex, which just means that the output vector must sum to 1 and that each estimated probability in the vector is between 0 & 1. These individual probabilities (or confidences) indicate how likely an input belongs to each of the K classes.

Notation — image by author — input example sourced from Uma

1.1 (Confidence) Calibration

A model is considered confidence-calibrated if, for all confidences c, the model is correct c proportion of the time:

ℙ(Y = ŷ | max p̂(X) = c) = c,   for all c ∈ [0, 1]

where (X,Y) is a datapoint, ŷ = arg max p̂(X) is the predicted class, and p̂ : 𝕏 → Δᴷ returns a probability vector as its output

This definition of calibration ensures that the model’s final predictions align with their observed accuracy at that confidence level. The left chart below visualises the perfectly calibrated outcome (green diagonal line) for all confidences using a binned reliability diagram. On the right-hand side it shows two examples for a specific confidence level across 10 samples.

Confidence Calibration  —  image by author

For simplification, we assume that we only have 3 classes as in image 2 (Notation), and we zoom into confidence c=0.7, see image above. Let’s assume we have 10 inputs here whose most confident prediction equals 0.7. If the model correctly classifies 7 out of 10 of these predictions, it is considered calibrated at confidence level 0.7. For the model to be fully calibrated, this has to hold across all confidence levels from 0 to 1. At the same level c=0.7, a model would be considered miscalibrated if it makes only 4 correct predictions.


2 Evaluating Calibration — Expected Calibration Error (ECE)

One widely used evaluation measure for confidence calibration is the Expected Calibration Error (ECE). ECE measures how well a model’s estimated probabilities match the observed probabilities by taking a weighted average over the absolute difference between average accuracy (acc) and average confidence (conf). The measure involves splitting all n datapoints into M equally spaced bins:

ECE = Σₘ₌₁ᴹ (|Bₘ| / n) · |acc(Bₘ) − conf(Bₘ)|

where Bₘ represents the bin with number m, while acc and conf are:

acc(Bₘ) = (1 / |Bₘ|) Σᵢ∈Bₘ 1(ŷᵢ = yᵢ)   and   conf(Bₘ) = (1 / |Bₘ|) Σᵢ∈Bₘ p̂ᵢ

ŷᵢ is the model’s predicted class (arg max) for sample i and yᵢ is the true label for sample i. 1 is an indicator function, meaning when the predicted label ŷᵢ equals the true label yᵢ it evaluates to 1, otherwise 0. Let’s look at an example, which will clarify acc, conf and the whole binning approach in a visual step-by-step manner.

2.1 ECE — Visual Step by Step Example

In the image below, we can see that we have 9 samples indexed by i with estimated probabilities p̂(xᵢ) (simplified as p̂ᵢ) for class cat (C), dog (D) or toad (T). The final column shows the true class yᵢ and the penultimate column contains the predicted class ŷᵢ.

Table 1 — ECE toy example — image by author

Only the maximum probabilities, which determine the predicted label are used in ECE. Therefore, we will only bin samples based on the maximum probability across classes (see left table in below image). To keep the example simple we split the data into 5 equally spaced bins M=5. If we now look at each sample’s maximum estimated probability, we can group it into one of the 5 bins (see right side of image below).

Table 2 & Binning Diagram — image by author

We still need to determine if the predicted class is correct or not to be able to determine the average accuracy per bin. If the model predicts the class correctly (i.e.  yᵢ = ŷᵢ), the prediction is highlighted in green; incorrect predictions are marked in red:

Table 3 & Binning Diagram — image by author

We now have visualised all the information needed for ECE and will briefly run through how to calculate the values for bin 5 (B₅). The other bins then simply follow the same process, see below.

Table 4 & Example for bin 5  — image by author

We can get the empirical probability of a sample falling into B₅ by assessing how many out of all 9 samples fall into B₅, see (1). We then get the average accuracy for B₅, see (2), and lastly the average estimated probability for B₅, see (3). Repeat this for all bins, and in our small example of 9 samples we end up with an ECE of 0.10445. A perfectly calibrated model would have an ECE of 0.
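The binning procedure above translates directly into code. Here is a small sketch of ECE with equal-width bins (names are mine; the exact bin-edge convention, i.e. which bin the boundary values fall into, varies between implementations):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=5):
    """ECE over equal-width bins of the maximum predicted probability.

    probs:  (n, K) array of predicted probability vectors
    labels: (n,) array of true class indices
    """
    probs, labels = np.asarray(probs), np.asarray(labels)
    confidences = probs.max(axis=1)        # max probability per sample
    predictions = probs.argmax(axis=1)     # predicted class per sample
    correct = (predictions == labels).astype(float)

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(labels), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # bins are (lo, hi]; the first bin also includes confidence 0
        in_bin = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:
            in_bin |= confidences == 0.0
        if in_bin.any():
            acc = correct[in_bin].mean()       # average accuracy in bin
            conf = confidences[in_bin].mean()  # average confidence in bin
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return float(ece)
```

For example, with probs = [[0.9, 0.1], [0.6, 0.4], [0.3, 0.7]] and labels = [0, 1, 1], each sample lands in its own bin and the result is (0.1 + 0.6 + 0.3) / 3 ≈ 0.333.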

For a more detailed, step-by-step explanation of the ECE, have a look at this blog post.

2.1.1  EXPECTED CALIBRATION ERROR DRAWBACKS

The images of binning above provide a visual guide of how ECE could result in very different values if we used more bins or perhaps binned the same number of items instead of using equal bin widths. These and further drawbacks of ECE were highlighted by several works early on. However, despite its known weaknesses, ECE is still widely used to evaluate confidence calibration in ML.

3 Most frequently mentioned Drawbacks of ECE

3.1 Pathologies — Low ECE ≠ high accuracy

A model which minimises ECE does not necessarily have a high accuracy. For instance, if a model always predicts the majority class with that class’s average prevalence as the probability, it will have an ECE of 0. This is visualised in the image below, where we have a dataset with 10 samples, 7 of which are cat, 2 dog and only one toad. Now if the model always predicts cat with on average 0.7 confidence, it would have an ECE of 0. There are more such pathologies. To not rely only on ECE, some researchers use additional measures such as the Brier score or LogLoss alongside ECE.

Sample Pathology —  image by author
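This pathology is easy to reproduce numerically. Below is a small sketch of the 10-sample example (the class indices and the 0.7/0.2/0.1 probability split are my own illustrative choices):

```python
import numpy as np

# Hypothetical dataset: 7 cats (class 0), 2 dogs (class 1), 1 toad (class 2)
labels = np.array([0] * 7 + [1] * 2 + [2])

# Degenerate model: always predicts "cat" with confidence 0.7
probs = np.tile([0.7, 0.2, 0.1], (10, 1))

confidences = probs.max(axis=1)               # all 0.7
correct = (probs.argmax(axis=1) == labels)    # True only for the 7 cats

# All samples fall into a single bin: accuracy 0.7, confidence 0.7
gap = abs(correct.mean() - confidences.mean())
print(gap < 1e-9)  # True -> ECE = 0 despite the model never predicting dog or toad
```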

3.2 Binning Approach

One of the most frequently mentioned issues with ECE is its sensitivity to the change in binning. This is sometimes referred to as the Bias-Variance trade-off: Fewer bins reduce variance but increase bias, while more bins lead to sparsely populated bins increasing variance. If we look back to our ECE example with 9 samples and change the bins from 5 to 10 here too, we end up with the following:

More Bins Example — image by author

We can see that bins 8 and 9 each contain only a single sample, and that half the bins now contain no samples at all. The above is only a toy example; however, since modern models tend to have higher confidence values, samples often end up in the last few bins, which means they get all the weight in ECE, while the empty bins contribute 0 to ECE.

To mitigate these issues of fixed bin widths some authors have proposed a more adaptive binning approach:

Adaptive Bins Example — image by author

Binning-based evaluation with bins containing an equal number of samples is shown to have lower bias than a fixed binning approach such as ECE. This leads Roelofs to urge against using equal-width binning; they suggest an alternative, ECEsweep, which maximizes the number of equal-mass bins while ensuring the calibration function remains monotonic. The Adaptive Calibration Error (ACE) and Threshold Adaptive Calibration Error (TACE) are two other variations of ECE that use flexible binning. However, some find them sensitive to the choice of bins and thresholds, leading to inconsistencies when ranking different models. Two other approaches aim to eliminate binning altogether: MacroCE does this by averaging over instance-level calibration errors of correct and wrong predictions, and the KDE-based ECE does so by replacing the bins with non-parametric density estimators, specifically kernel density estimation (KDE).
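The equal-mass idea can be sketched in a few lines (a simplified illustration under my own naming, not the ECEsweep/ACE/TACE algorithms themselves): sort the samples by confidence and split them into bins holding roughly the same number of samples.

```python
import numpy as np

def equal_mass_ece(confidences, correct, n_bins=3):
    """Calibration error with equal-mass bins: each bin holds (roughly)
    the same number of samples instead of spanning equal widths."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    order = np.argsort(confidences)            # sort samples by confidence
    ece, n = 0.0, len(confidences)
    for bin_idx in np.array_split(order, n_bins):
        acc = correct[bin_idx].mean()          # average accuracy in bin
        conf = confidences[bin_idx].mean()     # average confidence in bin
        ece += (len(bin_idx) / n) * abs(acc - conf)
    return float(ece)
```

Because every bin is populated by construction, no bin is empty and no single bin dominates the weighted sum.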

3.3 Only maximum probabilities considered

Another frequently mentioned drawback of ECE is that it only considers the maximum estimated probabilities. The idea that more than just the maximum confidence should be calibrated, is best illustrated with a simple example:

Only Max. Probabilities — image by author — input example sourced from Schwirten

Let’s say we trained two different models and now both need to determine if the same input image contains a person, an animal or no creature. The two models output vectors with slightly different estimated probabilities, but both have the same maximum confidence for “no creature”. Since ECE only looks at these top values it would consider these two outputs to be the same. Yet, when we think of real-world applications we might want our self-driving car to act differently in one situation over the other. This restriction to the maximum confidence prompted various authors to reconsider the definition of calibration, which gives us two additional interpretations of confidence: multi-class and class-wise calibration.

3.3.1 MULTI-CLASS CALIBRATION

A model is considered multi-class calibrated if, for any prediction vector q=(q₁,…,qₖ) ∈ Δᴷ, the class proportions among all values of X for which the model outputs the same prediction p̂(X)=q match the values in the prediction vector q:

ℙ(Y = k | p̂(X) = q) = qₖ,   for all k ∈ {1, …, K} and all q ∈ Δᴷ

where (X,Y) is a datapoint and p̂ : 𝕏 → Δᴷ returns a probability vector as its output

What does this mean in simple terms? Instead of a scalar confidence c, we now calibrate against a full prediction vector q over the K classes. Let’s look at an example below:

Multi-Class Calibration — image by author

On the left we have the space of all possible prediction vectors. Let’s zoom into one such vector that our model predicted and say the model has 10 instances for which it predicted the vector q=[0.1,0.2,0.7]. Now in order for it to be multi-class calibrated, the distribution of the true (actual) class needs to match the prediction vector q. The image above shows a calibrated example with [0.1,0.2,0.7] and a not calibrated case with [0.1,0.5,0.4].

3.3.2 CLASS-WISE CALIBRATION

A model is considered class-wise calibrated if, for each class k, all inputs that share an estimated probability p̂ₖ(X) for that class align with the true frequency of class k when considered on its own:

ℙ(Y = k | p̂ₖ(X) = q) = q,   for all q ∈ [0, 1]

where (X,Y) is a datapoint and p̂ : 𝕏 → Δᴷ returns a probability vector as its output

Class-wise calibration is a weaker definition than multi-class calibration as it considers each class probability in isolation rather than needing the full vector to align. The image below illustrates this by zooming into a probability estimate for class 1 specifically: q=0.1. Yet again, we assume we have 10 instances for which the model predicted a probability estimate of 0.1 for class 1. We then look at how often class 1 actually occurs among these instances. If the empirical frequency matches q=0.1, the model is calibrated at this value.

Class-Wise Calibration — image by author

To evaluate such different notions of calibration, some updates are made to ECE to calculate a class-wise error. One idea is to calculate the ECE for each class and then take the average. Others introduce the KS-test for class-wise calibration and suggest using statistical hypothesis tests instead of ECE-based approaches. Other researchers develop a hypothesis test framework (TCal) to detect whether a model is significantly miscalibrated, and build on this by developing confidence intervals for the L2 ECE.
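The per-class averaging idea can be sketched as follows (a simplified illustration; the function name and bin-edge convention are my own choices): compute a binned calibration error on each class's probability column against that class's occurrence indicator, then average over classes.

```python
import numpy as np

def classwise_ece(probs, labels, n_bins=5):
    """Average, over classes, of a binned calibration error computed on
    each class's probability column in isolation."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    n, K = probs.shape
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    per_class = []
    for k in range(K):
        p_k = probs[:, k]                    # estimated probability of class k
        y_k = (labels == k).astype(float)    # did class k actually occur?
        err = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (p_k > lo) & (p_k <= hi)
            if lo == 0.0:
                in_bin |= p_k == 0.0
            if in_bin.any():
                err += (in_bin.sum() / n) * abs(y_k[in_bin].mean() - p_k[in_bin].mean())
        per_class.append(err)
    return float(np.mean(per_class))
```

Unlike standard ECE, this uses every entry of the probability vector, not just the maximum.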


All the approaches mentioned above share a key assumption: ground-truth labels are available. Within this gold-standard mindset a prediction is either true or false. However, annotators might unresolvably and justifiably disagree on the real label. Let’s look at a simple example below:

Gold-Standard Labelling | One-Hot-Vector —  image by author

We have the same image as in our entry example and can see that the chosen label differs between annotators. A common approach to resolving such issues in the labelling process is to use some form of aggregation. Let’s say that in our example the majority vote is selected, so we end up evaluating how well our model is calibrated against such ‘ground truth’. One might think, the image is small and pixelated; of course humans will not be certain about their choice. However, rather than being an exception such disagreements are widespread. So, when there is a lot of human disagreement in a dataset it might not be a good idea to calibrate against an aggregated ‘gold’ label. Instead of gold labels more and more researchers are using soft or smooth labels which are more representative of the human uncertainty, see example below:

Collective Opinion Labelling | Soft-label — image by author

In the same example as above, instead of aggregating the annotator votes we could simply use their frequencies to create a distribution Pᵥₒₜₑ over the labels instead, which is then our new yᵢ. This shift towards training models on collective annotator views, rather than relying on a single source-of-truth motivates another definition of calibration: calibrating the model against human uncertainty.

3.3.3 HUMAN UNCERTAINTY CALIBRATION

A model is considered human-uncertainty calibrated if, for each specific sample x, the predicted probability for each class k matches the ‘actual’ probability Pᵥₒₜₑ of that class being correct:

p̂ₖ(x) = Pᵥₒₜₑ(Y = k | X = x),   for all k ∈ {1, …, K}

where (X,Y) is a datapoint and p̂ : 𝕏 → Δᴷ returns a probability vector as its output.

This interpretation of calibration aligns the model’s prediction with human uncertainty, which means each prediction made by the model is individually reliable and matches human-level uncertainty for that instance. Let’s have a look at an example below:

Human Uncertainty Calibration — image by author

We have our sample data (left) and zoom into a single sample x with index i=1. The model’s predicted probability vector for this sample is [0.1,0.2,0.7]. If the human labelled distribution yᵢ matches this predicted vector then this sample is considered calibrated.

This definition of calibration is more granular and strict than the previous ones as it applies directly at the level of individual predictions rather than being averaged or assessed over a set of samples. It also relies heavily on having an accurate estimate of the human judgement distribution, which requires a large number of annotations per item. Datasets with such properties of annotations are gradually becoming more available.

To evaluate human uncertainty calibration the researchers introduce three new measures: the Human Entropy Calibration Error (EntCE), the Human Ranking Calibration Score (RankCS) and the Human Distribution Calibration Error (DistCE).

EntCE(xᵢ) = H(yᵢ) − H(p̂ᵢ)

where H(·) signifies entropy.

EntCE aims to capture the agreement between the model’s uncertainty H(p̂ᵢ) and the human uncertainty H(yᵢ) for a sample i. However, entropy is invariant to permutations of the probability values; in other words, it doesn’t change when you rearrange the probability values. This is visualised in the image below:

EntCE drawbacks — image by author

On the left, we can see the human label distribution yᵢ, on the right are two different model predictions for that same sample. All three distributions would have the same entropy, so comparing them would result in 0 EntCE. While this is not ideal for comparing distributions, entropy is still helpful in assessing the noise level of label distributions.
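The permutation invariance is easy to verify numerically (a small sketch; the example vectors and function name are mine):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (0 * log 0 treated as 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

human   = [0.1, 0.2, 0.7]   # human label distribution
model_a = [0.7, 0.1, 0.2]   # same values assigned to different classes

# Same entropy, so EntCE would be 0 even though the distributions differ
print(abs(entropy(human) - entropy(model_a)) < 1e-9)  # True
```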

RankCS = (1/N) Σᵢ₌₁ᴺ 1(argsort(p̂ᵢ) = argsort(yᵢ))

where argsort simply returns the indices that would sort an array.

So, RankCS checks if the sorted order of estimated probabilities p̂ᵢ matches the sorted order of yᵢ for each sample. If they match for a particular sample i one can count it as 1; if not, it can be counted as 0, which is then used to average over all samples N.¹

Since this approach uses ranking, it doesn’t care about the actual size of the probability values. The two predictions below, while not the same in class probabilities, would have the same ranking. This is helpful in assessing the overall ranking capability of models and looks beyond just the maximum confidence. At the same time, though, it doesn’t fully capture human uncertainty calibration as it ignores the actual probability values.

RankCS drawbacks  — image by author
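A minimal sketch of the ranking check (the function name is mine; the paper’s formulation is more general, see the footnote):

```python
import numpy as np

def rank_cs(model_probs, human_probs):
    """Fraction of samples whose class ranking under the model matches
    the ranking implied by the human label distribution."""
    matches = [
        np.array_equal(np.argsort(p), np.argsort(y))
        for p, y in zip(np.asarray(model_probs), np.asarray(human_probs))
    ]
    return float(np.mean(matches))

human = [[0.1, 0.2, 0.7]]
print(rank_cs([[0.2, 0.3, 0.5]], human))  # 1.0 -> same ranking despite different values
print(rank_cs([[0.5, 0.3, 0.2]], human))  # 0.0 -> ranking reversed
```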

DistCE has been proposed as an additional evaluation for this notion of calibration. It simply uses the total variation distance (TVD) between the two distributions, which reflects how much they diverge from one another. DistCE and EntCE capture instance-level information. So to get a feeling for the full dataset, one can simply take the expected value of the absolute value of each measure: E[∣DistCE∣] and E[∣EntCE∣]. Perhaps future efforts will introduce further measures that combine the benefits of ranking and noise estimation for this notion of calibration.
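DistCE as total variation distance can be sketched directly (function name mine):

```python
import numpy as np

def dist_ce(model_probs, human_probs):
    """Total variation distance between the predicted and the human
    label distributions for a single sample."""
    p = np.asarray(model_probs, dtype=float)
    q = np.asarray(human_probs, dtype=float)
    return float(0.5 * np.abs(p - q).sum())

# Identical distributions -> 0; moving 0.1 of mass between two classes -> 0.1
print(dist_ce([0.1, 0.2, 0.7], [0.1, 0.2, 0.7]))  # 0.0
print(round(dist_ce([0.1, 0.2, 0.7], [0.2, 0.1, 0.7]), 4))  # 0.1
```

Unlike RankCS, this is sensitive to the actual probability values, not just their ordering.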

4 Final thoughts

We have run through the most common definition of calibration, the shortcomings of ECE and several newer notions of calibration. We also touched on some of the newly proposed evaluation measures and their shortcomings. Despite several works arguing against the use of ECE for evaluating calibration, it remains widely used. The aim of this blog post is to draw attention to these works and their alternative approaches. Determining which notion of calibration best fits a specific context, and how to evaluate it, helps avoid misleading results. Maybe, however, ECE is simply so easy, intuitive and just good enough for most applications that it is here to stay?

This was accepted at the ICLR conference Blog Post Track & is estimated to appear on the site ~ April

In the meantime, you can cite/reference the ArXiv preprint.

Footnotes

¹In the paper it is stated more generally: If the argsorts match, it means the ranking is aligned, contributing to the overall RankCS score.

Model Calibration, Explained: A Visual Guide with Code Examples for Beginners (Fri, 10 Jan 2025)
When all models have similar accuracy, now what?

The post Model Calibration, Explained: A Visual Guide with Code Examples for Beginners appeared first on Towards Data Science.

MODEL EVALUATION & OPTIMIZATION

You’ve trained several Classification models, and they all seem to be performing well with high accuracy scores. Congratulations!

But hold on – is one model truly better than the others? Accuracy alone doesn’t tell the whole story. What if one model consistently overestimates its confidence, while another underestimates it? This is where model calibration comes in.

Here, we’ll see what model calibration is and explore how to assess the reliability of your models’ predictions – using visuals and practical code examples to show you how to identify calibration issues. Get ready to go beyond accuracy and light up the true potential of your machine learning models!

All visuals: Author-created using Canva Pro. Optimized for mobile; may appear oversized on desktop.

Understanding Calibration

Model calibration measures how well a model’s prediction probabilities match its actual performance. A model that gives a 70% probability score should be correct 70% of the time for similar predictions. This means its probability scores should reflect the true likelihood of its predictions being correct.

Why Calibration Matters

While accuracy tells us how often a model is correct overall, calibration tells us whether we can trust its probability scores. Two models might both have 90% accuracy, but one might give realistic probability scores while the other gives overly confident predictions. In many real applications, having reliable probability scores is just as important as having correct predictions.

Two models that are equally accurate (70% correct) show different levels of confidence in their predictions. Model A uses balanced probability scores (0.3 and 0.7) while Model B only uses extreme probabilities (0.0 and 1.0), showing it’s either completely sure or completely unsure about each prediction.

Perfect Calibration vs. Reality

A perfectly calibrated model would show a direct match between its prediction probabilities and actual success rates: When it predicts with 90% probability, it should be correct 90% of the time. The same applies to all probability levels.

However, most models aren’t perfectly calibrated. They can be:

  • Overconfident: giving probability scores that are too high for their actual performance
  • Underconfident: giving probability scores that are too low for their actual performance
  • Both: overconfident in some ranges and underconfident in others
Four models with the same accuracy (70%) showing different calibration patterns. The overconfident model makes extreme predictions (0.0 or 1.0), while the underconfident model stays close to 0.5. The over-and-under confident model switches between extremes and middle values. The well-calibrated model uses reasonable probabilities (0.3 for ‘NO’ and 0.7 for ‘YES’) that match its actual performance.

This mismatch between predicted probabilities and actual correctness can lead to poor decision-making when using these models in real applications. This is why understanding and improving model calibration is necessary for building reliable machine learning systems.
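One common way to spot these mismatches is a reliability diagram: bin the predictions by probability and compare each bin’s average predicted probability with the observed frequency of the positive class. Here is a minimal numpy sketch for the binary case (the function name is mine; scikit-learn offers a similar helper, `sklearn.calibration.calibration_curve`):

```python
import numpy as np

def reliability_points(y_prob, y_true, n_bins=5):
    """Per-bin (mean predicted probability, observed frequency) pairs for a
    binary classifier -- the points plotted in a reliability diagram."""
    y_prob = np.asarray(y_prob, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    points = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi < 1.0:
            in_bin = (y_prob >= lo) & (y_prob < hi)
        else:  # last bin includes probability 1.0
            in_bin = (y_prob >= lo) & (y_prob <= hi)
        if in_bin.any():
            points.append((y_prob[in_bin].mean(), y_true[in_bin].mean()))
    return points  # a well-calibrated model keeps these near the diagonal
```

For a well-calibrated model the points lie close to the diagonal; systematic deviations above or below it indicate under- or overconfidence.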

📊 Dataset Used

To explore model calibration, we’ll continue with the same dataset used in my previous articles on Classification Algorithms: predicting whether someone will play golf or not based on weather conditions.

Columns: ‘Outlook’ (one-hot-encoded into 3 columns), ‘Temperature’ (in Fahrenheit), ‘Humidity’ (in %), ‘Wind’ (True/False) and ‘Play’ (Yes/No, target feature)
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Create and prepare dataset
dataset_dict = {
    'Outlook': ['sunny', 'sunny', 'overcast', 'rainy', 'rainy', 'rainy', 'overcast', 
                'sunny', 'sunny', 'rainy', 'sunny', 'overcast', 'overcast', 'rainy',
                'sunny', 'overcast', 'rainy', 'sunny', 'sunny', 'rainy', 'overcast',
                'rainy', 'sunny', 'overcast', 'sunny', 'overcast', 'rainy', 'overcast'],
    'Temperature': [85.0, 80.0, 83.0, 70.0, 68.0, 65.0, 64.0, 72.0, 69.0, 75.0, 75.0,
                   72.0, 81.0, 71.0, 81.0, 74.0, 76.0, 78.0, 82.0, 67.0, 85.0, 73.0,
                   88.0, 77.0, 79.0, 80.0, 66.0, 84.0],
    'Humidity': [85.0, 90.0, 78.0, 96.0, 80.0, 70.0, 65.0, 95.0, 70.0, 80.0, 70.0,
                 90.0, 75.0, 80.0, 88.0, 92.0, 85.0, 75.0, 92.0, 90.0, 85.0, 88.0,
                 65.0, 70.0, 60.0, 95.0, 70.0, 78.0],
    'Wind': [False, True, False, False, False, True, True, False, False, False, True,
             True, False, True, True, False, False, True, False, True, True, False,
             True, False, False, True, False, False],
    'Play': ['No', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'No', 'Yes', 'Yes', 'Yes',
             'Yes', 'Yes', 'No', 'No', 'Yes', 'Yes', 'No', 'No', 'No', 'Yes', 'Yes',
             'Yes', 'Yes', 'Yes', 'Yes', 'No', 'Yes']
}
# Prepare data
df = pd.DataFrame(dataset_dict)

Before training our models, we normalized numerical weather measurements through standard scaling and transformed categorical features with one-hot encoding. These preprocessing steps ensure all models can effectively use the data while maintaining fair comparisons between them.

from sklearn.preprocessing import StandardScaler
df = pd.get_dummies(df, columns=['Outlook'], prefix='', prefix_sep='', dtype=int)
df['Wind'] = df['Wind'].astype(int)
df['Play'] = (df['Play'] == 'Yes').astype(int)

# Rearrange columns
column_order = ['sunny', 'overcast', 'rainy', 'Temperature', 'Humidity', 'Wind', 'Play']
df = df[column_order]

# Prepare features and target
X,y = df.drop('Play', axis=1), df['Play']
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.5, shuffle=False)

# Scale numerical features
scaler = StandardScaler()
X_train[['Temperature', 'Humidity']] = scaler.fit_transform(X_train[['Temperature', 'Humidity']])
X_test[['Temperature', 'Humidity']] = scaler.transform(X_test[['Temperature', 'Humidity']])

Models and Training

For this exploration, we trained four classification models to similar accuracy scores:

  • K-Nearest Neighbors (kNN)
  • Bernoulli Naive Bayes
  • Logistic Regression
  • Multi-Layer Perceptron (MLP)

For those curious about how these algorithms make their predictions and compute their probabilities, you can refer to this article:

Predicted Probability, Explained: A Visual Guide with Code Examples for Beginners

While these models achieved the same accuracy in this simple problem, they calculate their prediction probabilities differently.

Even though the four models are correct 85.7% of the time, they show different levels of confidence in their predictions. Here, the MLP model tends to be very sure about its answers (giving values close to 1.0), while the kNN model is more careful, giving more varied confidence scores.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import BernoulliNB

# Initialize the models with the found parameters
knn = KNeighborsClassifier(n_neighbors=4, weights='distance')
bnb = BernoulliNB()
lr = LogisticRegression(C=1, random_state=42)
mlp = MLPClassifier(hidden_layer_sizes=(4, 2),random_state=42, max_iter=2000)

# Train all models
models = {
    'KNN': knn,
    'BNB': bnb,
    'LR': lr,
    'MLP': mlp
}

for name, model in models.items():
    model.fit(X_train, y_train)

# Create predictions and probabilities for each model
results_dict = {
    'True Labels': y_test
}

for name, model in models.items():
    results_dict[f'{name} Prob'] = model.predict_proba(X_test)[:, 1]

# Create results dataframe
results_df = pd.DataFrame(results_dict)

# Print predictions and probabilities
print("\nPredictions and Probabilities:")
print(results_df)

# Print accuracies
print("\nAccuracies:")
for name, model in models.items():
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {accuracy:.3f}")

Through these differences, we’ll explore why we need to look beyond accuracy.

Measuring Calibration

To assess how well a model’s prediction probabilities match its actual performance, we use several methods and metrics. These measurements help us understand whether our model’s confidence levels are reliable.

Brier Score

The Brier Score measures the mean squared difference between predicted probabilities and actual outcomes. It ranges from 0 to 1, where lower scores indicate better calibration. This score is particularly useful because it considers both calibration and accuracy together.

The score (0.148) shows how well the model's confidence matches its actual performance. It's found by comparing the model's predicted chances with what actually happened (0 for 'NO', 1 for 'YES'), where smaller differences mean better predictions.
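As a quick illustration (with small hypothetical labels and probabilities, not the golf dataset above), the Brier Score can be computed by hand and checked against scikit-learn's brier_score_loss:

```python
import numpy as np
from sklearn.metrics import brier_score_loss

# Hypothetical test labels (1 = 'YES', 0 = 'NO') and predicted probabilities
y_true = np.array([1, 0, 1, 0])
y_prob = np.array([0.9, 0.3, 0.6, 0.1])

# Brier Score: mean squared difference between predicted probability and outcome
brier = np.mean((y_prob - y_true) ** 2)

print(f"Brier Score: {brier:.4f}")  # 0.0675
assert np.isclose(brier, brier_score_loss(y_true, y_prob))
```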

Log Loss

Log Loss calculates the negative log probability of correct predictions. This metric is especially sensitive to confident but wrong predictions – when a model says it’s 90% sure but is wrong, it receives a much larger penalty than when it’s 60% sure and wrong. Lower values indicate better calibration.

For each prediction, it looks at how confident the model was in the correct answer. When the model is very confident but wrong (like in index 26), it gets a bigger penalty. The final score of 0.455 is the average of all these penalties, where lower numbers mean better predictions.
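With the same hypothetical labels and probabilities as before, Log Loss can be computed by averaging the per-prediction penalties and checked against scikit-learn's log_loss:

```python
import numpy as np
from sklearn.metrics import log_loss

# Hypothetical test labels and predicted probabilities (not the golf data)
y_true = np.array([1, 0, 1, 0])
y_prob = np.array([0.9, 0.3, 0.6, 0.1])

# Penalty per prediction: negative log of the probability given to the true class
penalties = -np.where(y_true == 1, np.log(y_prob), np.log(1 - y_prob))
loss = penalties.mean()

print(f"Log Loss: {loss:.4f}")
assert np.isclose(loss, log_loss(y_true, y_prob))
```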

Expected Calibration Error (ECE)

ECE measures the average difference between predicted probability and actual outcome rate (taken as the average of the labels in each bin), weighted by how many predictions fall into each probability group. This metric helps us understand whether our model has systematic biases in its probability estimates.

The predictions are grouped into 5 bins based on how confident the model was. For each group, we compare the model's average confidence to how often it was actually right. The final score (0.1502) tells us how well these match up, where lower numbers are better.
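To make the binning concrete, here is a tiny hypothetical example (five predictions, five equal-width bins) computed the same way:

```python
import numpy as np

# Five hypothetical predictions grouped into 5 equal-width probability bins
y_prob = np.array([0.9, 0.8, 0.7, 0.3, 0.2])
y_true = np.array([1,   1,   0,   0,   0])

edges = np.linspace(0, 1, 6)  # bin boundaries: 0.0, 0.2, ..., 1.0
ece = 0.0
for lo, hi in zip(edges[:-1], edges[1:]):
    # Last bin is closed on the right so that a probability of 1.0 is counted
    mask = (y_prob >= lo) & (y_prob < hi) if hi < 1 else (y_prob >= lo)
    if mask.sum() > 0:
        gap = abs(y_prob[mask].mean() - y_true[mask].mean())
        ece += gap * mask.sum() / len(y_prob)  # weight by bin size

print(f"ECE: {ece:.2f}")  # 0.30
```

Only three bins are populated here: [0.2, 0.4) contributes 0.25 · 2/5, [0.6, 0.8) contributes 0.70 · 1/5, and [0.8, 1.0] contributes 0.15 · 2/5, giving 0.30 in total.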

Reliability Diagrams

Similar to ECE, a reliability diagram (or calibration curve) visualizes Model Calibration by binning predictions and comparing them to actual outcomes. While ECE gives us a single number measuring calibration error, the reliability diagram shows us the same information graphically. We use the same binning approach and calculate the actual frequency of positive outcomes in each bin. When plotted, these points show us exactly where our model’s predictions deviate from perfect calibration, which would appear as a diagonal line.

Like ECE, the predictions are grouped into 5 bins based on confidence levels. Each dot shows how often the model was actually right (up/down) compared to how confident it was (left/right). The dotted line shows perfect matching - the model's curve shows it sometimes thinks it's better or worse than it really is.

Comparing Calibration Metrics

Each of these metrics shows different aspects of calibration problems:

  • A high Brier Score suggests overall poor probability estimates.
  • High Log Loss points to overconfident wrong predictions.
  • A high ECE indicates systematic bias in probability estimates.

Together, these metrics give us a complete picture of how well our model’s probability scores reflect its true performance.

Our Models

For our models, let’s calculate the calibration metrics and draw their calibration curves:

from sklearn.metrics import brier_score_loss, log_loss
from sklearn.calibration import calibration_curve
import matplotlib.pyplot as plt

# Define ECE (the same helper used in the summarized code at the end of the article)
def calculate_ece(y_true, y_prob, n_bins=5):
    bins = np.linspace(0, 1, n_bins + 1)
    ece = 0
    for bin_lower, bin_upper in zip(bins[:-1], bins[1:]):
        mask = (y_prob >= bin_lower) & (y_prob < bin_upper)
        if np.sum(mask) > 0:
            bin_conf = np.mean(y_prob[mask])
            bin_acc = np.mean(y_true[mask])
            ece += np.abs(bin_conf - bin_acc) * np.sum(mask)
    return ece / len(y_true)

# Initialize models
models = {
    'k-Nearest Neighbors': KNeighborsClassifier(n_neighbors=4, weights='distance'),
    'Bernoulli Naive Bayes': BernoulliNB(),
    'Logistic Regression': LogisticRegression(C=1.5, random_state=42),
    'Multilayer Perceptron': MLPClassifier(hidden_layer_sizes=(4, 2), random_state=42, max_iter=2000)
}

# Get predictions and calculate metrics
metrics_dict = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_prob = model.predict_proba(X_test)[:, 1]
    metrics_dict[name] = {
        'Brier Score': brier_score_loss(y_test, y_prob),
        'Log Loss': log_loss(y_test, y_prob),
        'ECE': calculate_ece(y_test, y_prob),
        'Probabilities': y_prob
    }

# Plot calibration curves
fig, axes = plt.subplots(2, 2, figsize=(8, 8), dpi=300)
colors = ['orangered', 'slategrey', 'gold', 'mediumorchid']

for idx, (name, metrics) in enumerate(metrics_dict.items()):
    ax = axes.ravel()[idx]
    prob_true, prob_pred = calibration_curve(y_test, metrics['Probabilities'], 
                                           n_bins=5, strategy='uniform')

    ax.plot([0, 1], [0, 1], 'k--', label='Perfectly calibrated')
    ax.plot(prob_pred, prob_true, color=colors[idx], marker='o', 
            label='Calibration curve', linewidth=2, markersize=8)

    title = f'{name}\nBrier: {metrics["Brier Score"]:.3f} | Log Loss: {metrics["Log Loss"]:.3f} | ECE: {metrics["ECE"]:.3f}'
    ax.set_title(title, fontsize=11, pad=10)
    ax.grid(True, alpha=0.7)
    ax.set_xlim([-0.05, 1.05])
    ax.set_ylim([-0.05, 1.05])
    ax.spines[['top', 'right', 'left', 'bottom']].set_visible(False)
    ax.legend(fontsize=10, loc='upper left')

plt.tight_layout()
plt.show()

Now, let’s analyze the calibration performance of each model based on those metrics:

The k-Nearest Neighbors (KNN) model performs well at estimating how certain it should be about its predictions. Its graph line stays close to the dotted line, which shows good performance. It has solid scores – a Brier score of 0.148 and the best ECE score of 0.090. While it sometimes shows too much confidence in the middle range, it generally makes reliable estimates about its certainty.

The Bernoulli Naive Bayes model shows an unusual stair-step pattern in its line. This means it jumps between different levels of certainty instead of changing smoothly. While it has the same Brier score as KNN (0.148), its higher ECE of 0.150 shows it’s less accurate at estimating its certainty. The model switches between being too confident and not confident enough.

The Logistic Regression model shows clear issues with its predictions. Its line moves far away from the dotted line, meaning it often misjudges how certain it should be. It has the worst ECE score (0.181) and a poor Brier score (0.164). The model consistently shows too much confidence in its predictions, making it unreliable.

The Multilayer Perceptron shows a distinct problem. Despite having the best Brier score (0.129), its line reveals that it mostly makes extreme predictions – either very certain or very uncertain, with little in between. Its high ECE (0.167) and flat line in the middle ranges show it struggles to make balanced certainty estimates.

After examining all four models, the k-Nearest Neighbors clearly performs best at estimating its prediction certainty. It maintains consistent performance across different levels of certainty and shows the most reliable pattern in its predictions. While other models might score well in certain measures (like the Multilayer Perceptron’s Brier score), their graphs reveal they aren’t as reliable when we need to trust their certainty estimates.

Final Remark

When choosing between different models, we need to consider both their accuracy and calibration quality. A model with slightly lower accuracy but better calibration might be more valuable than a highly accurate model with poor probability estimates.

By understanding calibration and its importance, we can build more reliable machine learning systems that users can trust not just for their predictions, but also for their confidence in those predictions.

🌟 Model Calibration Code Summarized (1 Model)

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import brier_score_loss, log_loss
from sklearn.calibration import calibration_curve
import matplotlib.pyplot as plt

# Define ECE
def calculate_ece(y_true, y_prob, n_bins=5):
    bins = np.linspace(0, 1, n_bins + 1)
    ece = 0
    for bin_lower, bin_upper in zip(bins[:-1], bins[1:]):
        mask = (y_prob >= bin_lower) & (y_prob < bin_upper)
        if np.sum(mask) > 0:
            bin_conf = np.mean(y_prob[mask])
            bin_acc = np.mean(y_true[mask])
            ece += np.abs(bin_conf - bin_acc) * np.sum(mask)
    return ece / len(y_true)

# Create dataset and prepare data
dataset_dict = {
    'Outlook': ['sunny', 'sunny', 'overcast', 'rainy', 'rainy', 'rainy', 'overcast','sunny', 'sunny', 'rainy', 'sunny', 'overcast', 'overcast', 'rainy','sunny', 'overcast', 'rainy', 'sunny', 'sunny', 'rainy', 'overcast','rainy', 'sunny', 'overcast', 'sunny', 'overcast', 'rainy', 'overcast'],
    'Temperature': [85.0, 80.0, 83.0, 70.0, 68.0, 65.0, 64.0, 72.0, 69.0, 75.0, 75.0,72.0, 81.0, 71.0, 81.0, 74.0, 76.0, 78.0, 82.0, 67.0, 85.0, 73.0,88.0, 77.0, 79.0, 80.0, 66.0, 84.0],
    'Humidity': [85.0, 90.0, 78.0, 96.0, 80.0, 70.0, 65.0, 95.0, 70.0, 80.0, 70.0,90.0, 75.0, 80.0, 88.0, 92.0, 85.0, 75.0, 92.0, 90.0, 85.0, 88.0,65.0, 70.0, 60.0, 95.0, 70.0, 78.0],
    'Wind': [False, True, False, False, False, True, True, False, False, False, True,True, False, True, True, False, False, True, False, True, True, False,True, False, False, True, False, False],
    'Play': ['No', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'No', 'Yes', 'Yes', 'Yes','Yes', 'Yes', 'No', 'No', 'Yes', 'Yes', 'No', 'No', 'No', 'Yes', 'Yes','Yes', 'Yes', 'Yes', 'Yes', 'No', 'Yes']
}

# Prepare and encode data
df = pd.DataFrame(dataset_dict)
df = pd.get_dummies(df, columns=['Outlook'], prefix='', prefix_sep='', dtype=int)
df['Wind'] = df['Wind'].astype(int)
df['Play'] = (df['Play'] == 'Yes').astype(int)
df = df[['sunny', 'overcast', 'rainy', 'Temperature', 'Humidity', 'Wind', 'Play']]

# Split and scale data
X, y = df.drop('Play', axis=1), df['Play']
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.5, shuffle=False)
scaler = StandardScaler()
X_train[['Temperature', 'Humidity']] = scaler.fit_transform(X_train[['Temperature', 'Humidity']])
X_test[['Temperature', 'Humidity']] = scaler.transform(X_test[['Temperature', 'Humidity']])

# Train model and get predictions
model = BernoulliNB()
model.fit(X_train, y_train)
y_prob = model.predict_proba(X_test)[:, 1]

# Calculate metrics
metrics = {
    'Brier Score': brier_score_loss(y_test, y_prob),
    'Log Loss': log_loss(y_test, y_prob),
    'ECE': calculate_ece(y_test, y_prob)
}

# Plot calibration curve
plt.figure(figsize=(6, 6), dpi=300)
prob_true, prob_pred = calibration_curve(y_test, y_prob, n_bins=5, strategy='uniform')

plt.plot([0, 1], [0, 1], 'k--', label='Perfectly calibrated')
plt.plot(prob_pred, prob_true, color='slategrey', marker='o', 
        label='Calibration curve', linewidth=2, markersize=8)

title = f'Bernoulli Naive Bayes\nBrier: {metrics["Brier Score"]:.3f} | Log Loss: {metrics["Log Loss"]:.3f} | ECE: {metrics["ECE"]:.3f}'
plt.title(title, fontsize=11, pad=10)
plt.grid(True, alpha=0.7)
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.gca().spines[['top', 'right', 'left', 'bottom']].set_visible(False)
plt.legend(fontsize=10, loc='lower right')

plt.tight_layout()
plt.show()

🌟 Model Calibration Code Summarized (4 Models)

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import brier_score_loss, log_loss
from sklearn.calibration import calibration_curve
import matplotlib.pyplot as plt

# Define ECE
def calculate_ece(y_true, y_prob, n_bins=5):
    bins = np.linspace(0, 1, n_bins + 1)
    ece = 0
    for bin_lower, bin_upper in zip(bins[:-1], bins[1:]):
        mask = (y_prob >= bin_lower) & (y_prob < bin_upper)
        if np.sum(mask) > 0:
            bin_conf = np.mean(y_prob[mask])
            bin_acc = np.mean(y_true[mask])
            ece += np.abs(bin_conf - bin_acc) * np.sum(mask)
    return ece / len(y_true)

# Create dataset and prepare data
dataset_dict = {
    'Outlook': ['sunny', 'sunny', 'overcast', 'rainy', 'rainy', 'rainy', 'overcast','sunny', 'sunny', 'rainy', 'sunny', 'overcast', 'overcast', 'rainy','sunny', 'overcast', 'rainy', 'sunny', 'sunny', 'rainy', 'overcast','rainy', 'sunny', 'overcast', 'sunny', 'overcast', 'rainy', 'overcast'],
    'Temperature': [85.0, 80.0, 83.0, 70.0, 68.0, 65.0, 64.0, 72.0, 69.0, 75.0, 75.0,72.0, 81.0, 71.0, 81.0, 74.0, 76.0, 78.0, 82.0, 67.0, 85.0, 73.0,88.0, 77.0, 79.0, 80.0, 66.0, 84.0],
    'Humidity': [85.0, 90.0, 78.0, 96.0, 80.0, 70.0, 65.0, 95.0, 70.0, 80.0, 70.0,90.0, 75.0, 80.0, 88.0, 92.0, 85.0, 75.0, 92.0, 90.0, 85.0, 88.0,65.0, 70.0, 60.0, 95.0, 70.0, 78.0],
    'Wind': [False, True, False, False, False, True, True, False, False, False, True,True, False, True, True, False, False, True, False, True, True, False,True, False, False, True, False, False],
    'Play': ['No', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'No', 'Yes', 'Yes', 'Yes','Yes', 'Yes', 'No', 'No', 'Yes', 'Yes', 'No', 'No', 'No', 'Yes', 'Yes','Yes', 'Yes', 'Yes', 'Yes', 'No', 'Yes']
}

# Prepare and encode data
df = pd.DataFrame(dataset_dict)
df = pd.get_dummies(df, columns=['Outlook'], prefix='', prefix_sep='', dtype=int)
df['Wind'] = df['Wind'].astype(int)
df['Play'] = (df['Play'] == 'Yes').astype(int)
df = df[['sunny', 'overcast', 'rainy', 'Temperature', 'Humidity', 'Wind', 'Play']]

# Split and scale data
X, y = df.drop('Play', axis=1), df['Play']
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.5, shuffle=False)
scaler = StandardScaler()
X_train[['Temperature', 'Humidity']] = scaler.fit_transform(X_train[['Temperature', 'Humidity']])
X_test[['Temperature', 'Humidity']] = scaler.transform(X_test[['Temperature', 'Humidity']])

# Initialize models
models = {
    'k-Nearest Neighbors': KNeighborsClassifier(n_neighbors=4, weights='distance'),
    'Bernoulli Naive Bayes': BernoulliNB(),
    'Logistic Regression': LogisticRegression(C=1.5, random_state=42),
    'Multilayer Perceptron': MLPClassifier(hidden_layer_sizes=(4, 2), random_state=42, max_iter=2000)
}

# Get predictions and calculate metrics
metrics_dict = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_prob = model.predict_proba(X_test)[:, 1]
    metrics_dict[name] = {
        'Brier Score': brier_score_loss(y_test, y_prob),
        'Log Loss': log_loss(y_test, y_prob),
        'ECE': calculate_ece(y_test, y_prob),
        'Probabilities': y_prob
    }

# Plot calibration curves
fig, axes = plt.subplots(2, 2, figsize=(8, 8), dpi=300)
colors = ['orangered', 'slategrey', 'gold', 'mediumorchid']

for idx, (name, metrics) in enumerate(metrics_dict.items()):
    ax = axes.ravel()[idx]
    prob_true, prob_pred = calibration_curve(y_test, metrics['Probabilities'], 
                                           n_bins=5, strategy='uniform')

    ax.plot([0, 1], [0, 1], 'k--', label='Perfectly calibrated')
    ax.plot(prob_pred, prob_true, color=colors[idx], marker='o', 
            label='Calibration curve', linewidth=2, markersize=8)

    title = f'{name}\nBrier: {metrics["Brier Score"]:.3f} | Log Loss: {metrics["Log Loss"]:.3f} | ECE: {metrics["ECE"]:.3f}'
    ax.set_title(title, fontsize=11, pad=10)
    ax.grid(True, alpha=0.7)
    ax.set_xlim([-0.05, 1.05])
    ax.set_ylim([-0.05, 1.05])
    ax.spines[['top', 'right', 'left', 'bottom']].set_visible(False)
    ax.legend(fontsize=10, loc='upper left')

plt.tight_layout()
plt.show()

Technical Environment

This article uses Python 3.7 and scikit-learn 1.5. While the concepts discussed are generally applicable, specific code implementations may vary slightly with different versions.

About the Illustrations

Unless otherwise noted, all images are created by the author, incorporating licensed design elements from Canva Pro.

𝙎𝙚𝙚 𝙢𝙤𝙧𝙚 𝙈𝙤𝙙𝙚𝙡 𝙀𝙫𝙖𝙡𝙪𝙖𝙩𝙞𝙤𝙣 & 𝙊𝙥𝙩𝙞𝙢𝙞𝙯𝙖𝙩𝙞𝙤𝙣 𝙢𝙚𝙩𝙝𝙤𝙙𝙨 𝙝𝙚𝙧𝙚:

Model Evaluation & Optimization

𝙔𝙤𝙪 𝙢𝙞𝙜𝙝𝙩 𝙖𝙡𝙨𝙤 𝙡𝙞𝙠𝙚:

Ensemble Learning

Classification Algorithms

The post Model Calibration, Explained: A Visual Guide with Code Examples for Beginners appeared first on Towards Data Science.

Neural Network Calibration using PyTorch https://towardsdatascience.com/neural-network-calibration-using-pytorch-c44b7221a61/ Thu, 24 Sep 2020 11:26:39 +0000 https://towardsdatascience.com/neural-network-calibration-using-pytorch-c44b7221a61/ Make your model usable for safety-critical applications with a few lines of code.

Photo by Greg Shield on Unsplash

Imagine you are a radiologist working in a new high-tech hospital. Last week you got your first neural-network-based model to assist you in making diagnoses from your patients' data, hopefully improving your accuracy. But wait! Very much like us humans, synthetic models are never 100% accurate in their predictions. So how do we know whether a model is absolutely certain, or whether it just barely surpasses the point of guessing? This knowledge is crucial for correct interpretation and key for selecting the appropriate treatment.

Assuming you’re more of an engineer: This scenario is also highly relevant for autonomous driving where a car constantly has to make decisions whether there is an obstacle in front of it or not. Ignoring uncertainties can get ugly real quick here.

If you are like 90% of the Deep Learning community (including past me), you just assumed that the predictions produced by the Softmax function represent probabilities, since they are neatly squashed into the domain [0, 1]. This is a popular pitfall, because these predictions generally tend to be overconfident. As we'll see soon, this behaviour is affected by a variety of architectural choices, such as the use of Batch Normalization or the number of layers.

You can find an interactive Google Colab notebook with all the code here.

Reliability Plots

As we know now, it is desirable to output calibrated confidences instead of their raw counterparts. To get an intuitive understanding of how well a specific architecture performs in this regard, Reliability Diagrams are often used.

Reliability Plot for a ResNet101 trained for 10 Epochs on CIFAR10 (Image by author)

Summarized in one sentence, reliability plots show how well the predicted confidence scores hold up against their actual accuracy. Hence, given 100 predictions, each with a confidence of 0.9, we expect 90% of them to be correct if the model is perfectly calibrated.

To fully understand what's going on we need to dig a bit deeper. As we can see from the plot, all the confidence scores of the test set are binned into M = 10 distinct bins [0, 0.1), [0.1, 0.2), …, [0.9, 1]. For each bin Bₘ we can then calculate its accuracy

acc(Bₘ) = (1/|Bₘ|) · Σᵢ∈Bₘ 1(ŷᵢ = yᵢ)

and confidence

conf(Bₘ) = (1/|Bₘ|) · Σᵢ∈Bₘ p̂ᵢ

Both values are then visualized as a bar plot, with the identity line indicating perfect calibration.
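As a sketch of this binning step (with made-up confidence scores and correctness flags standing in for real test-set results), the per-bin accuracy and confidence can be computed like this:

```python
import numpy as np

# Hypothetical confidence scores and correctness indicators for a test set
confidences = np.array([0.95, 0.91, 0.99, 0.87, 0.64, 0.72, 0.55, 0.83])
correct     = np.array([1,    0,    1,    1,    1,    0,    1,    1])

M = 10  # number of bins, as in the reliability plot
edges = np.linspace(0.0, 1.0, M + 1)

for lo, hi in zip(edges[:-1], edges[1:]):
    # Last bin is closed on the right so a confidence of 1.0 is counted
    mask = (confidences >= lo) & (confidences < hi) if hi < 1 else (confidences >= lo)
    if mask.sum() > 0:
        acc = correct[mask].mean()       # acc(B_m): fraction of correct predictions
        conf = confidences[mask].mean()  # conf(B_m): average confidence
        print(f"[{lo:.1f}, {hi:.1f}): acc={acc:.2f}, conf={conf:.2f}")
```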

Metrics

Diagrams and plots are just one side of the story. In order to score a model based on its Calibration Error we need to define metrics. Fortunately, both metrics most often used here are really intuitive.

The Expected Calibration Error (ECE) simply takes a weighted average over the absolute accuracy/confidence differences:

ECE = Σₘ₌₁ᴹ (|Bₘ|/n) · |acc(Bₘ) − conf(Bₘ)|

For safety-critical applications, like the ones described above, it may be useful to measure the maximum discrepancy between accuracy and confidence instead. This can be accomplished by using the Maximum Calibration Error (MCE):

MCE = maxₘ |acc(Bₘ) − conf(Bₘ)|
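Both metrics can be sketched in a few lines of NumPy (the inputs below are hypothetical; real confidence scores and correctness flags would come from a test set):

```python
import numpy as np

def ece_mce(confidences, correct, M=10):
    """Expected and Maximum Calibration Error over M equal-width bins."""
    edges = np.linspace(0.0, 1.0, M + 1)
    n = len(confidences)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Last bin is closed on the right so a confidence of 1.0 is counted
        mask = (confidences >= lo) & (confidences < hi) if hi < 1 else (confidences >= lo)
        if mask.sum() > 0:
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap  # weighted average gap
            mce = max(mce, gap)            # worst-case gap
    return ece, mce

# Hypothetical example: three predictions, three populated bins
conf = np.array([0.95, 0.85, 0.15])
corr = np.array([1, 0, 0])
e, m = ece_mce(conf, corr)
print(f"ECE={e:.2f}, MCE={m:.2f}")  # ECE=0.35, MCE=0.85
```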

Temperature Scaling

We now want to focus on how to tackle this issue. While many solutions exist, such as Histogram Binning, Isotonic Regression, Bayesian Binning into Quantiles (BBQ) and Platt Scaling (with their corresponding extensions for multiclass problems), we want to focus on Temperature Scaling. This is because it is the easiest to implement while giving the best results among the algorithms named above.

To fully understand it we need to take a step back and look at the outputs of a neural network. Assuming a multi-class problem, the last layer of a network outputs the logits zᵢ ∈ ℝ. The predicted probabilities can then be obtained using the Softmax function σ:

σ(z)ᵢ = exp(zᵢ) / Σⱼ exp(zⱼ)

Temperature scaling works directly on the logits zᵢ (not the predicted probabilities!) and scales them using a single parameter T > 0 for all classes. The calibrated confidence is then obtained by

q̂ = maxᵢ σ(z/T)ᵢ

It is important to note that the parameter T is optimized with respect to the Negative Log-Likelihood (NLL) loss on the validation set, while the network's parameters are kept fixed during this stage.

Results

Reliability Plot for a ResNet101 trained for 10 Epochs on CIFAR10 and calibrated using Temperature Scaling (Image by author)

As we can see from the figure, the bars are now way closer to the identity line, indicating almost perfect calibration. We can also see this looking at the metrics. The ECE dropped from 2.10% to 0.25% and the MCE from 27.27% to 3.86%, which is a drastic improvement.

Implementation in PyTorch

As promised, the implementation in PyTorch is rather straightforward.

First we define the T_scaling method returning the calibrated confidences given a specific temperature T together with the logits.
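The original gist is not embedded here; a minimal sketch of such a method (following the article's description of dividing the logits by T before applying the softmax; the exact signature is an assumption) could look like this:

```python
import torch

def T_scaling(logits, temperature):
    # Scale the logits by a single temperature parameter; softmax is applied afterwards
    return torch.div(logits, temperature)

# With T > 1 the resulting softmax distribution is softened (less confident)
logits = torch.tensor([[3.0, 1.0, 0.5]])
raw = torch.softmax(logits, dim=1)
calibrated = torch.softmax(T_scaling(logits, torch.tensor(2.0)), dim=1)
print(raw.max().item(), calibrated.max().item())
```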

In the next step the parameter T has to be estimated using the LBFGS algorithm. This should only take a couple of seconds on a GPU.
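A hedged sketch of that estimation step, using synthetic logits and labels as stand-ins for a real validation set (the hyperparameters below are assumptions, not the article's exact values):

```python
import torch
import torch.nn as nn

# Synthetic, overconfident "validation" logits and labels (stand-ins for real data)
torch.manual_seed(0)
logits = torch.randn(200, 10) * 5.0
labels = torch.randint(0, 10, (200,))

temperature = nn.Parameter(torch.ones(1) * 1.5)
criterion = nn.CrossEntropyLoss()  # NLL on the (scaled) logits
optimizer = torch.optim.LBFGS([temperature], lr=0.01, max_iter=50)

nll_before = criterion(logits / temperature.detach(), labels).item()

def _eval():
    # LBFGS requires a closure that re-evaluates the loss
    optimizer.zero_grad()
    loss = criterion(logits / temperature, labels)
    loss.backward()
    return loss

optimizer.step(_eval)
nll_after = criterion(logits / temperature.detach(), labels).item()
print(f"T = {temperature.item():.3f}, NLL {nll_before:.3f} -> {nll_after:.3f}")
```

Note that only the temperature is passed to the optimizer, so the network's weights stay fixed, as described above.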

You are welcome to play around in the Google Colab Notebook I created here.

Conclusion

As shown in this article, network calibration can be accomplished in just a few lines of code, with drastic improvements. If there is enough interest I'm happy to discuss other approaches for Model Calibration in another Medium article. If you are interested in a deeper dive into this topic, I highly recommend the paper "On Calibration of Modern Neural Networks" by Guo et al.

Cheers!

The post Neural Network Calibration using PyTorch appeared first on Towards Data Science.
