How and when to leverage a calibrated classification model with scikit-learn
Rather than forecasting class labels directly for a classification problem, it can be convenient to forecast the probability of an observation belonging to each possible class.
Forecasting probabilities provides some flexibility: it allows you to choose how the probabilities are interpreted, to present predictions with uncertainty, and to assess the skill of the model in more nuanced ways.
Forecasted probabilities that match the expected distribution of probabilities for each class are referred to as calibrated. The problem is that not all machine learning models are capable of forecasting calibrated probabilities.
There are strategies both to diagnose how well calibrated the forecasted probabilities are and to better calibrate them against the observed distribution of each class. Often, this leads to better-quality predictions, depending on how the skill of the model is assessed.
In this guide, you will find out why calibrating forecasted probabilities is important and how to diagnose and improve the calibration of models leveraged for probabilistic classification.
After going through this guide, you will know:
- Nonlinear machine learning algorithms typically forecast uncalibrated class probabilities.
- Reliability diagrams can be leveraged to diagnose the calibration of a model, and strategies can be leveraged to better calibrate forecasts for a problem.
- How to generate reliability diagrams and calibrate classification models in Python with scikit-learn.
Tutorial Overview
This tutorial is divided into four parts, which are:
1. Forecasting Probabilities
2. Calibration of Predictions
3. How to Calibrate Probabilities in Python
4. Worked Example of Calibrating SVM Probabilities
Forecasting Probabilities
A classification predictive modelling problem requires forecasting a label for a given observation.
As an alternative to forecasting the label directly, a model might forecast the probability of an observation belonging to each possible class label.
This furnishes some flexibility both in how forecasts are interpreted and presented (choice of threshold and prediction uncertainty) and in how the model is assessed.
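For instance, here is a minimal sketch of choosing a custom decision threshold; the probability values and the 0.3 cut-off are made up for illustration:

```python
# Minimal sketch of threshold selection (the values and the 0.3 cut-off are illustrative).
import numpy as np

probs = np.array([0.10, 0.35, 0.62, 0.48, 0.91])  # hypothetical positive-class probabilities
threshold = 0.3                                   # problem-specific choice, not the default 0.5
yhat = (probs >= threshold).astype(int)           # convert probabilities to class labels
print(yhat)                                       # [0 1 1 1 1]
```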
Even though a model might be able to forecast probabilities, the distribution and behaviour of those probabilities might not match the expected distribution of observed probabilities in the training data.
This is particularly typical with complicated nonlinear machine learning algorithms that do not directly make probabilistic forecasts and instead leverage approximations.
The distribution of the probabilities can be altered to better match the expected distribution observed in the data. This adjustment is referred to as calibration, as in the calibration of the model or the calibration of the distribution of class probabilities.
We desire that the estimated class probabilities reflect the true underlying probability of the sample. That is, the forecasted class probability (or probability-like value) needs to be well-calibrated. To be well-calibrated, the probabilities must effectively reflect the true likelihood of the event of interest.
Calibration of Predictions
There are two concerns in calibrating probabilities: diagnosing the calibration of the forecasted probabilities, and the calibration procedure itself.
Reliability Diagrams (Calibration Curves)
A reliability diagram is a line plot of the relative frequency of what was observed (y-axis) versus the forecasted probability (x-axis).
Reliability diagrams are typical aids for demonstrating the attributes of probabilistic forecast systems. They include a plot of the observed relative frequency against the forecasted probability, furnishing a swift visual intercomparison when tuning probabilistic forecast systems, in addition to documenting the performance of the complete product.
Specifically, the forecasted probabilities are divided into a fixed number of buckets along the x-axis. The number of events (class=1) is then counted for each bin (i.e., the relative observed frequency). Finally, the counts are normalized. The results are then plotted as a line plot.
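As a rough illustration of this binning procedure, the sketch below computes the bin coordinates by hand on made-up labels and probabilities; in practice, the calibration_curve() function covered later does this for you.

```python
# Rough sketch of building reliability-diagram coordinates by hand (made-up data);
# scikit-learn's calibration_curve() performs these steps for you.
import numpy as np

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
y_prob = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95])

edges = np.linspace(0.0, 1.0, 6)               # 5 equal-width buckets
bin_ids = np.digitize(y_prob, edges[1:-1])     # assign each forecast to a bucket
for b in range(5):
    mask = bin_ids == b
    if mask.any():
        mean_pred = y_prob[mask].mean()        # x-axis: mean forecasted probability in the bin
        obs_freq = y_true[mask].mean()         # y-axis: observed relative frequency of class=1
        print('bin %d: predicted=%.2f observed=%.2f' % (b, mean_pred, obs_freq))
```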
These plots are typically referred to as ‘reliability’ diagrams in the forecast literature, although they may also be called ‘calibration’ plots or curves, as they summarize how well the forecast probabilities are calibrated.
The better calibrated or more reliable a forecast, the closer the points will appear along the main diagonal from the bottom left to the top right of the plot.
The position of the points or the curve relative to the diagonal can help to interpret the probabilities; for instance:
- Below the diagonal: The model has over-forecast; the probabilities are too large.
- Above the diagonal: The model has under-forecast; the probabilities are too small.
Probabilities are, by their very nature, continuous, so we expect some degree of deviation from the line, typically displayed as an S-shaped curve showing pessimistic tendencies: over-forecasting low probabilities and under-forecasting high probabilities.
“Reliability diagrams furnish a diagnostic to check whether the forecast value Xi is reliable. Broadly speaking, a probability forecast is reliable if the event actually occurs with an observed relative frequency consistent with the forecast value.”
The reliability diagram can help to compare the relative calibration of the forecasts from differing predictive models.
Probability Calibration
The forecasts made by a predictive model can be calibrated.
Calibrating forecasts might (or might not) result in improved calibration on a reliability diagram.
A few algorithms are fit in such a manner that their forecasted probabilities are already calibrated.
Without delving into the details of why, logistic regression is one such example.
Other algorithms do not directly produce probability forecasts; instead, a probability-like value must be approximated. Some examples include neural networks, support vector machines, and decision trees.
The forecasted probabilities from these strategies will probably be uncalibrated and may benefit from being modified through calibration.
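For instance, here is a minimal sketch on a small synthetic dataset showing that an SVM's raw decision scores are not probabilities at all; they are not even confined to the range [0, 1]:

```python
# Minimal sketch: an SVM's decision scores are not probabilities (synthetic data).
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_classes=2, random_state=1)
model = SVC()
model.fit(X, y)
scores = model.decision_function(X)
print(scores.min(), scores.max())  # values fall outside [0, 1]
```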
Calibration of forecasted probabilities is a rescaling operation that is applied after the forecasts have been made by a predictive model.
There are two widespread strategies for calibrating probabilities: Platt Scaling and Isotonic Regression.
Platt Scaling is simpler and is apt for reliability diagrams with the S-shape. Isotonic Regression is more complicated and needs much more data (otherwise it might overfit), but can assist reliability diagrams with differing shapes (it is nonparametric).
Platt Scaling is most effective when the distortion in the forecasted probabilities is sigmoid-shaped. Isotonic Regression is a more potent calibration strategy that can rectify any monotonic distortion. Unfortunately, this additional power comes at a price. A learning curve analysis shows that Isotonic Regression is more susceptible to overfitting, and therefore performs worse than Platt Scaling, when data is scarce.
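To make the idea behind Platt Scaling a little more concrete, the sketch below fits a logistic regression (a sigmoid) that maps held-out SVM decision scores to probabilities. This is only a rough illustration of the idea, not scikit-learn's internal implementation, which is covered in the next section.

```python
# Rough sketch of the idea behind Platt Scaling: learn a sigmoid mapping from
# held-out decision scores to probabilities (not scikit-learn's internal code).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_classes=2, random_state=1)
trainX, holdX, trainy, holdy = train_test_split(X, y, test_size=0.3, random_state=2)
svm = SVC()
svm.fit(trainX, trainy)
scores = svm.decision_function(holdX).reshape(-1, 1)  # uncalibrated hold-out scores
platt = LogisticRegression()
platt.fit(scores, holdy)                              # the sigmoid mapping
probs = platt.predict_proba(scores)[:, 1]             # roughly calibrated probabilities
```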
Note, and this is really critical: better calibrated probabilities might or might not lead to improved class-based or probability-based forecasts. It really depends on the particular metric leveraged to assess the predictions.
As a matter of fact, some empirical results indicate that the algorithms that can benefit most from calibration of forecasted probabilities include SVMs, bagged decision trees, and random forests.
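One way to check this on your own problem is to score the probabilities directly, for example with the Brier score or log loss. Below is a short snippet in the style of the others in this tutorial; it assumes `testy` holds the true labels and that `probs_uncal` and `probs_cal` hold the forecasted probabilities from an uncalibrated and a calibrated model.

```python
# Compare probability-based metrics before and after calibration
# (assumes testy, probs_uncal and probs_cal exist from your own experiment).
from sklearn.metrics import brier_score_loss, log_loss

print('Brier uncalibrated: %.4f' % brier_score_loss(testy, probs_uncal))
print('Brier calibrated:   %.4f' % brier_score_loss(testy, probs_cal))
print('Log loss uncalibrated: %.4f' % log_loss(testy, probs_uncal))
print('Log loss calibrated:   %.4f' % log_loss(testy, probs_cal))
```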
How to Calibrate Probabilities in Python
The scikit-learn machine learning library enables you both to diagnose the probability calibration of a classifier and to calibrate a classifier that can forecast probabilities.
Diagnose Calibration
You can diagnose the calibration of a classifier by developing a reliability diagram of the observed relative frequencies versus the forecasted probabilities on a test set.
Within scikit-learn, this is referred to as a calibration curve.
This can be implemented by first calling the calibration_curve() function. This function takes the true class values for a dataset and the forecasted probabilities for the positive class (class=1). The function returns the observed frequency for each bin and the mean forecasted probability for each bin. The number of bins can be specified through the n_bins argument and defaults to 5.
For example, below is a code snippet displaying the API usage.
```python
...
# predict probabilities
probs = model.predict_proba(testX)[:, 1]
# reliability diagram
fop, mpv = calibration_curve(testy, probs, n_bins=10)
# plot perfectly calibrated
pyplot.plot([0, 1], [0, 1], linestyle='--')
# plot model reliability
pyplot.plot(mpv, fop, marker='.')
pyplot.show()
```
Calibrate Classifier
A classifier can be calibrated in scikit-learn leveraging the CalibratedClassifierCV class.
There are two ways to leverage this class: with a prefit model, or with cross-validation.
You can fit a model on a training dataset and calibrate this prefit model leveraging a hold-out validation dataset.
For example, below is a code snippet displaying the API usage.
```python
...
# prepare data
trainX, trainy = ...
valX, valy = ...
testX, testy = ...
# fit base model on training dataset
model = ...
model.fit(trainX, trainy)
# calibrate model on validation data
calibrator = CalibratedClassifierCV(model, cv='prefit')
calibrator.fit(valX, valy)
# evaluate the model
yhat = calibrator.predict(testX)
```
Alternatively, the CalibratedClassifierCV can fit several copies of the model leveraging k-fold cross-validation and calibrate the probabilities forecasted by these models leveraging the hold-out set. Forecasts are made leveraging each of the trained models.
For example, below is a snippet of code displaying the API usage.
```python
...
# prepare data
trainX, trainy = ...
testX, testy = ...
# define base model
model = ...
# fit and calibrate model on training data
calibrator = CalibratedClassifierCV(model, cv=3)
calibrator.fit(trainX, trainy)
# evaluate the model
yhat = calibrator.predict(testX)
```
The CalibratedClassifierCV class supports two types of probability calibration: the parametric ‘sigmoid’ method (Platt’s method) and the nonparametric ‘isotonic’ method, which can be specified through the ‘method’ argument.
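For example, below is a short snippet showing how each calibration method is selected; only the ‘method’ argument changes, and it assumes a base `model` is already defined.

```python
...
# Platt scaling (parametric sigmoid)
calibrator_sigmoid = CalibratedClassifierCV(model, method='sigmoid', cv=5)
# isotonic regression (nonparametric, needs more data)
calibrator_isotonic = CalibratedClassifierCV(model, method='isotonic', cv=5)
```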
Worked Example of Calibrating SVM Probabilities
We can make the discussion surrounding calibration concrete with a few worked examples.
In these examples, we will fit a support vector machine (SVM) to a noisy binary classification problem and leverage the model to forecast probabilities, then review the calibration leveraging a reliability diagram, and finally calibrate the classifier and review the outcome.
SVM is a good candidate model to calibrate because it does not natively forecast probabilities, meaning the probabilities are usually uncalibrated.
A note on SVM: probability-like scores can be forecasted by calling the decision_function() function on the fit model rather than the usual predict_proba() function. These scores are not normalized, but can be normalized when calling the calibration_curve() function by setting the ‘normalize’ argument to True.
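Note that recent versions of scikit-learn may no longer support the ‘normalize’ argument of calibration_curve(); if yours does not, a simple fallback is to min-max scale the decision scores to [0, 1] yourself, as in the sketch below (it assumes `model`, `testX` and `testy` from the example that follows).

```python
# Fallback if calibration_curve() has no 'normalize' argument: min-max scale the
# decision scores to [0, 1] by hand (assumes model, testX and testy as below).
...
scores = model.decision_function(testX)
probs = (scores - scores.min()) / (scores.max() - scores.min())
fop, mpv = calibration_curve(testy, probs, n_bins=10)
```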
The example below fits an SVM model on the test problem, forecasts probabilities, and plots the calibration of the probabilities as a reliability diagram.
```python
# SVM reliability diagram
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.calibration import calibration_curve
from matplotlib import pyplot
# generate 2 class dataset
X, y = make_classification(n_samples=1000, n_classes=2, weights=[1,1], random_state=1)
# split into train/test sets
trainX, testX, trainy, testy = train_test_split(X, y, test_size=0.5, random_state=2)
# fit a model
model = SVC()
model.fit(trainX, trainy)
# predict probabilities
probs = model.decision_function(testX)
# reliability diagram
fop, mpv = calibration_curve(testy, probs, n_bins=10, normalize=True)
# plot perfectly calibrated
pyplot.plot([0, 1], [0, 1], linestyle='--')
# plot model reliability
pyplot.plot(mpv, fop, marker='.')
pyplot.show()
```
Running the example develops a reliability diagram displaying the calibration of the SVM’s forecasted probabilities (solid line) contrasted with a perfectly calibrated model along the diagonal of the plot (dashed line).
We can observe the expected S-shaped curve of a conservative forecast.
We can then update the example to fit the SVM through the CalibratedClassifierCV class leveraging 5-fold cross-validation, leveraging the hold-out sets to calibrate the forecasted probabilities.
The complete example is detailed below:
```python
# SVM reliability diagram with calibration
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.calibration import calibration_curve
from matplotlib import pyplot
# generate 2 class dataset
X, y = make_classification(n_samples=1000, n_classes=2, weights=[1,1], random_state=1)
# split into train/test sets
trainX, testX, trainy, testy = train_test_split(X, y, test_size=0.5, random_state=2)
# fit a model
model = SVC()
calibrated = CalibratedClassifierCV(model, method='sigmoid', cv=5)
calibrated.fit(trainX, trainy)
# predict probabilities
probs = calibrated.predict_proba(testX)[:, 1]
# reliability diagram
fop, mpv = calibration_curve(testy, probs, n_bins=10, normalize=True)
# plot perfectly calibrated
pyplot.plot([0, 1], [0, 1], linestyle='--')
# plot calibrated reliability
pyplot.plot(mpv, fop, marker='.')
pyplot.show()
```
Running the example develops a reliability diagram for the calibrated probabilities.
The shape of the calibrated probabilities is different, hugging the diagonal line much more closely, although still under-forecasting in the upper quadrant.
Visually, the plot indicates a better calibrated model.
We can make the contrast between the two models more obvious by including both reliability diagrams on the same plot.
The complete example is detailed below.
```python
# SVM reliability diagrams with uncalibrated and calibrated probabilities
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.calibration import calibration_curve
from matplotlib import pyplot

# predict uncalibrated probabilities
def uncalibrated(trainX, testX, trainy):
    # fit a model
    model = SVC()
    model.fit(trainX, trainy)
    # predict probabilities
    return model.decision_function(testX)

# predict calibrated probabilities
def calibrated(trainX, testX, trainy):
    # define model
    model = SVC()
    # define and fit calibration model
    calibrated = CalibratedClassifierCV(model, method='sigmoid', cv=5)
    calibrated.fit(trainX, trainy)
    # predict probabilities
    return calibrated.predict_proba(testX)[:, 1]

# generate 2 class dataset
X, y = make_classification(n_samples=1000, n_classes=2, weights=[1,1], random_state=1)
# split into train/test sets
trainX, testX, trainy, testy = train_test_split(X, y, test_size=0.5, random_state=2)
# uncalibrated predictions
yhat_uncalibrated = uncalibrated(trainX, testX, trainy)
# calibrated predictions
yhat_calibrated = calibrated(trainX, testX, trainy)
# reliability diagrams
fop_uncalibrated, mpv_uncalibrated = calibration_curve(testy, yhat_uncalibrated, n_bins=10, normalize=True)
fop_calibrated, mpv_calibrated = calibration_curve(testy, yhat_calibrated, n_bins=10)
# plot perfectly calibrated
pyplot.plot([0, 1], [0, 1], linestyle='--', color='black')
# plot model reliabilities
pyplot.plot(mpv_uncalibrated, fop_uncalibrated, marker='.')
pyplot.plot(mpv_calibrated, fop_calibrated, marker='.')
pyplot.show()
```
Running the example develops a single reliability diagram displaying both the calibrated (orange) and uncalibrated (blue) probabilities.
It is not really an apples-to-apples comparison, as the forecasts made by the calibrated model are in fact a combination of five submodels.
Nonetheless, we do observe a marked difference in the reliability of the calibrated probabilities, very probably caused by the calibration procedure.
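If you are curious, you can confirm that the cross-validated calibrator really is an ensemble by inspecting the fitted object; the snippet below assumes you keep a reference to a CalibratedClassifierCV fit with cv=5 (here called `calibrated`).

```python
...
# one calibrated copy of the base model is kept per cross-validation fold
print(len(calibrated.calibrated_classifiers_))  # 5
```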
Further Reading
This section furnishes additional resources on the subject if you are seeking to delve deeper.
Books and Papers
- Applied Predictive Modelling, 2013.
- Predicting Good Probabilities with Supervised Learning, 2005.
- Obtaining calibrated probability estimates from decision trees and naïve Bayesian classifiers, 2001.
- Increasing the reliability of reliability diagrams, 2007.
API
- calibration.CalibratedClassifierCV API
- calibration.calibration_curve API
- Probability calibration, scikit-learn user guide
- Probability calibration curves, scikit-learn
- Comparison of calibration of classifiers, scikit-learn
Articles
- CAWCR Verification Website
- Calibration (statistics) on Wikipedia
- Probabilistic classification on Wikipedia
- Scikit correct way to calibrate classifiers with CalibratedClassifierCV on CrossValidated
Conclusion
In this guide, you found out why calibrating forecasted probabilities is important and how to diagnose and improve the calibration of models leveraged for probabilistic classification.
Specifically, you learned:
- Nonlinear machine learning algorithms typically forecast uncalibrated class probabilities.
- Reliability diagrams can be leveraged to diagnose the calibration of a model, and strategies can be leveraged to better calibrate forecasts for a problem.
- How to generate reliability diagrams and calibrate classification models in Python with scikit-learn.