The article “Conformal Prediction – A Practical Guide with MAPIE” first appeared on AlgoTrading101 Blog.
Excerpt
What is Conformal Prediction?
Conformal Prediction is a set of algorithms that assess the uncertainty of predictions produced by a machine learning model.
What is Conformal Prediction used for?
Conformal Prediction is often used for calibrating one’s machine learning models, estimating their uncertainty, comparing models, increasing the practical performance of the model, and more.
Why should I use Conformal Prediction?
- Conformal Prediction is not too computationally expensive
- It is easy to implement
- On average, it performs better than other calibration methods
- Can be applied to any machine learning model or neural network
- Can be used in a wide variety of tasks such as regression, classification, outlier detection, and more
- Doesn’t depend on the distribution of the data
- Tackles uncertainty in predictions
Why shouldn’t I use Conformal Prediction?
- Conformal Prediction is a fairly new approach
- There aren’t many out-of-the-box libraries that support it
- You “ditch” the standard way of viewing and interpreting your results (e.g. prediction sets instead of point predictions)
- At the time of writing this article, it isn’t widely adopted and there are hardly any books on it
- It is still “stuck” more in the academic space than in the practical space
How can Conformal Prediction be used in Finance?
Conformal Prediction can be used in Finance in many ways. It seems to be most promising when it comes to forecasting, addressing repayment uncertainty, modeling algorithms in volatile market settings, and more.
How can Conformal Prediction be used in Algorithmic Trading?
Conformal Prediction can help address the uncertainty surrounding the trading hypotheses on which your algorithms might be based. Some might even use machine learning outputs as signals for potential trades; conformal prediction can help there, too.
What are some Conformal Prediction alternatives?
Conformal Prediction can be replaced with some other methods depending on what you’re doing. Here are three of the top alternatives:
- Platt scaling
- Isotonic regression
- Spline calibration
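For comparison, here is a brief, hedged sketch of how the first two alternatives are commonly applied through scikit-learn’s CalibratedClassifierCV. The synthetic dataset and the LinearSVC base model are illustrative choices, not something from the original article.
# Hedged sketch: Platt scaling ("sigmoid") and isotonic regression via scikit-learn
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Platt scaling fits a sigmoid to the base model's decision scores
platt = CalibratedClassifierCV(LinearSVC(max_iter=10000), method="sigmoid", cv=5).fit(X_train, y_train)
# Isotonic regression fits a monotone, piecewise-constant mapping instead
iso = CalibratedClassifierCV(LinearSVC(max_iter=10000), method="isotonic", cv=5).fit(X_train, y_train)

print(platt.predict_proba(X_test)[:3])
print(iso.predict_proba(X_test)[:3])
Unlike conformal prediction, these methods recalibrate the predicted probabilities of a point prediction rather than producing prediction sets or intervals with a coverage guarantee.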
Understanding Conformal Prediction
When it comes to understanding Conformal Prediction, everyone has their preferences: some like going into the math behind it and the research papers, some like looking at code adaptations, and some like reading guides and tutorials like this one.
I’ll approach this article by explaining it in very crude layman’s terms so that the intuition behind Conformal Prediction can be easily grasped and readily applied to your existing and future projects.
Also, I’ll point you toward resources that feature the other approaches mentioned above in the “Where can I learn more about Conformal Prediction” section.
Conformal Prediction usually “works” in the following way:
- Take a trained ML model
- Create an additional set from the data called the calibration set (unseen by the model)
- Pick an error metric (calibration score) and apply a cutoff point (e.g. 95% quantile cutoff point)
- Use the cutoff point to inform the width of your prediction interval
- Use this cutoff point to form the prediction sets for new examples
- Now, most of your predictions should fall inside the prediction interval
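To make the steps above concrete, here is a minimal, hand-rolled sketch of split conformal prediction for a regression model. This is not MAPIE’s implementation; the synthetic dataset, the absolute-residual score, and names like q_hat are illustrative assumptions.
# Minimal split-conformal sketch for regression (illustrative, not MAPIE's code)
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=1000)

# Steps 1-2: train the model, holding out a calibration set it never sees
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# Step 3: calibration scores (absolute residuals) with a 95% quantile cutoff
# (the small-sample correction on the quantile level is omitted for brevity)
alpha = 0.05
cal_scores = np.abs(y_cal - model.predict(X_cal))
q_hat = np.quantile(cal_scores, 1 - alpha)

# Steps 4-5: the cutoff sets the half-width of the prediction interval for new points
X_new = rng.normal(size=(5, 3))
y_pred = model.predict(X_new)
lower, upper = y_pred - q_hat, y_pred + q_hat

# Step 6: roughly 95% of new true values should now fall inside [lower, upper]
print(np.c_[lower, y_pred, upper])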
That’s it! You can now adapt this for different approaches and models. For example, if you had an image classifier trained to classify dolphins, the prediction set wouldn’t hold the labels of the more uncertain classes, since those are kept out by the chosen quantile level.
Also, the more uncertain the model is (e.g. the harder the inputs are), the larger the prediction set will be, and vice versa, which is exactly what we’re looking for. This can be further informed by the average set size, set spread, and coverage, which we’ll explore later in this article.
Finally, the error metric (also known as the calibration score) is a very important part and engineering choice that informs everything else when interpreting the results of conformal prediction.
Let’s go into coding so that we can grasp this further.
What is MAPIE?
MAPIE (Model Agnostic Prediction Interval Estimator) is a Python library that allows you to estimate prediction intervals using any scikit-learn-compatible model for single-output regression or multi-class classification settings. All prediction sets are based on conformal prediction.
How to get started with Conformal Prediction?
To get started with Conformal Prediction, all we need is Python and an IDE such as VS Code, Google Colab, or the like. I’ll go with Google Colab this time and install MAPIE.
!pip install mapie
How to apply Conformal Prediction with MAPIE in Python?
To apply Conformal Prediction with MAPIE in Python, we’ll need to use either the MapieClassifier or the MapieRegressor algorithm to obtain the scores that we’ll use for implementing conformal prediction.
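Since the rest of this excerpt focuses on classification, here is a minimal, hedged sketch of the regression side with MapieRegressor. The synthetic data and the alpha value are illustrative, and MAPIE’s API may differ slightly between versions.
# Hedged sketch of MapieRegressor on synthetic data (settings are illustrative)
import numpy as np
from sklearn.linear_model import LinearRegression
from mapie.regression import MapieRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))
y = 3 * X[:, 0] + rng.normal(scale=0.5, size=500)

mapie_reg = MapieRegressor(estimator=LinearRegression())
mapie_reg.fit(X, y)

# y_pis has shape (n_samples, 2, n_alpha): lower and upper bounds of the intervals
y_pred, y_pis = mapie_reg.predict(X, alpha=0.05)
print(y_pred[:3])
print(y_pis[:3, :, 0])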
Let’s start with a classification example:
How to apply Conformal Prediction for Classification with MAPIE?
To apply Conformal Prediction for classification with MAPIE, we will estimate a prediction set of multiple classes such that the probability that it contains the true label of a new test point is at least the target confidence level.
We’ll use the classifier’s softmax score output as the conformity score on a toy three-class dataset. The classifier will be a Gaussian Naive Bayes classifier from the scikit-learn library.
To execute this properly, we’ll need to follow these steps:
- Craft a toy dataset and form train, calibration, and test sets.
- Fit the model on the train set.
- Set the conformity score Si to be the softmax output of the true class for each sample i in the calibration set.
- Define q̂ as the ⌊(n+1)α⌋/n quantile of the calibration scores (S1, …, Sn), where 1 − α is the target confidence level (this is essentially the quantile α with a small-sample correction).
- Form a prediction set for each new test data point (where Xn+1 is known but Yn+1 isn’t) so that it includes all the labels with a sufficiently high softmax output.
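Purely to illustrate steps 3 to 5 before we write the MAPIE version, here is a hedged, self-contained numpy sketch that uses made-up softmax outputs; everything in it is an illustrative assumption, not the article’s code.
# Illustrative only: steps 3-5 by hand on made-up softmax outputs
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05  # target confidence level of 95%

# Pretend calibration softmax outputs (200 samples x 3 classes) and true labels
probs_cal = rng.dirichlet(np.ones(3), size=200)
y_cal_fake = rng.integers(0, 3, size=200)

# Step 3: conformity score = softmax output of the true class for each sample
scores = probs_cal[np.arange(len(y_cal_fake)), y_cal_fake]

# Step 4: q_hat = the floor((n + 1) * alpha) / n quantile of the calibration scores
n = len(scores)
q_hat = np.quantile(scores, np.floor((n + 1) * alpha) / n)

# Step 5: the prediction set keeps every label whose softmax output is at least q_hat
probs_new = rng.dirichlet(np.ones(3), size=1)
prediction_set = np.where(probs_new[0] >= q_hat)[0]
print(q_hat, prediction_set)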
Prior to doing the first two steps, we will import the libraries we need:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn import naive_bayes
from mapie.classification import MapieClassifier
from mapie.metrics import classification_coverage_score
Now, the first two steps:
# Create a toy dataset with 2 features and 3 classes (0, 1, 2) with a bit of noise
n_samples = 1500
n_features = 2
n_classes = 3
X = np.random.randn(n_samples, n_features)
y = np.zeros(n_samples)
for i in range(n_classes):
    X[y == i] += np.random.randn(1, n_features) * 1.2
y = np.where(X[:, 0] > 0, 0, 1)
y = np.where(X[:, 1] > 0, y, y + 1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
X_train, X_cal, y_train, y_cal = train_test_split(X_train, y_train, test_size=0.1, random_state=42)
# Train a Gaussian Naive Bayes classifier
clf = naive_bayes.GaussianNB()
clf.fit(X_train, y_train)
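The excerpt’s code stops here, but as a hedged sketch of how the remaining steps could look with MAPIE (the "score" method name, cv="prefit", and alpha=0.05 are illustrative assumptions, and the exact API varies across MAPIE versions), the calibration and prediction-set steps would be roughly:
# Hedged sketch of the remaining steps (not the article's exact code):
# calibrate the prefit classifier with MAPIE, then build prediction sets
mapie_clf = MapieClassifier(estimator=clf, method="score", cv="prefit")
mapie_clf.fit(X_cal, y_cal)

# Prediction sets at a 95% target coverage level (alpha = 0.05)
alpha = 0.05
y_pred, y_pred_sets = mapie_clf.predict(X_test, alpha=alpha)

# Empirical coverage: the fraction of test points whose true label is in its set
coverage = classification_coverage_score(y_test, y_pred_sets[:, :, 0])
print(f"Effective coverage: {coverage:.3f}")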
Visit AlgoTrading101 for additional insight on this topic and to download the scripts: https://algotrading101.com/learn/conformal-prediction-guide/.
Disclosure: Interactive Brokers
Information posted on IBKR Campus that is provided by third-parties does NOT constitute a recommendation that you should contract for the services of that third party. Third-party participants who contribute to IBKR Campus are independent of Interactive Brokers and Interactive Brokers does not make any representations or warranties concerning the services offered, their past or future performance, or the accuracy of the information provided by the third party. Past performance is no guarantee of future results.
This material is from AlgoTrading101 and is being posted with its permission. The views expressed in this material are solely those of the author and/or AlgoTrading101 and Interactive Brokers is not endorsing or recommending any investment or trading discussed in the material. This material is not and should not be construed as an offer to buy or sell any security. It should not be construed as research or investment advice or a recommendation to buy, sell or hold any security or commodity. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.