Many machine learning systems involve making predictions. These predictions are made by looking at other variables observed in the same device. In some critical systems, it is very important that the predictions are correct. However, traditional machine learning algorithms do not offer a good way to verify them in a trustworthy manner.

One way in which common validation procedures often fail is that they tend to be biased: the error frequency observed on production data does not necessarily correspond to the error frequency measured on test data. Another shortcoming of traditional validation is that it verifies on a macroscopic, per-model level. Wouldn’t it make more sense to do this on a microscopic, per-prediction level? Otherwise, how could we verify the individual predictions?

The conformal prediction framework offers an alternative method for constructing and evaluating predictive models. Conformal prediction is better suited than traditional predictive methods for applications where thorough verification is needed.

Conformal Prediction

Traditional predictive models output so-called point predictions. A point prediction is a single-valued best-guess prediction for the value we want to predict. The temperature tomorrow will probably be 23°C.

Conformal predictors, on the other hand, output multi-valued prediction regions that represent a range of likely values for the prediction, and this range comes with a very specific, statistically valid expectation: the probability that the prediction region contains the ground-truth value of the dependent variable is fixed and known. The temperature tomorrow will probably be 23°C, but I’m 99.9% sure it will not be colder than 18.2°C, nor hotter than 25.6°C.
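To make this concrete, here is a minimal sketch of one common variant, split (inductive) conformal regression, wrapped around an ordinary point-prediction model. It assumes scikit-learn is available; the dataset, base model and split sizes are purely illustrative.

```python
# Minimal sketch of split (inductive) conformal regression on top of an
# ordinary point-prediction model. Dataset, model and split sizes are illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

alpha = 0.1  # user-chosen error rate -> 90% prediction regions

X, y = make_regression(n_samples=2500, n_features=5, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=1000, random_state=0)
X_cal, X_new, y_cal, y_new = train_test_split(X_rest, y_rest, test_size=500, random_state=0)

# 1. Train any point-prediction model on the proper training set.
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# 2. Nonconformity scores: absolute residuals on a held-out calibration set.
scores = np.sort(np.abs(y_cal - model.predict(X_cal)))

# 3. Take the adjusted (1 - alpha) calibration quantile of the scores.
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = scores[min(k, len(scores)) - 1]

# 4. The prediction region for a new example is the point prediction +/- q.
y_hat = model.predict(X_new[:1])[0]
print(f"point prediction: {y_hat:.1f}")
print(f"90% prediction region: [{y_hat - q:.1f}, {y_hat + q:.1f}]")
```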

Under these conditions, model and prediction verification become straightforward: each prediction region is guaranteed to contain the correct value of the dependent variable with a user-specified probability.
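As a sketch of what that verification can look like in practice, the snippet below builds the same kind of split conformal regressor and simply counts how often the prediction regions on fresh test data contain the true value; the fraction should land close to the user-specified probability. Again, the dataset and base model are illustrative choices.

```python
# Per-prediction verification: the fraction of prediction regions on fresh data
# that contain the true value should match the chosen probability 1 - alpha.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

alpha = 0.1  # target 90% coverage

X, y = make_regression(n_samples=4000, n_features=5, noise=15.0, random_state=1)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=1500, random_state=1)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=1000, random_state=1)

model = LinearRegression().fit(X_train, y_train)
scores = np.sort(np.abs(y_cal - model.predict(X_cal)))
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = scores[min(k, len(scores)) - 1]

# Each test prediction region either contains the true value or it does not.
y_hat = model.predict(X_test)
hits = (y_test >= y_hat - q) & (y_test <= y_hat + q)
print(f"target coverage: {1 - alpha:.2f}, empirical coverage: {hits.mean():.3f}")
```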

Video Tutorial – Conformal Prediction with Henrik Linusson, Data Scientist, Ekkono Solutions

Statistical guarantees

Unlike, e.g., Bayesian methods, conformal predictors are able to produce predictions of this form without any prior knowledge of the data distribution or its parameters. The only requirement is that the observed data sequence is exchangeable (which is a looser requirement than the typical i.i.d. assumption). In addition, the conformal prediction framework is model-agnostic: it can be applied on top of an arbitrary classification or regression machine learning algorithm, transforming the point predictions of the underlying algorithm into prediction regions that exhibit the expected statistical guarantees.
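The sketch below illustrates the model-agnostic side for classification: a split conformal wrapper around an arbitrary probabilistic classifier, turning its class probabilities into prediction sets with the requested coverage. The base model can be swapped for any estimator exposing predict_proba; all concrete choices here are illustrative.

```python
# Split conformal classification around an arbitrary probabilistic classifier.
# The base model is interchangeable; dataset and model choices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

alpha = 0.05  # target error rate -> 95% coverage

X, y = make_classification(n_samples=3000, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=1000, random_state=0)
X_cal, X_new, y_cal, y_new = train_test_split(X_rest, y_rest, test_size=500, random_state=0)

# Any classifier with predict_proba could be plugged in here instead.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Nonconformity score: 1 minus the probability assigned to the true class.
cal_proba = model.predict_proba(X_cal)
scores = np.sort(1.0 - cal_proba[np.arange(len(y_cal)), y_cal])
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = scores[min(k, len(scores)) - 1]

# The prediction region for a new example is the set of labels that conform.
proba_new = model.predict_proba(X_new[:1])[0]
region = [int(c) for i, c in enumerate(model.classes_) if 1.0 - proba_new[i] <= q]
print(f"95% prediction region (class labels): {region}")
```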
