Methods for conformal/probabilistic prediction (Conformal Predictors, Venn Predictors) are wrappers around machine learning algorithms that provide guarantees about their predictions; generally, their sole assumption is that the data are exchangeable (a weaker assumption than the standard i.i.d. one). They were first proposed in the book “Algorithmic Learning in a Random World” (Vovk, Gammerman, Shafer; 2005).

Conformal Predictors (CP) allow limiting the errors committed by a learning algorithm (the “underlying algorithm”) to a desired significance level $\varepsilon$: for a new object they output a *set* of candidate labels rather than a single one, and this set is guaranteed to contain the true label with probability at least $1-\varepsilon$.
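As a rough illustration, here is a minimal sketch of split (inductive) conformal classification. The nonconformity score used below (distance to the class centroid) is just one illustrative choice standing in for the underlying algorithm; the function name and the data split are assumptions of this sketch, not part of any particular library.

```python
import numpy as np

def split_conformal_predict(X_train, y_train, X_cal, y_cal, x_new, eps):
    """Return the prediction set for x_new at significance level eps.

    Nonconformity score: distance of an example to its class centroid
    (an illustrative stand-in for any underlying algorithm's score).
    """
    labels = np.unique(y_train)
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in labels}

    def score(x, y):
        return np.linalg.norm(x - centroids[y])

    # Nonconformity scores of the calibration examples w.r.t. their true labels.
    cal_scores = np.array([score(x, y) for x, y in zip(X_cal, y_cal)])
    n = len(cal_scores)

    pred_set = []
    for c in labels:
        s = score(x_new, c)
        # p-value: fraction of calibration scores at least as nonconforming,
        # with the usual +1 correction counting (x_new, c) itself.
        p = (np.sum(cal_scores >= s) + 1) / (n + 1)
        if p > eps:
            pred_set.append(c)
    return pred_set
```

Under exchangeability, the true label is excluded from the returned set with probability at most $\varepsilon$.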

Venn Predictors (VP) output a set of probability distributions over the labels as a prediction for a new object $x$; one of these distributions is guaranteed to be perfectly calibrated.
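A small sketch of the idea: for each hypothetical label of $x$, the augmented data set is partitioned into categories by a *taxonomy*, and the empirical label frequencies in $x$'s category give one distribution. The nearest-centroid taxonomy below is an illustrative assumption, not prescribed by the method.

```python
import numpy as np

def venn_predict(X, y, x_new):
    """Venn predictor sketch with a nearest-centroid taxonomy.

    Returns one probability distribution over the labels per hypothetical
    label of x_new; the theory guarantees one of them is well calibrated.
    """
    labels = np.unique(y)
    distributions = []
    for y_hyp in labels:
        # Augment the data with (x_new, y_hyp).
        X_aug = np.vstack([X, x_new])
        y_aug = np.append(y, y_hyp)
        # Taxonomy: each example's category is its nearest class centroid,
        # computed on the augmented data (an illustrative choice).
        centroids = np.array([X_aug[y_aug == c].mean(axis=0) for c in labels])
        cats = np.argmin(
            np.linalg.norm(X_aug[:, None, :] - centroids[None, :, :], axis=2),
            axis=1)
        # Empirical label frequencies among examples sharing x_new's category.
        in_cat = y_aug[cats == cats[-1]]
        distributions.append({c: float(np.mean(in_cat == c)) for c in labels})
    return distributions
```

The prediction is the whole list of distributions (a “multiprobability” prediction); their spread reflects how much the hypothetical label of $x$ can sway the estimate.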

## Projects

A list of works on CP.

1. Majority vote ensembles of conformal predictors. *Machine Learning*, 2018. [PDF]
2. Exchangeability martingales for selecting features in anomaly detection. In *Proceedings of the Seventh Workshop on Conformal and Probabilistic Prediction and Applications*, 2018. [PDF] [Code]
3. Hidden Markov Models with Confidence. In *Conformal and Probabilistic Prediction with Applications: 5th International Symposium, COPA 2016, Madrid, Spain, April 20–22, 2016, Proceedings*. [Slides]
4. Conformal Clustering and Its Application to Botnet Traffic. In *Statistical Learning and Data Sciences: Third International Symposium, SLDS 2015, Egham, UK, April 20–23, 2015, Proceedings*. [Slides]