A transductive version of CP was first proposed in 1998 by Gammerman, Vovk, and Vapnik,[1] and since then several variants of conformal prediction have been developed, differing in computational complexity, formal guarantees, and practical applications.
Depending on how good the underlying model is (how well it can distinguish between cats, dogs, and other animals) and the specified significance level, these prediction sets can be smaller or larger.
Conformal classifiers instead compute and output a p-value for each available class, obtained by ranking the nonconformity measure (α-value) of the test object against the nonconformity scores of the examples in the training data set.
As in standard hypothesis testing, the p-value is compared against a threshold (referred to in the CP field as the significance level) to decide whether the label should be included in the prediction set.
For example, for a significance level of 0.1, all classes with a p-value of 0.1 or greater are added to the prediction set.
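The following is a minimal sketch of this ranking step in Python, assuming the nonconformity scores (α-values) of the training examples are already available; the function names, the toy scores, and the class labels are illustrative placeholders rather than part of any published implementation.

```python
import numpy as np

def conformal_p_value(train_alphas, alpha_test):
    """p-value of a candidate label: fraction of nonconformity scores that are
    at least as large as the test object's score, counting the test object
    itself in the denominator."""
    return (np.sum(train_alphas >= alpha_test) + 1) / (len(train_alphas) + 1)

def prediction_set(train_alphas, test_alphas_per_class, significance=0.1):
    """Keep every class whose p-value is at or above the significance level."""
    return {
        label
        for label, alpha in test_alphas_per_class.items()
        if conformal_p_value(train_alphas, alpha) >= significance
    }

# Toy usage with made-up nonconformity scores:
train_alphas = np.array([0.12, 0.40, 0.05, 0.33, 0.27,
                         0.18, 0.02, 0.22, 0.09, 0.36])
test_alphas = {"cat": 0.10, "dog": 0.55, "other": 0.30}
print(prediction_set(train_alphas, test_alphas, significance=0.1))
# {'cat', 'other'}  -- "dog" is excluded because its p-value (1/11) falls below 0.1
```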
[10] In MICP, the α-values are class-dependent (Mondrian), and the underlying model does not follow the original online setting introduced in 2005.
If the split is performed randomly and the data are exchangeable, the ICP model is proven to be automatically valid (i.e., the error rate corresponds to the required significance level).
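As a rough illustration of this split approach, the sketch below holds out a calibration set and wraps a scikit-learn support-vector machine; the nonconformity measure used here (one minus the predicted probability of the candidate class) is one common choice under these assumptions, not the only option, and the dataset and parameters are arbitrary.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Split the available data into a proper training set and a calibration set
# (the "inductive"/split step).
X_train, X_calib, y_train, y_calib = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = SVC(probability=True).fit(X_train, y_train)

# Nonconformity score of each calibration example: 1 minus the predicted
# probability of its true class (labels 0..2 match the predict_proba columns).
calib_proba = model.predict_proba(X_calib)
calib_alphas = 1.0 - calib_proba[np.arange(len(y_calib)), y_calib]

def predict_set(x, significance=0.1):
    """Return the set of class labels whose conformal p-value meets the level."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    labels = set()
    for k, p_k in enumerate(proba):
        alpha = 1.0 - p_k  # nonconformity of the test object with label k
        p_value = (np.sum(calib_alphas >= alpha) + 1) / (len(calib_alphas) + 1)
        if p_value >= significance:
            labels.add(k)
    return labels

print(predict_set(X[0]))
```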
Studies have shown that it can be applied to, for example, convolutional neural networks,[11] support-vector machines, and other models.
For example, in biotechnology it has been used to predict uncertainties in breast cancer[12] and stroke risk,[13] and it has also been applied to data storage[14] and disk drive scrubbing.
[17] It has been hosted in several European countries, including Greece, Great Britain, Italy, and Sweden.