Calibration (statistics)

Calibration in classification means transforming classifier scores into class membership probabilities.[2]
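One classical way to perform such a transformation is Platt scaling, which fits a logistic sigmoid to the raw scores. The following pure-NumPy sketch is illustrative only; the function name `platt_scale`, the learning rate, and the step count are assumptions, not a reference implementation.

```python
import numpy as np

def platt_scale(scores, labels, lr=0.1, steps=2000):
    # Fit p = sigmoid(a * score + b) by gradient descent on the log loss.
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                    # d(log loss)/dz for the sigmoid
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

# Toy data: higher raw scores loosely indicate the positive class.
scores = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
labels = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
a, b = platt_scale(scores, labels)
probs = 1.0 / (1.0 + np.exp(-(a * scores + b)))  # calibrated probabilities
```

The fitted sigmoid maps scores monotonically into [0, 1], so the ranking induced by the classifier is preserved while the outputs become interpretable as probabilities.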

A variety of metrics exist that aim to measure the extent to which a classifier produces well-calibrated probabilities.
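One widely used such metric is the Expected Calibration Error (ECE): predictions are grouped into confidence bins, and the gap between each bin's mean confidence and its empirical accuracy is averaged, weighted by bin size. A minimal sketch with equal-width bins follows; the function name and toy data are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    # Weighted average over equal-width bins of |accuracy - mean confidence|.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs > lo) & (probs <= hi)
        if lo == 0.0:
            mask |= probs == 0.0            # include the left edge in bin 0
        if mask.any():
            gap = abs(labels[mask].mean() - probs[mask].mean())
            ece += mask.mean() * gap        # weight by fraction of samples
    return ece

# Toy predictions: confident and mostly, but not always, correct.
probs = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 0, 0, 0, 1])
ece = expected_calibration_error(probs, labels, n_bins=5)
```

A perfectly calibrated predictor yields an ECE of zero, and larger values indicate a larger average gap between stated confidence and observed frequency.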

Into the 2020s, variants include the Adaptive Calibration Error (ACE) and the Test-based Calibration Error (TCE), which address limitations of the Expected Calibration Error (ECE) metric that may arise when classifier scores concentrate on a narrow subset of the [0,1] range.[4]
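An equal-mass binning scheme in the spirit of ACE can be sketched as follows. Unlike equal-width bins, every bin receives the same number of samples, so the estimate does not degenerate when scores cluster in a narrow range; the function name and default bin count are assumptions for illustration.

```python
import numpy as np

def adaptive_calibration_error(probs, labels, n_bins=5):
    # Equal-mass bins: sort by confidence, split into equal-size chunks,
    # then average each chunk's |accuracy - mean confidence| gap.
    order = np.argsort(probs)
    gaps = [abs(labels[chunk].mean() - probs[chunk].mean())
            for chunk in np.array_split(order, n_bins)]
    return float(np.mean(gaps))

# Toy example: two equal-mass bins, each slightly miscalibrated.
probs = np.array([0.2, 0.2, 0.8, 0.8])
labels = np.array([0, 0, 1, 1])
ace = adaptive_calibration_error(probs, labels, n_bins=2)
```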

These newer metrics aim to overcome some of the theoretical and interpretative limitations of earlier calibration metrics.

Multivariate calibration methods exist for transforming classifier scores into class membership probabilities in the case of more than two classes.

In a different statistical sense, calibration can also denote the reverse of regression, where a known observation is used to infer the value of an explanatory variable. One example is that of dating objects, using observable evidence such as tree rings for dendrochronology or carbon-14 for radiometric dating.
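For the case of more than two classes, one common approach is temperature scaling: the logit vector is divided by a scalar T > 0 before the softmax, with T chosen to minimize the negative log-likelihood on held-out data. The grid-search sketch below uses hypothetical function names and synthetic toy data.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)     # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=None):
    # Choose T minimizing the negative log-likelihood of softmax(logits / T).
    if grid is None:
        grid = np.linspace(0.5, 10.0, 96)
    n = len(labels)
    nlls = [-np.log(softmax(logits / T)[np.arange(n), labels] + 1e-12).mean()
            for T in grid]
    return grid[int(np.argmin(nlls))]

# Toy overconfident classifier: the predicted class gets logit 5 (others 0),
# but the prediction is correct only about 60% of the time.
n, k = 500, 3
rng = np.random.default_rng(0)
labels = rng.integers(0, k, size=n)
pred = labels.copy()
wrong = rng.random(n) > 0.6
pred[wrong] = (labels[wrong] + 1) % k
logits = np.zeros((n, k))
logits[np.arange(n), pred] = 5.0
T = fit_temperature(logits, labels)   # T > 1 softens overconfident outputs
```

Because temperature scaling rescales all logits by the same factor, it leaves the predicted class unchanged and only adjusts the confidence attached to it.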