In the theory of Probably Approximately Correct Machine Learning, the Natarajan dimension characterizes the complexity of learning a set of functions, generalizing from the Vapnik-Chervonenkis dimension for boolean functions to multi-class functions.
Originally introduced as the Generalized Dimension by Natarajan,[1] it was subsequently renamed the Natarajan Dimension by Haussler and Long.
Let $H$ be a set of functions from a set $X$ to a set $Y$. $H$ shatters a set $C \subseteq X$ if there exist two functions $f_0, f_1 \in H$ such that (1) for every $x \in C$, $f_0(x) \neq f_1(x)$, and (2) for every $B \subseteq C$, there exists a function $h \in H$ such that for all $x \in B$, $h(x) = f_0(x)$, and for all $x \in C - B$, $h(x) = f_1(x)$.
The Natarajan dimension of $H$ is the maximal cardinality of a set shattered by $H$.
When $|Y| = 2$, i.e. when $H$ is a set of boolean functions, the Natarajan dimension collapses to the Vapnik-Chervonenkis dimension.
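The definition can be made concrete with a brute-force check over a finite hypothesis class. The following is a minimal sketch, not from the article; the function names (n_shatters, natarajan_dimension) and the representation of hypotheses as dictionaries over a finite domain are illustrative assumptions.

```python
from itertools import combinations

def n_shatters(H, C):
    """Check whether H shatters C in the Natarajan sense: there exist f0, f1 in H
    that disagree on every point of C such that, for every subset B of C, some
    h in H agrees with f0 on B and with f1 on C - B."""
    for f0 in H:
        for f1 in H:
            if any(f0[x] == f1[x] for x in C):
                continue  # f0 and f1 must differ on every point of C
            if all(
                any(all(h[x] == (f0[x] if x in B else f1[x]) for x in C) for h in H)
                for r in range(len(C) + 1)
                for B in map(set, combinations(C, r))
            ):
                return True
    return False

def natarajan_dimension(H, X):
    """Largest cardinality of a subset of X that H shatters (0 if none)."""
    for d in range(len(X), 0, -1):
        if any(n_shatters(H, set(C)) for C in combinations(X, d)):
            return d
    return 0

# Toy example: three hypotheses over X = {0, 1} with labels in {a, b}.
X = [0, 1]
H = [{0: "a", 1: "a"}, {0: "b", 1: "b"}, {0: "a", 1: "b"}]
print(natarajan_dimension(H, X))  # prints 1: no two-point set is shattered
```

In the toy example, any single point is shattered (pick $f_0, f_1$ that disagree there), but the pair $\{0, 1\}$ is not, because no hypothesis maps $0 \mapsto b$ and $1 \mapsto a$.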
Shalev-Shwartz and Ben-David[3] present comprehensive material on multi-class learning and the Natarajan dimension, including uniform convergence and learnability.