Because computation is postponed until a new instance is observed, these algorithms are sometimes referred to as "lazy".[2]
It is called instance-based because it constructs hypotheses directly from the training instances themselves.[3]
This means that the hypothesis complexity can grow with the data:[3] in the worst case, a hypothesis is a list of n training items and the computational complexity of classifying a single new instance is O(n).
Examples of instance-based learning algorithms are the k-nearest neighbors algorithm, kernel machines and RBF networks.[2]
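As a concrete illustration of the lazy, O(n) classification step described above, the following is a minimal k-nearest-neighbors sketch in Python; the Euclidean metric, the NumPy array layout, and the function name are illustrative assumptions, not details taken from the cited sources:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # Lazy classification: nothing is precomputed at training time.
    # Each query scans all n stored instances, so one prediction
    # costs O(n) distance computations.
    # X_train: (n, d) array of stored instances; y_train: (n,) labels.
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every instance
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority vote among the k
```

Note that the entire training set must be kept in memory, which motivates the instance reduction methods discussed next.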
To reduce the memory cost of storing all training instances, as well as the risk of overfitting to noise in the training set, instance reduction algorithms have been proposed.
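One classic algorithm of this kind is Hart's condensed nearest neighbour rule, sketched roughly below; the text does not single out this method, and the NumPy representation and function name are assumptions for illustration:

```python
import numpy as np

def condense(X, y):
    # Hart's rule: seed the retained subset with one instance, then add a
    # training instance only when the current subset misclassifies it
    # under 1-NN. The result is a (usually much smaller) subset that
    # classifies the full training set consistently.
    # X: (n, d) array of instances; y: (n,) array of labels.
    keep = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            dists = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(dists)] != y[i]:  # misclassified: retain it
                keep.append(i)
                changed = True
    return X[keep], y[keep]
```

Because only the retained subset is stored and scanned at query time, both the memory footprint and the per-query cost drop from the full n to the size of the condensed set.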