The Huber loss function describes the penalty incurred by an estimation procedure f. Huber (1964) defines the loss function piecewise by[1]

$$L_\delta(a) = \begin{cases} \frac{1}{2}a^2 & \text{for } |a| \le \delta, \\[4pt] \delta\left(|a| - \frac{1}{2}\delta\right) & \text{otherwise.} \end{cases}$$

This function is quadratic for small values of $a$, and linear for large values, with equal values and slopes of the different sections at the two points where $|a| = \delta$.
The variable $a$ often refers to the residuals, that is, to the difference between the observed and predicted values $a = y - f(x)$, so the former can be expanded to[2]

$$L_\delta(y, f(x)) = \begin{cases} \frac{1}{2}\bigl(y - f(x)\bigr)^2 & \text{for } |y - f(x)| \le \delta, \\[4pt] \delta\left(|y - f(x)| - \frac{1}{2}\delta\right) & \text{otherwise.} \end{cases}$$

The Huber loss is the convolution of the absolute value function with the rectangular function, scaled and translated.
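For concreteness, here is a minimal NumPy sketch of the piecewise definition above; the function name huber_loss, the default delta=1.0, and the example values are illustrative choices, not part of any standard API.

```python
import numpy as np

def huber_loss(a, delta=1.0):
    """Huber loss: quadratic for |a| <= delta, linear beyond."""
    a = np.asarray(a, dtype=float)
    return np.where(np.abs(a) <= delta,
                    0.5 * a**2,                       # quadratic region
                    delta * (np.abs(a) - 0.5 * delta))  # linear region

# Residuals a = y - f(x) for a hypothetical prediction
y_true = np.array([1.0, 2.0, 3.0, 10.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.0])
print(huber_loss(y_true - y_pred, delta=1.0))
# The large residual (7.0) incurs 6.5, not 24.5 as under squared loss.
```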
The squared loss function results in an arithmetic mean-unbiased estimator, and the absolute-value loss function results in a median-unbiased estimator (in the one-dimensional case, and a geometric median-unbiased estimator for the multi-dimensional case).
The squared loss has the disadvantage that it tends to be dominated by outliers: when summing over a set of $a$-values (as in $\sum_{i=1}^{n} L(a_i)$), the total is influenced too much by a few particularly large values when the distribution is heavy-tailed; in terms of estimation theory, the asymptotic relative efficiency of the mean is poor for heavy-tailed distributions.
As defined above, the Huber loss function is strongly convex in a uniform neighborhood of its minimum $a = 0$; at the boundary of this uniform neighborhood, the Huber loss function has a differentiable extension to an affine function at the points $a = -\delta$ and $a = \delta$.
These properties allow it to combine much of the sensitivity of the mean-unbiased, minimum-variance estimator of the mean (using the quadratic loss function) and the robustness of the median-unbiased estimator (using the absolute value function).
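A small numerical sketch of this trade-off, assuming SciPy is available: minimizing the total squared loss over a location parameter recovers the outlier-sensitive sample mean, while minimizing the total Huber loss yields an estimate that stays near the bulk of the data. All names and data here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def huber_loss(a, delta=1.0):
    a = np.asarray(a, dtype=float)
    return np.where(np.abs(a) <= delta,
                    0.5 * a**2,
                    delta * (np.abs(a) - 0.5 * delta))

x = np.array([1.0, 1.2, 0.9, 1.1, 50.0])  # sample with one gross outlier

# Minimizing total squared loss over c gives the sample mean: 10.84,
# dragged far from the bulk of the data by the single outlier.
print(x.mean())

# Minimizing total Huber loss gives a robust location estimate.
res = minimize_scalar(lambda c: huber_loss(x - c, delta=1.0).sum())
print(res.x)  # ~1.3, close to the median of 1.1
```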
The Pseudo-Huber loss function can be used as a smooth approximation of the Huber loss function. It combines the best properties of L2 squared loss and L1 absolute loss by being strongly convex when close to the target/minimum and less steep for extreme values.
The scale at which the Pseudo-Huber loss function transitions from L2 loss for values close to the minimum to L1 loss for extreme values, and the steepness at extreme values, can be controlled by the $\delta$ value.
The Pseudo-Huber loss function ensures that derivatives are continuous for all degrees. It is defined as

$$L_\delta(a) = \delta^2\left(\sqrt{1 + (a/\delta)^2} - 1\right).$$

As such, this function approximates $a^2/2$ for small values of $a$, and approximates a straight line with slope $\delta$ for large values of $a$.
While the above is the most common form, other smooth approximations of the Huber loss function also exist.
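A minimal sketch of the Pseudo-Huber function and its two limiting regimes; the function name and sample values are illustrative only.

```python
import numpy as np

def pseudo_huber(a, delta=1.0):
    """Pseudo-Huber loss: delta**2 * (sqrt(1 + (a/delta)**2) - 1)."""
    a = np.asarray(a, dtype=float)
    return delta**2 * (np.sqrt(1.0 + (a / delta)**2) - 1.0)

a = np.array([0.01, 0.1, 5.0, 100.0])
print(pseudo_huber(a))   # smooth everywhere, infinitely differentiable
print(0.5 * a**2)        # matches the small entries (quadratic regime)
print(np.abs(a) - 1.0)   # close to the large entries (linear regime, delta = 1)
```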
For classification purposes, a variant of the Huber loss called modified Huber is sometimes used. Given a prediction $f(x)$ (a real-valued classifier score) and a true binary class label $y \in \{+1, -1\}$
, the modified Huber loss is defined as[6]

$$L(y, f(x)) = \begin{cases} \max\bigl(0, 1 - y\,f(x)\bigr)^2 & \text{for } y\,f(x) \ge -1, \\[4pt] -4\,y\,f(x) & \text{otherwise.} \end{cases}$$

The term $\max(0, 1 - y\,f(x))$
is the hinge loss used by support vector machines; the quadratically smoothed hinge loss is a generalization of $L$.[6]
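A minimal sketch of this variant, with illustrative names and data:

```python
import numpy as np

def modified_huber(y, score):
    """Modified Huber loss for labels y in {+1, -1} and real scores f(x)."""
    margin = np.asarray(y, dtype=float) * np.asarray(score, dtype=float)
    return np.where(margin >= -1.0,
                    np.maximum(0.0, 1.0 - margin)**2,  # smoothed hinge region
                    -4.0 * margin)                     # linear penalty for gross errors

y = np.array([1, -1, 1, -1])
scores = np.array([2.0, -0.5, -3.0, 4.0])
print(modified_huber(y, scores))  # [ 0.    0.25 12.   16.  ]
```

In practice this variant is available off the shelf, for example via loss='modified_huber' in scikit-learn's SGDClassifier.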
The Huber loss function is used in robust statistics, M-estimation and additive modelling.