They are linear combinations of order statistics (L-statistics) analogous to conventional moments, and can be used to calculate quantities analogous to standard deviation, skewness and kurtosis, termed the L-scale, L-skewness and L-kurtosis respectively (the L-mean is identical to the conventional mean).
Just as for conventional moments, a theoretical distribution has a set of population L-moments.
For a random variable X, the rth population L-moment is[1]

\lambda_r = \frac{1}{r} \sum_{k=0}^{r-1} (-1)^k \binom{r-1}{k} \operatorname{E}[X_{r-k:r}],

where X_{k:n} denotes the kth order statistic (kth smallest value) in an independent sample of size n from the distribution of X, and E denotes expected value.
In particular, the first four population L-moments are

\lambda_1 = \operatorname{E}[X],
\lambda_2 = \tfrac{1}{2} \operatorname{E}[X_{2:2} - X_{1:2}],
\lambda_3 = \tfrac{1}{3} \operatorname{E}[X_{3:3} - 2X_{2:3} + X_{1:3}],
\lambda_4 = \tfrac{1}{4} \operatorname{E}[X_{4:4} - 3X_{3:4} + 3X_{2:4} - X_{1:4}].

Note that the coefficients of the rth L-moment are the same as those in the rth term of the binomial transform, as used in the rth-order finite difference (the finite analog of the derivative).
The first two of these L-moments have conventional names: \lambda_1 is the mean, or L-location, and \lambda_2 is the L-scale. The L-scale is equal to half the mean absolute difference.
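As a quick numerical check of this identity, the following Python sketch (function names are illustrative) computes the sample L-scale from probability weighted moments, the estimator discussed later in this article, and compares it against half the mean absolute difference over all distinct pairs:

```python
from itertools import combinations

def l_scale(xs):
    """Sample L-scale l2 = 2*b1 - b0, via probability weighted moments."""
    x = sorted(xs)
    n = len(x)
    b0 = sum(x) / n
    # i is the 0-based rank, so the weight (i)/(n-1) matches (rank-1)/(n-1).
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    return 2 * b1 - b0

def half_mean_abs_difference(xs):
    """Half the mean absolute difference over all distinct pairs."""
    pairs = list(combinations(xs, 2))
    return 0.5 * sum(abs(a - b) for a, b in pairs) / len(pairs)

data = [3.1, 0.2, 4.7, 1.5, 2.9]
assert abs(l_scale(data) - half_mean_abs_difference(data)) < 1e-12
```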
A closer connection can be found in terms of cumulative distribution functions (CDFs), since the expected values of order statistics satisfy

\operatorname{E}[X_{k:n}] = \frac{n!}{(k-1)!\,(n-k)!} \int x \,[F(x)]^{k-1} [1 - F(x)]^{n-k} \, dF(x).
This integral can often be made more tractable by introducing the quantile function x(F), giving

\lambda_r = \int_0^1 x(F)\, P^*_{r-1}(F)\, dF,

where P^*_{r-1} is the (r − 1)th shifted Legendre polynomial. This integral has the form of a generalised Fourier coefficient, and L-moments appeared as such in the literature years before being named moments.
In the notation of this article, Sillitto[6] proved that the quantile function can be expanded in shifted Legendre polynomials as

x(F) = \sum_{r=1}^{\infty} (2r - 1)\, \lambda_r\, P^*_{r-1}(F).

However, Hosking[1] cautions that partial sums of this series tend to give poor approximations in the tails of the distribution, and need not be monotonic.
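To make the series concrete, here is a small Python sketch (the function names are illustrative, not from any particular library) that evaluates a partial sum of the expansion. For the uniform distribution on (0, 1), whose L-moments are λ1 = 1/2, λ2 = 1/6 with all higher L-moments zero, the two-term partial sum recovers the quantile function x(F) = F exactly:

```python
def shifted_legendre(r, u):
    """Shifted Legendre polynomial P*_r(u) = P_r(2u - 1), via Bonnet's recurrence."""
    t = 2 * u - 1
    if r == 0:
        return 1.0
    p_prev, p = 1.0, t
    for k in range(1, r):
        p_prev, p = p, ((2 * k + 1) * t * p - k * p_prev) / (k + 1)
    return p

def quantile_from_l_moments(lmoms, u):
    """Partial sum of Sillitto's series: x(u) ~ sum_r (2r-1) * lambda_r * P*_{r-1}(u)."""
    return sum((2 * r - 1) * lam * shifted_legendre(r - 1, u)
               for r, lam in enumerate(lmoms, start=1))

# Uniform(0,1): lambda_1 = 1/2, lambda_2 = 1/6, higher L-moments vanish.
for u in (0.1, 0.5, 0.9):
    assert abs(quantile_from_l_moments([0.5, 1 / 6], u) - u) < 1e-12
```

For heavier-tailed distributions the same partial sums would illustrate Hosking's caution: the truncated series can misbehave in the tails.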
Direct estimators for the first four L-moments in a finite sample of n observations are:[7]

\ell_1 = \binom{n}{1}^{-1} \sum_{i} x_{(i)},
\ell_2 = \tfrac{1}{2} \binom{n}{2}^{-1} \sum_{i>j} \left( x_{(i)} - x_{(j)} \right),
\ell_3 = \tfrac{1}{3} \binom{n}{3}^{-1} \sum_{i>j>k} \left( x_{(i)} - 2x_{(j)} + x_{(k)} \right),
\ell_4 = \tfrac{1}{4} \binom{n}{4}^{-1} \sum_{i>j>k>l} \left( x_{(i)} - 3x_{(j)} + 3x_{(k)} - x_{(l)} \right),

where x_{(i)} is the ith order statistic and \binom{n}{k} is a binomial coefficient.
Sample L-moments can also be defined indirectly in terms of probability weighted moments,[1][8][9] which leads to a more efficient algorithm for their computation.
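A sketch of that PWM-based computation in Python (the function name is illustrative). It uses the standard unbiased PWM estimators b0..b3 and the linear relations l1 = b0, l2 = 2b1 − b0, l3 = 6b2 − 6b1 + b0, l4 = 20b3 − 30b2 + 12b1 − b0, so it needs only a sort and a single pass rather than sums over all subsets:

```python
def sample_l_moments(xs):
    """First four sample L-moments via probability weighted moments.

    Accumulates the unbiased PWM estimators b0..b3 from the sorted sample
    (i below is the 0-based rank), then applies the linear relations to
    obtain l1..l4. Requires n >= 4.
    """
    x = sorted(xs)
    n = len(x)
    b0 = b1 = b2 = b3 = 0.0
    for i, xi in enumerate(x):
        b0 += xi
        b1 += i * xi / (n - 1)
        b2 += i * (i - 1) * xi / ((n - 1) * (n - 2))
        b3 += i * (i - 1) * (i - 2) * xi / ((n - 1) * (n - 2) * (n - 3))
    b0, b1, b2, b3 = b0 / n, b1 / n, b2 / n, b3 / n
    return (b0,
            2 * b1 - b0,
            6 * b2 - 6 * b1 + b0,
            20 * b3 - 30 * b2 + 12 * b1 - b0)

print(sample_l_moments([1, 2, 3, 4, 5]))  # (3.0, 1.0, 0.0, 0.0)
```

For the symmetric, evenly spaced sample above, the third and fourth sample L-moments vanish, as expected.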
Tighter bounds can be found for some specific L-moment ratios; in particular, the L-kurtosis \tau_4 lies in [-\tfrac{1}{4}, 1), and satisfies

\tau_4 \ge \tfrac{1}{4}\left( 5\tau_3^2 - 1 \right).

A quantity analogous to the coefficient of variation, but based on L-moments, can also be defined: \tau = \lambda_2 / \lambda_1, the coefficient of L-variation (L-CV).
For a non-negative random variable, this lies in the interval (0, 1)[1] and is identical to the Gini coefficient.
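A brief numeric check of that identity, sketched in Python (names are illustrative). The sample Gini coefficient here uses the n(n − 1) pairwise denominator so that it matches the unbiased sample L-moment ratio:

```python
from itertools import combinations

def l_cv(xs):
    """Sample coefficient of L-variation: l2 / l1, via PWMs."""
    x = sorted(xs)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    return (2 * b1 - b0) / b0

def gini(xs):
    """Sample Gini coefficient with the n(n-1)/2 distinct-pair denominator."""
    n = len(xs)
    mean = sum(xs) / n
    total = sum(abs(a - b) for a, b in combinations(xs, 2))
    return total / (n * (n - 1) / 2) / (2 * mean)

data = [1, 2, 3, 4, 5]
assert abs(l_cv(data) - gini(data)) < 1e-12
```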
[11] L-moments are statistical quantities derived from probability weighted moments[12] (PWM), which were defined earlier (1979).[8] PWMs are used to efficiently estimate the parameters of distributions expressible in inverse form, such as the Gumbel,[9] Tukey lambda, and Wakeby distributions.
There are two common ways that L-moments are used, in both cases analogously to the conventional moments: as summary statistics for data, and to derive estimators for the parameters of probability distributions. Besides being done with standard moments, the latter (estimation) is more commonly done using maximum likelihood methods; however, using L-moments provides a number of advantages.
One disadvantage of L-moment ratios for estimation is their typically smaller sensitivity. For instance, the Laplace distribution has a kurtosis of 6 and weak exponential tails, yet has a larger fourth L-moment ratio than, for example, the Student's t-distribution with 3 degrees of freedom, which has infinite kurtosis and much heavier tails.
Consequently, L-moments are far more meaningful when dealing with outliers in data than conventional moments.
However, other methods exist that are better suited to achieving even higher robustness than simply replacing moments by L-moments.
One example of this is using L-moments as summary statistics in extreme value theory (EVT).
This application shows the limited robustness of L-moments: L-statistics are not resistant statistics, since a single extreme value can throw them off; but because they are only linear in the data (not higher-order statistics), they are less affected by extreme values than conventional moments.
Another advantage L-moments have over conventional moments is that their existence only requires the random variable to have finite mean, so the L-moments exist even if the higher conventional moments do not exist (for example, for Student's t distribution with low degrees of freedom).
[1] Some appearances of L-moments in the statistical literature include the book by David & Nagaraja (2003, Section 9.9)[13] and a number of papers.
[11][14][15][16][17][18] A number of favourable comparisons of L-moments with ordinary moments have been reported.
[19][20] The table below gives expressions for the first two L-moments and numerical values of the first two L-moment ratios of some common continuous probability distributions with constant L-moment ratios.
[1][5] More complex expressions have been derived for some further distributions for which the L-moment ratios vary with one or more of the distributional parameters, including the log-normal, Gamma, generalized Pareto, generalized extreme value, and generalized logistic distributions.
[1] The notation for the parameters of each distribution is the same as that used in the linked article.
In the expression for the mean of the Gumbel distribution, γe is the Euler–Mascheroni constant 0.5772156649….