High-dimensional statistics

In statistical theory, the field of high-dimensional statistics studies data whose dimension is larger (relative to the number of datapoints) than typically considered in classical multivariate analysis.

The area arose owing to the emergence of many modern data sets in which the dimension of the data vectors may be comparable to, or even larger than, the sample size, so that justification for the use of traditional techniques, often based on asymptotic arguments with the dimension held fixed as the sample size increased, was lacking.[1][2]

There are several notions of high-dimensional analysis of statistical methods, including non-asymptotic results, which hold for finite $n$ and $p$ (the sample size and dimension, respectively), and Kolmogorov asymptotics, which study the regime in which the ratio $n/p$ converges to a fixed finite value.

The most basic statistical model for the relationship between a covariate vector $x \in \mathbb{R}^p$ and a response variable $y \in \mathbb{R}$ is the linear model

$$y = x^\top \beta + \epsilon,$$

where $\beta \in \mathbb{R}^p$ is an unknown vector of regression coefficients and $\epsilon$ is random noise with mean zero and variance $\sigma^2$. Given independent responses $y_1, \ldots, y_n$, with corresponding covariates $x_1, \ldots, x_n$, from this model, we can form the response vector $Y = (y_1, \ldots, y_n)^\top$ and the design matrix $X = (x_1, \ldots, x_n)^\top \in \mathbb{R}^{n \times p}$. When $n \geq p$ and the design matrix has full column rank (i.e. its columns are linearly independent), the ordinary least squares estimator of $\beta$ is

$$\hat{\beta} := (X^\top X)^{-1} X^\top Y.$$

This estimator is unbiased, and the Gauss-Markov theorem tells us that it is the Best Linear Unbiased Estimator.
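To make this concrete, the following minimal sketch (in Python with NumPy; the dimensions, coefficients and noise level are illustrative choices, not taken from the text) simulates data from the linear model and computes $\hat{\beta}$ via the closed-form expression above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5                        # low-dimensional regime: n >> p

beta = rng.normal(size=p)            # true regression coefficients
X = rng.normal(size=(n, p))          # design matrix with full column rank
Y = X @ beta + rng.normal(size=n)    # responses from the linear model

# Ordinary least squares: beta_hat = (X^T X)^{-1} X^T Y.
# Solve the normal equations rather than forming the inverse explicitly.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

print("estimation error:", np.linalg.norm(beta_hat - beta))
```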

However, when $p$ is of comparable magnitude to $n$, the matrix $X^\top X$ may become ill-conditioned, with a small minimum eigenvalue. In such circumstances, the mean squared error

$$\mathbb{E}\,\|\hat{\beta} - \beta\|^2 = \sigma^2 \,\mathrm{tr}\bigl((X^\top X)^{-1}\bigr)$$

will be large (since the trace of a matrix is the sum of its eigenvalues, and the eigenvalues of $(X^\top X)^{-1}$ are the reciprocals of those of $X^\top X$, a small minimum eigenvalue of $X^\top X$ makes this trace large).
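The following short simulation (again a sketch with illustrative parameters; a standard Gaussian design is assumed) shows how the minimum eigenvalue of $X^\top X$ collapses and the mean squared error of ordinary least squares blows up as $p$ approaches $n$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 100, 1.0

for p in [10, 50, 90, 99]:
    X = rng.normal(size=(n, p))          # standard Gaussian design
    eigvals = np.linalg.eigvalsh(X.T @ X)
    # tr((X^T X)^{-1}) equals the sum of reciprocal eigenvalues of X^T X
    mse = sigma2 * np.sum(1.0 / eigvals)
    print(f"p = {p:3d}  min eigenvalue = {eigvals.min():10.3f}  "
          f"MSE of OLS = {mse:10.3f}")
```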

It is important to note that the deterioration in estimation performance in high dimensions observed in the previous paragraph is not limited to the ordinary least squares estimator.

In fact, statistical inference in high dimensions is intrinsically hard, a phenomenon known as the curse of dimensionality, and it can be shown that no estimator can do better in a worst-case sense without additional information (see Example 15.10 of [2]).

Nevertheless, the situation in high-dimensional statistics may not be hopeless when the data possess some low-dimensional structure.

One common assumption for high-dimensional linear regression is that the vector of regression coefficients is sparse, in the sense that most coordinates of $\beta$ are zero. Many statistical procedures, including the Lasso, have been proposed to fit high-dimensional linear models under such sparsity assumptions.
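As a hedged illustration (using scikit-learn's Lasso; the dimensions, sparsity level and regularisation strength alpha=0.1 are arbitrary choices), the following sketch fits a sparse linear model in the regime $p > n$.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p, s = 50, 200, 5                 # high-dimensional regime: p > n

beta = np.zeros(p)
beta[:s] = 3.0                       # only s of the p coordinates are nonzero
X = rng.normal(size=(n, p))
Y = X @ beta + 0.5 * rng.normal(size=n)

# l1-regularised least squares; alpha controls the penalty strength
lasso = Lasso(alpha=0.1).fit(X, Y)

print("number of nonzero coefficients:", np.sum(lasso.coef_ != 0))
print("indices of largest fitted coefficients:",
      np.argsort(-np.abs(lasso.coef_))[:s])
```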

Another example of a high-dimensional statistical phenomenon can be found in the problem of covariance matrix estimation.

Suppose that we observe $X_1, \ldots, X_n \in \mathbb{R}^p$, which are i.i.d. draws from some zero mean distribution with an unknown covariance matrix $\Sigma \in \mathbb{R}^{p \times p}$. A natural unbiased estimator of $\Sigma$ is the sample covariance matrix

$$\hat{\Sigma} := \frac{1}{n} \sum_{i=1}^{n} X_i X_i^\top.$$

In the low-dimensional setting where $n$ increases and $p$ is held fixed, $\hat{\Sigma}$ is a consistent estimator of $\Sigma$ in any matrix norm. When $p$ grows with $n$, on the other hand, this consistency result may fail to hold.

As an illustration, suppose that each $X_i$ is drawn from a $N_p(0, I)$ distribution, so that $\Sigma = I$, and that $p/n$ converges to some $\alpha \in (0, 1)$. Then the largest and smallest eigenvalues of $\hat{\Sigma}$ concentrate around $(1 + \sqrt{\alpha})^2$ and $(1 - \sqrt{\alpha})^2$, respectively, according to the limiting distribution derived by Tracy and Widom, and these clearly deviate from the unit eigenvalues of $\Sigma$.

Further information on the asymptotic behaviour of the eigenvalues of $\hat{\Sigma}$ can be obtained from the Marchenko-Pastur law.
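The following simulation sketch (assuming NumPy; the aspect ratio $\alpha = p/n = 0.25$ is an arbitrary illustrative choice) draws $X_i \sim N_p(0, I)$ and compares the extreme eigenvalues of $\hat{\Sigma}$ with the limiting values $(1 \pm \sqrt{\alpha})^2$ described above.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 2000, 500                     # aspect ratio alpha = p/n = 0.25
alpha = p / n

X = rng.normal(size=(n, p))          # rows X_i ~ N_p(0, I), so Sigma = I
Sigma_hat = (X.T @ X) / n            # sample covariance matrix
eigvals = np.linalg.eigvalsh(Sigma_hat)

# Although every eigenvalue of Sigma equals one, the sample eigenvalues
# spread out over roughly [(1 - sqrt(alpha))^2, (1 + sqrt(alpha))^2].
print("largest eigenvalue :", eigvals.max(), " limit:", (1 + np.sqrt(alpha))**2)
print("smallest eigenvalue:", eigvals.min(), " limit:", (1 - np.sqrt(alpha))**2)
```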

From a non-asymptotic point of view, the maximum eigenvalue $\lambda_{\max}(\hat{\Sigma})$ of the sample covariance matrix satisfies

$$\lambda_{\max}(\hat{\Sigma}) \leq \bigl(1 + \sqrt{p/n} + \delta\bigr)^2$$

with probability at least $1 - e^{-n\delta^2/2}$, for any $\delta \geq 0$ and all choices of $n$ and $p$.[2] Again, additional low-dimensional structure is needed for successful covariance matrix estimation in high dimensions.

Examples of such structures include sparsity, low rankness and bandedness.
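As one hedged illustration of how such structure can be exploited, the sketch below soft-thresholds the off-diagonal entries of the sample covariance matrix to produce a sparse estimate; the threshold level is an arbitrary choice rather than a principled tuning, and this is only one of many structured estimators.

```python
import numpy as np

def soft_threshold_covariance(X, level):
    """Sparse covariance estimate: soft-threshold the off-diagonal
    entries of the sample covariance matrix."""
    n = X.shape[0]
    S = (X.T @ X) / n                      # sample covariance
    T = np.sign(S) * np.maximum(np.abs(S) - level, 0.0)
    np.fill_diagonal(T, np.diag(S))        # leave the diagonal untouched
    return T

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 50))
Sigma_tilde = soft_threshold_covariance(X, level=0.2)
print("fraction of zero entries:", np.mean(Sigma_tilde == 0))
```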

From an applied perspective, research in high-dimensional statistics was motivated by the realisation that advances in computing technology had dramatically increased the ability to collect and store data, and that traditional statistical techniques such as those described in the examples above were often ill-equipped to handle the resulting challenges.

Theoretical advances in the area can be traced back to the remarkable result of Charles Stein in 1956,[4] where he proved that the usual estimator of a multivariate normal mean was inadmissible with respect to squared error loss in three or more dimensions.

Indeed, the James-Stein estimator[5] provided the insight that in high-dimensional settings, one may obtain improved estimation performance through shrinkage, which reduces variance at the expense of introducing a small amount of bias.
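A minimal sketch of this shrinkage effect (assuming observations $X \sim N_p(\theta, I)$ with $p \geq 3$; the dimension, mean vector and number of Monte Carlo repetitions are arbitrary) compares the risk of the usual estimator with that of the James-Stein estimator.

```python
import numpy as np

rng = np.random.default_rng(5)
p, reps = 10, 10000

theta = np.ones(p)                          # unknown mean vector
X = theta + rng.normal(size=(reps, p))      # one N_p(theta, I) draw per repetition

# James-Stein shrinks the usual estimator X towards the origin
shrink = 1.0 - (p - 2) / np.sum(X**2, axis=1)
js = shrink[:, None] * X

mse_usual = np.mean(np.sum((X - theta) ** 2, axis=1))
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print("risk of usual estimator:", mse_usual)   # approximately p
print("risk of James-Stein    :", mse_js)      # strictly smaller for p >= 3
```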

This bias-variance tradeoff was further exploited in the context of high-dimensional linear models by Hoerl and Kennard in 1970 with the introduction of ridge regression.[6]

Another major impetus for the field was provided by Robert Tibshirani's work on the Lasso in 1996, which used $\ell_1$ regularisation to achieve simultaneous model selection and parameter estimation in high-dimensional sparse linear regression.[7] Since then, a large number of other shrinkage estimators have been proposed to exploit different low-dimensional structures in a wide range of high-dimensional statistical problems.

The following are examples of topics that have received considerable attention in the high-dimensional statistics literature in recent years:

Figure: Illustration of the linear model in high dimensions. A data set consists of a response vector $Y \in \mathbb{R}^n$ and a design matrix $X \in \mathbb{R}^{n \times p}$ with $p \gg n$. Our goal is to estimate the unknown vector $\beta \in \mathbb{R}^p$ of regression coefficients, where $\beta$ is often assumed to be sparse, in the sense that the cardinality of the set $S := \{ j : \beta_j \neq 0 \}$ is small by comparison with $p$.