Generalized method of moments

In econometrics and statistics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the data's distribution function may not be known, and therefore maximum likelihood estimation is not applicable.

The method requires that a certain number of moment conditions be specified for the model.

The GMM method then minimizes a certain norm of the sample averages of the moment conditions, and can therefore be thought of as a special case of minimum-distance estimation.

GMM was advocated by Lars Peter Hansen in 1982 as a generalization of the method of moments,[2] introduced by Karl Pearson in 1894.

A general assumption of GMM is that the data Y_t be generated by a weakly stationary ergodic stochastic process.

In order to apply GMM, we need to have "moment conditions", that is, we need to know a vector-valued function g(Y, θ) such that

m(\theta_0) \equiv \operatorname{E}[\,g(Y_t, \theta_0)\,] = 0,

where E denotes expectation and Y_t is a generic observation.
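As an illustration (not part of the original exposition), a linear regression with instrumental variables fits this framework: if y_t = x_t'θ_0 + ε_t and the instruments z_t are assumed uncorrelated with the error term, the moment function can be taken as

g(Y_t, \theta) = z_t\,(y_t - x_t^{\mathsf T}\theta), \qquad \operatorname{E}[g(Y_t, \theta_0)] = \operatorname{E}[z_t\,\varepsilon_t] = 0,

where Y_t = (y_t, x_t, z_t).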

The basic idea behind GMM is to replace the theoretical expected value E[⋅] with its empirical analog, the sample average,

\hat{m}(\theta) \equiv \frac{1}{T}\sum_{t=1}^{T} g(Y_t, \theta),

and then to minimize the norm of this expression with respect to θ.

The properties of the resulting estimator will depend on the particular choice of the norm function, and therefore the theory of GMM considers an entire family of norms, defined as

\| \hat{m}(\theta) \|^2_W = \hat{m}(\theta)^{\mathsf T}\, W\, \hat{m}(\theta),

where W is a positive-definite weighting matrix and \hat{m}(\theta)^{\mathsf T} denotes the transpose of \hat{m}(\theta).

In practice, the weighting matrix W is computed based on the available data set, and the resulting estimate will be denoted as \hat{W}_T. Thus, the GMM estimator can be written as

\hat{\theta} = \arg\min_{\theta \in \Theta} \left( \frac{1}{T}\sum_{t=1}^{T} g(Y_t, \theta) \right)^{\mathsf T} \hat{W}_T \left( \frac{1}{T}\sum_{t=1}^{T} g(Y_t, \theta) \right).
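A minimal numerical sketch of this criterion, assuming a user-supplied moment function g(y, theta) that returns the k-vector of moment conditions for a single observation (the function names and arguments here are illustrative, not from the original text):

```python
import numpy as np
from scipy.optimize import minimize

def gmm_criterion(theta, data, g, W):
    """Sample GMM objective: m_hat(theta)' W m_hat(theta)."""
    moments = np.array([g(y, theta) for y in data])  # shape (T, k)
    m_hat = moments.mean(axis=0)                     # sample average of g
    return m_hat @ W @ m_hat

# Example call (hypothetical moment function and data):
# theta_hat = minimize(gmm_criterion, theta_init,
#                      args=(data, g, np.eye(k)), method="Nelder-Mead").x
```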

Consistency is a statistical property of an estimator stating that, having a sufficient number of observations, the estimator will converge in probability to the true value of the parameter:

\hat{\theta} \xrightarrow{p} \theta_0 \quad \text{as } T \to \infty.

Sufficient conditions for a GMM estimator to be consistent are as follows:

1. \hat{W}_T \xrightarrow{p} W, where W is a positive semi-definite matrix;
2. W\,\operatorname{E}[g(Y_t, \theta)] = 0 only for \theta = \theta_0;
3. the space of possible parameters \Theta is compact;
4. g(Y, \theta) is continuous at each \theta with probability one;
5. \operatorname{E}\big[\sup_{\theta \in \Theta} \| g(Y, \theta) \|\big] < \infty.

The second condition here (the so-called global identification condition) is often particularly hard to verify.

There exist simpler necessary but not sufficient conditions, which may be used to detect the non-identification problem: the order condition (the number of moment conditions k must be at least as large as the number of parameters l), and the local identification condition (if g(Y, θ) is continuously differentiable in a neighborhood of θ_0, then the matrix G = E[∇_θ g(Y_t, θ_0)] must have full column rank). In practice applied econometricians often simply assume that global identification holds, without actually proving it.[3]: 2127
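The local (rank) condition can be checked numerically, for example by finite-differencing the sample moments; the sketch below assumes the same hypothetical moment function g(y, theta) as above:

```python
import numpy as np

def local_identification_rank(g, data, theta, eps=1e-6):
    """Finite-difference estimate of G = E[d g / d theta'] at theta,
    followed by a column-rank check (full rank <=> local identification)."""
    theta = np.asarray(theta, dtype=float)
    m_hat = lambda th: np.mean([g(y, th) for y in data], axis=0)
    k, l = len(m_hat(theta)), len(theta)
    G = np.zeros((k, l))
    for j in range(l):
        step = np.zeros(l)
        step[j] = eps
        G[:, j] = (m_hat(theta + step) - m_hat(theta - step)) / (2 * eps)
    return np.linalg.matrix_rank(G) == l
```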

Asymptotic normality is a useful property, as it allows us to construct confidence bands for the estimator and to conduct different tests.

Before we can make a statement about the asymptotic distribution of the GMM estimator, we need to define two auxiliary matrices:

G = \operatorname{E}[\,\nabla_{\theta}\, g(Y_t, \theta_0)\,], \qquad \Omega = \operatorname{E}[\,g(Y_t, \theta_0)\, g(Y_t, \theta_0)^{\mathsf T}\,].

Then under suitable regularity conditions the GMM estimator will be asymptotically normal with limiting distribution

\sqrt{T}\,\big(\hat{\theta} - \theta_0\big) \ \xrightarrow{d}\ \mathcal{N}\!\Big(0,\ (G^{\mathsf T} W G)^{-1}\, G^{\mathsf T} W\, \Omega\, W G\, (G^{\mathsf T} W G)^{-1}\Big).
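As a small computational aid (illustrative only), the sandwich variance above can be evaluated directly once estimates of G, W, and Ω are available:

```python
import numpy as np

def gmm_sandwich_variance(G, W, Omega):
    """Limiting variance (G'WG)^{-1} G'W Omega W G (G'WG)^{-1}."""
    bread = np.linalg.inv(G.T @ W @ G)
    return bread @ G.T @ W @ Omega @ W @ G @ bread
```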

The smallest asymptotic variance within this family is achieved by taking the weighting matrix W proportional to Ω^{-1}; even then, only an infinite number of orthogonality conditions would attain the smallest possible variance, the Cramér–Rao bound. In this case the formula for the asymptotic distribution of the GMM estimator simplifies to

\sqrt{T}\,\big(\hat{\theta} - \theta_0\big) \ \xrightarrow{d}\ \mathcal{N}\!\Big(0,\ \big(G^{\mathsf T}\, \Omega^{-1} G\big)^{-1}\Big).

The proof that such a choice of weighting matrix is indeed locally optimal is often adopted with slight modifications when establishing efficiency of other estimators.
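The simplification follows by substituting W = Ω^{-1} into the sandwich formula above:

(G^{\mathsf T}\Omega^{-1}G)^{-1}\, G^{\mathsf T}\Omega^{-1}\,\Omega\,\Omega^{-1}G\,(G^{\mathsf T}\Omega^{-1}G)^{-1} = (G^{\mathsf T}\Omega^{-1}G)^{-1}\,(G^{\mathsf T}\Omega^{-1}G)\,(G^{\mathsf T}\Omega^{-1}G)^{-1} = (G^{\mathsf T}\Omega^{-1}G)^{-1}.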

One difficulty with implementing the outlined method is that we cannot take W = Ω^{-1} because, by the definition of the matrix Ω, we need to know the value of θ_0 in order to compute this matrix, and θ_0 is precisely the quantity we do not know and are trying to estimate in the first place.

Several approaches exist to deal with this issue, the first one being the most popular: two-step feasible GMM. In the first step one takes W = I (the identity matrix, or some other positive-definite matrix) and computes a preliminary estimate \hat{\theta}_{(1)}, which is consistent although not efficient; in the second step \hat{\theta}_{(1)} is used to estimate Ω, and the model is re-estimated with W = \hat{\Omega}^{-1}. In the case of Y_t being iid we can estimate the efficient weighting matrix as

\hat{W}_T(\hat{\theta}_{(1)}) = \left( \frac{1}{T}\sum_{t=1}^{T} g(Y_t, \hat{\theta}_{(1)})\, g(Y_t, \hat{\theta}_{(1)})^{\mathsf T} \right)^{-1}.

Another important issue in the implementation of the minimization procedure is that it must search through a (possibly high-dimensional) parameter space Θ and find the value of θ which minimizes the objective function.

No generic recommendation for such a procedure exists; it is a subject of its own field, numerical optimization.
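Putting the pieces together, a minimal sketch of the two-step feasible GMM procedure (iid data assumed, the same hypothetical moment function g(y, theta) as above, and a generic numerical optimizer standing in for whatever method suits the problem):

```python
import numpy as np
from scipy.optimize import minimize

def two_step_gmm(g, data, theta_init):
    """Two-step feasible GMM sketch for iid observations."""
    def m_hat(theta):
        return np.mean([g(y, theta) for y in data], axis=0)

    def criterion(theta, W):
        m = m_hat(theta)
        return m @ W @ m

    k = len(m_hat(np.asarray(theta_init, dtype=float)))

    # Step 1: W = I gives a consistent but inefficient preliminary estimate.
    theta1 = minimize(criterion, theta_init, args=(np.eye(k),),
                      method="Nelder-Mead").x

    # Step 2: estimate the efficient weighting matrix W = Omega^{-1}
    # from the step-1 moments, then re-minimize.
    moms = np.array([g(y, theta1) for y in data])
    W_hat = np.linalg.inv(moms.T @ moms / len(data))
    theta2 = minimize(criterion, theta1, args=(W_hat,),
                      method="Nelder-Mead").x
    return theta2, W_hat
```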

When the number of moment conditions is greater than the dimension of the parameter vector θ, the model is said to be over-identified.
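For instance (an illustrative case, not from the original text), a linear instrumental-variables regression with l = 2 coefficients and k = 3 instruments yields three moment conditions E[z_t(y_t − x_t'θ)] = 0 for only two unknowns, so the model is over-identified with k − l = 1 over-identifying restriction.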

Sargan (1958) proposed tests for over-identifying restrictions based on instrumental variables estimators that are distributed in large samples as chi-squared variables with degrees of freedom that depend on the number of over-identifying restrictions.

Subsequently, Hansen (1982) applied this test to the mathematically equivalent formulation of GMM estimators.

Note, however, that such statistics can be negative in empirical applications where the models are misspecified, and likelihood ratio tests can yield insights since the models are estimated under both null and alternative hypotheses (Bhargava and Sargan, 1983).

The GMM method has then replaced the problem of solving the equation \hat{m}(\theta) = 0, which chooses θ to match the restrictions exactly, by a minimization calculation. The minimization can always be conducted even when no value of θ satisfies all the moment conditions simultaneously, and the J-test checks whether the minimized criterion is small enough for the over-identifying restrictions to be consistent with the data. Formally, consider the two hypotheses H_0: m(θ_0) = 0 (the model is valid) and H_1: m(θ) ≠ 0 for all θ (the model is invalid). Under the null hypothesis H_0, the following so-called J-statistic is asymptotically chi-squared distributed with k − l degrees of freedom:

J \equiv T \cdot \hat{m}(\hat{\theta})^{\mathsf T}\, \hat{W}_T\, \hat{m}(\hat{\theta}) \ \xrightarrow{d}\ \chi^2_{k-l},

where \hat{\theta} is the GMM estimator, k is the number of moment conditions, l is the number of estimated parameters, and \hat{W}_T \xrightarrow{p} \Omega^{-1} is the efficient weighting matrix (note that previously we only required that W be proportional to Ω^{-1} for the estimator to be efficient; however, in order to conduct the J-test W must be exactly equal to Ω^{-1}, not simply proportional to it). Under the alternative hypothesis H_1, the J-statistic is asymptotically unbounded: J \xrightarrow{p} \infty. To conduct the test we compute the value of J from the data (it is a nonnegative number) and compare it with, say, the 0.95 quantile of the \chi^2_{k-l} distribution: H_0 is rejected at the 5% level if J exceeds this quantile, and not rejected otherwise.
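A small sketch of this computation (again with the hypothetical moment function g(y, theta); theta_hat and W_hat would come from an efficient GMM fit such as the two-step procedure above):

```python
import numpy as np
from scipy.stats import chi2

def hansen_j_test(g, data, theta_hat, W_hat, n_params):
    """J = T * m_hat' W_hat m_hat, compared with a chi-squared(k - l) law."""
    moments = np.array([g(y, theta_hat) for y in data])  # shape (T, k)
    T, k = moments.shape
    m_hat = moments.mean(axis=0)
    J = T * m_hat @ W_hat @ m_hat
    p_value = chi2.sf(J, df=k - n_params)  # reject H0 for small p-values
    return J, p_value
```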

Many other popular estimation techniques can be cast in terms of GMM optimization, including ordinary least squares, instrumental variables, and maximum likelihood. The article on the method of moments describes an alternative to the original (non-generalized) method of moments (MoM), and provides references to some applications and a list of theoretical advantages and disadvantages relative to the traditional method.

This Bayesian-Like MoM (BL-MoM) is distinct from all the related methods described above, which are subsumed by the GMM.[5][6]

The literature does not contain a direct comparison between the GMM and the BL-MoM in specific applications.