Rather than assuming that observed choices result from static utility maximization, DDC models assume that they result from an agent's maximization of the present discounted value of utility, generalizing the utility theory upon which discrete choice models are based.
[1] The goal of DDC methods is to estimate the structural parameters of the agent's decision process.
Once these parameters are known, the researcher can then use the estimates to simulate how the agent would behave in a counterfactual state of the world.
(For example, how a prospective college student's enrollment decision would change in response to a tuition increase.)
The agent's maximization problem can be written mathematically as follows:

$$\max_{\{d_t\}_{t=1}^{T}} \; \mathbb{E}\left[\sum_{t=1}^{T} \beta^{t} U(x_t, d_t, \varepsilon_t)\right],$$

where $x_t$ denotes the observed state variables, $\varepsilon_t$ the unobserved state variables, $d_t$ the choice made in period $t$, and $\beta \in (0,1)$ the discount factor. It is standard to impose the following simplifying assumptions and notation on the dynamic decision problem:

1. The flow utility can be written as an additive sum, consisting of deterministic and stochastic elements:
$$U(x_t, d_t, \varepsilon_t) = u(x_t, d_t; \theta) + \varepsilon_{d_t t}.$$
2. The deterministic component can be written as a linear function of the structural parameters $\theta$:
$$u(x_t, d_t; \theta) = X_{d_t t}' \theta,$$
where $X_{d_t t}$ is a vector of observed covariates associated with choice $d_t$.
3. The observed state variables follow a Markov process with transition density $p(x_{t+1} \mid x_t, d_t)$; the expectation over state transitions is accomplished by taking the integral over this probability distribution.

The problem can then be stated recursively in terms of the conditional value function of choice $d$,
$$v_d(x_t) = u(x_t, d; \theta) + \beta \int \bar{V}(x_{t+1})\, p(x_{t+1} \mid x_t, d)\, dx_{t+1},$$
where $\bar{V}$ denotes the expected value function integrated over the unobserved states. Writing the conditional value function in this way is useful in constructing formulas for the choice probabilities.

To write down the choice probabilities, the researcher must make an assumption about the distribution of the unobserved terms $\varepsilon_t$. If, for example, the resulting choice model is multinomial logit (i.e. the $\varepsilon_{dt}$ are drawn iid from the Type I extreme value distribution), the formulas for the choice probabilities are:
$$P(d \mid x_t) = \frac{\exp(v_d(x_t))}{\sum_{d'} \exp(v_{d'}(x_t))}.$$

Estimation of dynamic discrete choice models is particularly challenging, because the researcher must solve the backwards recursion problem for each guess of the structural parameters.
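Under the Type I extreme value assumption, the choice probabilities take a softmax form in the conditional value functions. A minimal numerical sketch (the function name and the example values are illustrative, not part of any particular implementation):

```python
import numpy as np

def choice_probabilities(v):
    """Multinomial logit choice probabilities from conditional value
    functions v_d(x): P(d | x) = exp(v_d) / sum_d' exp(v_d').
    Subtracting the max before exponentiating avoids overflow."""
    z = np.exp(v - np.max(v))
    return z / z.sum()

# Example: two alternatives with conditional values 1.0 and 0.0.
p = choice_probabilities(np.array([1.0, 0.0]))
# p ≈ [0.731, 0.269]
```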
Because of this complexity, different solution methods can be employed; they are commonly divided into full-solution methods and non-solution methods.
The foremost example of a full-solution method is the nested fixed point (NFXP) algorithm developed by John Rust in 1987.
[2] The NFXP algorithm is described in great detail in its documentation manual.
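The nested structure of a full-solution method can be illustrated with a stylized keep/replace model: an inner loop solves the value function fixed point by successive approximation, and an outer likelihood objective re-solves that fixed point at every trial parameter value. This is a schematic sketch under invented model primitives (cost function, transition matrix, data), not Rust's optimized implementation:

```python
import numpy as np

def solve_values(theta, beta, states, trans, tol=1e-10, max_iter=5000):
    """Inner loop: iterate the Bellman operator to a fixed point for a
    stylized keep (d=0) / replace (d=1) model with logit shocks."""
    cost, RC = theta
    # Flow utilities: maintenance cost rises with the state; replacing
    # pays a fixed cost RC and resets the state to 0 next period.
    u = np.column_stack([-cost * states, -RC * np.ones_like(states)])
    V = np.zeros(len(states))  # integrated (log-sum) value function
    for _ in range(max_iter):
        v = np.column_stack([u[:, 0] + beta * trans @ V,   # keep
                             u[:, 1] + beta * V[0]])       # replace -> state 0
        m = v.max(axis=1)
        V_new = m + np.log(np.exp(v - m[:, None]).sum(axis=1))
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return u, V

def log_likelihood(theta, beta, states, trans, x_obs, d_obs):
    """Outer objective: evaluated only after the inner fixed point has
    been solved for the current trial theta (the 'nesting')."""
    u, V = solve_values(theta, beta, states, trans)
    v = np.column_stack([u[:, 0] + beta * trans @ V,
                         u[:, 1] + beta * V[0]])
    logp = v - np.logaddexp(v[:, 0], v[:, 1])[:, None]
    return logp[x_obs, d_obs].sum()

# Toy setup: 5 mileage bins; the state drifts upward under "keep".
states = np.arange(5.0)
trans = np.zeros((5, 5))
for i in range(4):
    trans[i, i], trans[i, i + 1] = 0.6, 0.4
trans[4, 4] = 1.0

ll = log_likelihood((0.5, 3.0), 0.95, states, trans,
                    np.array([0, 2, 4]), np.array([0, 0, 1]))
```

An outer optimizer would maximize `log_likelihood` over `theta`; each evaluation pays the full cost of the inner fixed point computation, which is exactly what makes full-solution estimation expensive.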
[3] Later work by Che-Lin Su and Kenneth Judd, published in 2012,[4] implements another approach (dismissed as intractable by Rust in 1987), which uses constrained optimization of the likelihood function, a special case of mathematical programming with equilibrium constraints (MPEC).
Once the constrained optimization problem is solved, both the structural parameters that maximize the likelihood and the corresponding solution of the model are recovered.
In a later article,[5] Rust and coauthors show that the speed advantage of MPEC over NFXP is not significant.
Yet, because the computations required by MPEC do not rely on the structure of the model, its implementation is much less labor intensive.
The leading non-solution method is conditional choice probabilities, developed by V. Joseph Hotz and Robert A. Miller.
[6] The bus engine replacement model developed in the seminal paper Rust (1987) is one of the first dynamic stochastic models of discrete choice estimated using real data, and continues to serve as a classical example of problems of this type.
[4] The model is a simple regenerative optimal stopping stochastic dynamic problem faced by the decision maker, Harold Zurcher, superintendent of maintenance at the Madison Metropolitan Bus Company in Madison, Wisconsin.
The error terms $\varepsilon_t$ represent the component of the utility observed by Harold Zurcher, but not by John Rust (the econometrician).
The densities $p(x_{t+1} \mid x_t, d_t)$ and $q(\varepsilon_t \mid x_t)$ are respectively the transition densities for the observed and unobserved state variables.
The sequences $\{x_t, d_t\}$ represent data on the state variable (odometer readings) and the decision (keep or replace) for each bus in each period.
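The transition density for the observed state can be estimated in a first stage directly from the data, for instance by frequency counts over discretized mileage bins. A sketch with made-up odometer readings (the data and bin count are invented for illustration):

```python
import numpy as np

# Hypothetical discretized odometer readings for one bus across months
# in which the engine was kept (a replacement would reset the sequence).
x = np.array([0, 0, 1, 1, 2, 3, 3, 4])

n_bins = 5
counts = np.zeros((n_bins, n_bins))
for xt, xt1 in zip(x[:-1], x[1:]):
    counts[xt, xt1] += 1

# Row-normalize counts into the estimated transition matrix
# p(x' | x, keep); rows never visited are left as zeros.
row_sums = counts.sum(axis=1, keepdims=True)
p_hat = np.divide(counts, row_sums,
                  out=np.zeros_like(counts), where=row_sums > 0)
```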
The joint algorithm, which solves the value function fixed point problem for each candidate value of the parameter vector inside an outer maximization of the likelihood, was named by John Rust the nested fixed point algorithm (NFXP).
Rust's implementation of the nested fixed point algorithm is highly optimized for this problem, using Newton–Kantorovich iterations to calculate the fixed point of the value function.
The MPEC method instead solves the constrained optimization problem:[4]
$$\max_{\theta,\, EV} \; L(\theta, EV) \quad \text{subject to} \quad EV = T(EV, \theta),$$
where $L$ denotes the likelihood and $T$ the Bellman operator whose fixed point defines the expected value function $EV$. This method is faster to compute than non-optimized implementations of the nested fixed point algorithm, and takes about as long as highly optimized implementations.
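The MPEC idea can be illustrated with a tiny self-contained example (not the Su–Judd code): the value function entries are treated as additional decision variables, and the Bellman equation is imposed as an equality constraint handed to a generic NLP solver. The two-state model, parameter values, and observations below are invented for the sketch:

```python
import numpy as np
from scipy.optimize import minimize

beta, RC = 0.9, 2.0
P_keep = np.array([[0.5, 0.5],   # mileage transition under "keep"
                   [0.0, 1.0]])
# Invented observations: (state, decision), with decision 1 = replace.
data = [(0, 0), (0, 0), (1, 0), (1, 1)]

def cond_values(c, V):
    """Conditional value functions for keep (d=0) and replace (d=1)."""
    v_keep = -c * np.array([0.0, 1.0]) + beta * P_keep @ V
    v_rep = -RC + beta * V[0] * np.ones(2)   # replacement resets state to 0
    return v_keep, v_rep

def bellman_residual(z):
    """MPEC equality constraint: V must satisfy the Bellman equation."""
    c, V = z[0], z[1:]
    v_keep, v_rep = cond_values(c, V)
    return V - np.logaddexp(v_keep, v_rep)

def neg_log_lik(z):
    """Objective in both the parameter c and the value variables V."""
    c, V = z[0], z[1:]
    v_keep, v_rep = cond_values(c, V)
    logp = np.column_stack([v_keep, v_rep]) - np.logaddexp(v_keep, v_rep)[:, None]
    return -sum(logp[x, d] for x, d in data)

res = minimize(neg_log_lik, x0=np.array([1.0, 0.0, 0.0]),
               method="SLSQP",
               constraints={"type": "eq", "fun": bellman_residual})
```

The contrast with NFXP is that the fixed point is never solved exactly during intermediate iterations; feasibility of the Bellman constraint is only enforced at the solution.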
[5] The conditional choice probabilities method of Hotz and Miller can be applied in this setting.
Hotz, Miller, Sanders, and Smith proposed a computationally simpler version of the method, and tested it on a study of the bus engine replacement problem.
The method works by estimating conditional choice probabilities using simulation, then backing out the implied differences in value functions.
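The inversion step can be illustrated directly: with Type I extreme value errors, differences in conditional value functions equal differences in the logs of the conditional choice probabilities, $v_d(x) - v_0(x) = \ln P(d \mid x) - \ln P(0 \mid x)$. A sketch with made-up data, using simple frequency estimates of the CCPs (in practice a flexible first-stage estimator is used):

```python
import numpy as np

# Invented observations: rows are (state, choice) pairs.
data = np.array([[0, 0], [0, 0], [0, 1],
                 [1, 1], [1, 1], [1, 0]])

n_states, n_choices = 2, 2
ccp = np.zeros((n_states, n_choices))
for x in range(n_states):
    d = data[data[:, 0] == x, 1]
    ccp[x] = np.bincount(d, minlength=n_choices) / len(d)

# Hotz-Miller inversion under Type I extreme value errors:
# v_d(x) - v_0(x) = ln P(d|x) - ln P(0|x), with choice 0 as baseline.
value_diffs = np.log(ccp) - np.log(ccp[:, [0]])
```

These implied value differences can then be matched against the model's structural predictions without ever solving the dynamic program, which is the source of the method's computational savings.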