If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Carathéodory.
[1] Trajectory optimization first appeared in 1697, with the introduction of the brachistochrone problem: find the shape of a wire such that a bead sliding along it moves between two points in minimum time.
The first optimal control approaches grew out of the calculus of variations, based on the research of Gilbert Ames Bliss and Bryson[3] in America, and Pontryagin[4] in Russia.
Much of the early work in trajectory optimization was focused on computing rocket thrust profiles, both in a vacuum and in the atmosphere.
In these applications, the pilot followed a Mach-versus-altitude schedule based on optimal control solutions.
One of the important early problems in trajectory optimization was that of the singular arc, where Pontryagin's maximum principle fails to yield a complete solution.
An example of a problem with singular control is the optimization of the thrust profile of a missile that is launched at low speed and flies at a constant altitude.
This solution is the foundation of the boost-sustain rocket motor profile widely used today to maximize missile performance.
There are a wide variety of applications for trajectory optimization, primarily in robotics (industrial automation, manipulation, walking, and path planning) and in aerospace.
[6][7] One interesting application, demonstrated by the University of Pennsylvania GRASP Lab, is computing a trajectory that allows a quadrotor to fly through a hoop while the hoop is being thrown.
Another, from the ETH Zurich Flying Machine Arena, involves two quadrotors tossing a pole back and forth between them, balancing it like an inverted pendulum.
[8] Trajectory optimization is used in manufacturing, particularly for controlling chemical processes[9] or computing the desired path for robotic manipulators.
[13] Finally, trajectory optimization can be used for path planning of robots with complicated dynamics constraints, using reduced-complexity models.
The basic idea is similar to how you would aim a cannon: pick a set of parameters for the trajectory, simulate the entire thing, and then check to see if you hit the target.
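This guess-simulate-check loop can be sketched in a few lines. The cannon model, numbers, and bisection search below are purely illustrative assumptions, not part of any particular solver:

```python
import math

def landing_distance(angle_deg, speed=50.0, g=9.81, dt=1e-3):
    """Simulate a drag-free cannonball with Euler steps; return where it lands."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = speed * math.cos(theta), speed * math.sin(theta)
    while True:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
        if y <= 0.0 and vy < 0.0:   # back at ground level, falling
            return x

def aim(target, lo=1.0, hi=45.0, tol=1e-6):
    """Pick a trajectory parameter (launch angle), simulate, check the miss, repeat."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if landing_distance(mid) < target:
            lo = mid   # undershoot: raise the angle (below 45 deg, range grows with angle)
        else:
            hi = mid
    return 0.5 * (lo + hi)

angle = aim(150.0)   # angle that drops the shot 150 m away
```

Here the single decision variable is the launch angle; in a real shooting method the parameters are the discretized control inputs and the root-finding or optimization step is correspondingly higher-dimensional.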
[20][21] In pseudospectral discretization the entire trajectory is represented by a collection of basis functions in the time domain (independent variable).
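In the common Lagrange-polynomial form of this representation (a sketch of the standard setup, with symbols chosen here for illustration), the state is expanded through the collocation nodes and differentiated exactly:

```latex
x(t) \approx \sum_{k=0}^{N} x_k \,\phi_k(t),
\qquad
\dot{x}(\tau_i) \approx \sum_{k=0}^{N} D_{ik}\, x_k ,
```

where the \(\phi_k\) are the basis polynomials, the \(\tau_i\) are the collocation points, and \(D\) is the differentiation matrix; the dynamics \(\dot{x} = f(x,u)\) are then imposed as algebraic constraints at each \(\tau_i\).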
[22][23][24] When used to solve a trajectory optimization problem whose solution is smooth, a pseudospectral method will achieve spectral (exponential) convergence.
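The following sketch illustrates that exponential error decay on a smooth function, using a standard Chebyshev differentiation matrix (Trefethen's construction; not tied to any specific pseudospectral solver):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Chebyshev-Lobatto points."""
    x = np.cos(np.pi * np.arange(N + 1) / N)   # nodes on [-1, 1]
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                # diagonal via the negative-sum trick
    return D, x

# Differentiate a smooth function; the error falls exponentially with N.
errors = []
for N in (4, 8, 16, 24):
    D, x = cheb(N)
    errors.append(np.max(np.abs(D @ np.sin(x) - np.cos(x))))
```

For smooth solutions this spectral accuracy is what lets pseudospectral methods reach very tight tolerances with few nodes; for non-smooth solutions the convergence degrades to algebraic rates.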
[26][27] In 1990 Dewey H. Hodges and Robert R. Bless[28] proposed a weak Hamiltonian finite element method for optimal control problems.
Thus, given easily sampled random noise as input, the diffusion process will recover a plausible corresponding noise-free data point.
Recent methods[30][31] have parameterized trajectories as matrices of state-action pairs at consecutive time steps and trained a diffusion model to generate such a matrix.
When solving a trajectory optimization problem with an indirect method, you must explicitly construct the adjoint equations and their gradients.
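For a standard problem with running cost \(L\) and dynamics \(\dot{x} = f(x,u)\), these adjoint (costate) equations are the first-order conditions of Pontryagin's principle, written here in the usual minimization convention:

```latex
H(x, u, \lambda) = L(x, u) + \lambda^{\mathsf{T}} f(x, u),
\qquad
\dot{\lambda} = -\frac{\partial H}{\partial x},
\qquad
0 = \frac{\partial H}{\partial u}.
```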
When constructing the adjoint equations for an indirect method, the user must explicitly write down when the constraint is active in the solution, which is difficult to know a priori.
One solution is to use a direct method to compute an initial guess, which is then used to construct a multi-phase problem where the constraint is prescribed.
Orthogonal collocation methods are best for obtaining high-accuracy solutions to problems where the accuracy of the control trajectory is important.