Line search

In optimization, line search is a basic iterative approach to find a local minimum of an objective function.

Suppose first that f is a one-dimensional function, and assume that it is unimodal, that is, it contains exactly one local minimum in a given interval. There are several ways to find an (approximate) minimum point in this case.

Zero-order methods use only function evaluations (i.e., a value oracle), not derivatives.[1]: sec.5 Zero-order methods are very general: they do not assume differentiability or even continuity.[1]: sec.5
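
As an illustration, the following is a minimal sketch of one classic zero-order method, golden-section search; it assumes f is unimodal on the bracket [a, b], and the function name and tolerance are illustrative rather than taken from any particular library.

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Zero-order line search: locate a minimum of a unimodal f on [a, b]
    using only function evaluations (no derivatives)."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:            # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                  # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Example: minimize a smooth one-dimensional function on [0, 4]
x_min = golden_section_search(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 4.0)
```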

First-order methods assume that f is continuously differentiable, and that we can evaluate not only f but also its derivative.
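
A similarly minimal sketch of a first-order approach is bisection on the derivative: assuming f is continuously differentiable and its derivative changes sign from negative to positive on the bracket [a, b], repeatedly halving the bracket locates a stationary point. The names and tolerance below are illustrative.

```python
def bisection_line_search(df, a, b, tol=1e-8):
    """First-order line search: find a point where the derivative df vanishes,
    assuming df(a) < 0 < df(b) so a local minimum is bracketed in [a, b]."""
    while b - a > tol:
        m = (a + b) / 2
        if df(m) < 0:   # f still decreasing: minimum lies to the right of m
            a = m
        else:           # f increasing: minimum lies to the left of m
            b = m
    return (a + b) / 2

# Example: f(x) = (x - 2)^2 + 1 has derivative 2*(x - 2)
x_min = bisection_line_search(lambda x: 2 * (x - 2.0), 0.0, 4.0)
```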

Curve-fitting methods try to attain superlinear convergence by assuming that f has some analytic form, e.g. a polynomial of finite degree.[1]: sec.5

At each iteration, there is a set of "working points" at which we know the value of f (and possibly also its derivative).

Based on these points, we can compute a polynomial that fits the known values, and find its minimum analytically.

The minimum point becomes a new working point, and we proceed to the next iteration.[1]: sec.5 Curve-fitting methods have superlinear convergence when started close enough to the local minimum, but might diverge otherwise.
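
As a sketch of this idea, the following code performs successive parabolic interpolation: each iteration fits a quadratic through three working points and takes its vertex as the new working point. It omits the safeguards (bracket maintenance, fallback steps) that practical curve-fitting methods add to avoid the divergence mentioned above; all names are illustrative.

```python
def parabolic_interpolation_step(f, x0, x1, x2):
    """One curve-fitting iteration: fit a parabola through three working
    points (x_i, f(x_i)) and return the abscissa of its vertex."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den  # vertex of the fitted parabola

def curve_fitting_search(f, x0, x1, x2, iterations=20):
    """Repeat the parabolic step, replacing the oldest working point each
    time; converges superlinearly when started near the minimum, but has
    no safeguards against divergence or collinear points."""
    pts = [x0, x1, x2]
    for _ in range(iterations):
        x_new = parabolic_interpolation_step(f, *pts)
        pts = pts[1:] + [x_new]   # new working point replaces the oldest
    return pts[-1]
```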

The line-search method first finds a descent direction p along which the objective function f will be reduced, and then computes a step size that determines how far x should move along that direction. The step size can be determined exactly, by minimizing the one-dimensional function h(α) = f(x + αp) over the step size α.

It can also be solved loosely, by asking for a sufficient decrease in h that does not necessarily approximate the optimum.

The latter is called inexact line search and may be performed in a number of ways, such as a backtracking line search or using the Wolfe conditions.
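
For concreteness, here is a minimal sketch of a backtracking line search enforcing the Armijo sufficient-decrease condition; the parameter values (initial step 1.0, shrink factor 0.5, constant c = 1e-4) are common illustrative choices, not prescribed by the text.

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Inexact line search: shrink the step size alpha until the Armijo
    sufficient-decrease condition
        f(x + alpha*p) <= f(x) + c * alpha * grad_f(x)^T p
    holds, where p is a descent direction at x."""
    alpha = alpha0
    fx = f(x)
    slope = np.dot(grad_f(x), p)   # h'(0); negative for a descent direction
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho               # backtrack: try a smaller step
    return alpha

# Example: one gradient-descent step on f(x) = ||x||^2
f = lambda x: np.dot(x, x)
grad_f = lambda x: 2 * x
x = np.array([3.0, -4.0])
p = -grad_f(x)                     # steepest-descent direction
alpha = backtracking_line_search(f, grad_f, x, p)
x_next = x + alpha * p
```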

Like other optimization methods, line search may be combined with simulated annealing to allow it to jump over some local minima.
