The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.
In both contexts (mathematical optimization and computer programming) it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner.
[2] In practice, this generally requires numerical techniques for some discrete approximation to the exact optimization relationship.
Alternatively, the continuous process can be approximated by a discrete system, which leads to the following recurrence relation, analogous to the Hamilton–Jacobi–Bellman equation:

V_{k-1}(x_{k-1}) = \min_{u_{k-1}} \{ \hat{f}(x_{k-1}, u_{k-1}) + V_k(\hat{g}(x_{k-1}, u_{k-1})) \}

at the k-th stage of n equally spaced discrete time intervals, where \hat{f} and \hat{g} denote discrete approximations to f and g.
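As a minimal sketch of how this recurrence can be evaluated by backward induction, assume a finite state grid xs, a finite control set us, illustrative cost and transition functions f_hat and g_hat (with g_hat always landing back on the grid), and zero terminal values; all of these names and assumptions are for illustration only, not part of the original formulation:

```python
def backward_induction(xs, us, f_hat, g_hat, n):
    # Assumed setup: xs is a finite state grid, us a finite control set,
    # f_hat(x, u) a stage cost, g_hat(x, u) a transition that lands on xs.
    V = {x: 0.0 for x in xs}  # terminal condition V_n(x) = 0 (an assumption)
    for _ in range(n):        # sweep backwards through stages n-1, ..., 0
        V = {x: min(f_hat(x, u) + V[g_hat(x, u)] for u in us) for x in xs}
    return V                  # approximate V_0 over the state grid
```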
The dynamic programming approach to solving this problem involves breaking it apart into a sequence of smaller decisions.
The value of any quantity of capital at any previous time can be calculated by backward induction using the Bellman equation.
Intuitively, instead of choosing his whole lifetime plan at birth, the consumer can take things one step at a time.
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems.
If a problem can be solved by combining optimal solutions to non-overlapping sub-problems, the strategy is called "divide and conquer" instead; this is why merge sort and quicksort are not classified as dynamic programming problems.
Even though the total number of sub-problems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution.
This can be achieved in either of two ways:[4] a top-down approach, which memoizes the results of the recursive formulation, or a bottom-up approach, which solves the sub-problems first and assembles their solutions into answers for larger and larger sub-problems. Some programming languages can automatically memoize the result of a function call with a particular set of arguments, in order to speed up call-by-name evaluation (this mechanism is referred to as call-by-need).
Dynamic programming is widely used in bioinformatics for tasks such as sequence alignment, protein folding, RNA structure prediction and protein-DNA binding.
The first dynamic programming algorithms for protein-DNA binding were developed in the 1970s independently by Charles DeLisi in the US[6] and by Georgii Gurskii and Alexander Zasedatelev in the Soviet Union.
[7] Recently these algorithms have become very popular in bioinformatics and computational biology, particularly in the studies of nucleosome positioning and transcription factor binding.
From a dynamic programming point of view, Dijkstra's algorithm for the shortest path problem is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method.
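The successive-approximation view can be made concrete with a standard heap-based implementation (a sketch, assuming an adjacency-list input format): each relaxation step improves the current approximation to the functional equation d(v) = min over edges (u, v) of (d(u) + w(u, v)).

```python
import heapq

def dijkstra(graph, source):
    # graph: {u: [(v, w), ...]} with non-negative edge weights (assumed format)
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                        # skip stale queue entries
        for v, w in graph.get(u, []):
            nd = d + w                      # try to "reach" v through u
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                # improved approximation for v
                heapq.heappush(pq, (nd, v))
    return dist                             # shortest distances from source
```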
Using dynamic programming in the calculation of the nth member of the Fibonacci sequence improves its performance greatly.
In larger examples, many more values of fib, or subproblems, are recalculated, leading to an exponential time algorithm.
The resulting function requires only O(n) time instead of exponential time (but requires O(n) space). This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values.
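A minimal hand-rolled version of this top-down approach in Python might look as follows (the shared dictionary serves as the memoization cache):

```python
def fib(n, memo={}):            # the shared default dict is the cache (deliberate)
    if n in memo:
        return memo[n]          # value already computed: O(1) lookup
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    memo[n] = result            # store before returning
    return result

print(fib(90))                  # 2880067194370816120, in O(n) time and space
```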
The function f to which memoization is applied maps vectors of n pairs of integers to the number of admissible boards (solutions).
Otherwise, we have an assignment for the top row of the k × n board and recursively compute the number of solutions to the remaining (k − 1) × n board, adding the numbers of solutions for every admissible assignment of the top row and returning the sum, which is memoized.
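A sketch of this memoized count in Python, for the balanced case where every row and column of an n × n board must contain exactly n/2 ones (function and variable names are illustrative):

```python
from functools import lru_cache
from itertools import combinations

def count_balanced_boards(n):
    half = n // 2  # each row and column needs exactly n/2 ones (n assumed even)

    @lru_cache(maxsize=None)
    def f(state):  # state: per-column pairs (zeros still needed, ones still needed)
        if all(z == 0 and o == 0 for z, o in state):
            return 1                           # base case: board completely filled
        total = 0
        for ones_cols in map(set, combinations(range(n), half)):
            new_state = []                     # assignment for the current top row
            for j, (z, o) in enumerate(state):
                z, o = (z, o - 1) if j in ones_cols else (z - 1, o)
                new_state.append((z, o))
            if all(z >= 0 and o >= 0 for z, o in new_state):
                total += f(tuple(new_state))   # recurse on the remaining rows
        return total                           # memoized by lru_cache

    return f(tuple((half, half) for _ in range(n)))

print(count_balanced_boards(4))  # 90 admissible 4 × 4 boards
```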
However, we can compute it much faster in a bottom-up fashion if we store path costs in a two-dimensional array q[i, j] rather than using a function.
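A bottom-up sketch under the usual checkerboard setup, assuming c[i][j] is the cost of entering square (i, j) and that a square can be reached from the square directly below it or from its two diagonal neighbors below (the cost array and boundary handling are illustrative):

```python
def min_path_cost(c):
    n = len(c)
    q = [[0] * n for _ in range(n)]
    q[0] = list(c[0])                     # first rank: just the entry costs
    for i in range(1, n):
        for j in range(n):
            best = q[i - 1][j]            # came from directly below
            if j > 0:
                best = min(best, q[i - 1][j - 1])  # from below-left
            if j < n - 1:
                best = min(best, q[i - 1][j + 1])  # from below-right
            q[i][j] = best + c[i][j]      # cheapest way to enter (i, j)
    return min(q[n - 1])                  # cheapest cost to reach the last rank
```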
[12] Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element.
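The classic instance is the edit (Levenshtein) distance, computed by the Wagner–Fischer dynamic program; a compact sketch with unit costs for all three operations:

```python
def edit_distance(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                        # delete everything from a[:i]
    for j in range(n + 1):
        d[0][j] = j                        # insert everything from b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # remove a[i-1]
                          d[i][j - 1] + 1,          # insert b[j-1]
                          d[i - 1][j - 1] + cost)   # replace (or match)
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```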
The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.
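The recursive sub-problem structure of the puzzle (move the n − 1 smaller disks aside, move the largest disk, then restack the n − 1 disks on top) can be written down directly; a small sketch:

```python
def hanoi(n, source="A", target="C", spare="B"):
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)     # clear the n-1 smaller disks
            + [(source, target)]                    # move the largest disk
            + hanoi(n - 1, spare, target, source))  # restack the smaller disks

print(len(hanoi(4)))  # 15 moves; in general 2**n - 1
```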
[13] The following is a description of the instance of this famous puzzle involving N=2 eggs and a building with H=36 floors:[14] To derive a dynamic programming functional equation for this puzzle, let the state of the dynamic programming model be a pair s = (n,k), where n denotes the number of test eggs available and k the number of (consecutive) floors yet to be tested. For instance, s = (2,6) indicates that two test eggs are available and 6 (consecutive) floors are yet to be tested.
Now, let W(n,k) be the minimum number of trials required to identify the value of the critical floor under the worst-case scenario, given that the process is in state s = (n,k). Then it can be shown that[15]

W(n,k) = 1 + min{ max(W(n − 1, x − 1), W(n, k − x)) : x = 1, 2, ..., k }

with W(n,0) = 0 for all n > 0 and W(1,k) = k for all k. It is easy to solve this equation iteratively by systematically increasing the values of n and k. Notice that the above solution takes O(nk²) time.
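Solving the recurrence bottom-up in Python is straightforward; for the famous N=2, H=36 instance it gives 8 trials:

```python
def egg_drop(n, k):
    # W[i][j]: minimum worst-case trials with i eggs and j floors to test
    W = [[0] * (k + 1) for _ in range(n + 1)]
    for j in range(k + 1):
        W[1][j] = j                        # one egg: test floor by floor
    for i in range(2, n + 1):
        for j in range(1, k + 1):
            W[i][j] = 1 + min(
                max(W[i - 1][x - 1],       # egg breaks at floor x
                    W[i][j - x])           # egg survives floor x
                for x in range(1, j + 1))
    return W[n][k]

print(egg_drop(2, 36))  # 8
```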
Unraveling the solution will be recursive, starting from the top and continuing until we reach the base case, i.e. multiplication of single matrices.
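A sketch of the matrix chain multiplication dynamic program with this recursive unraveling, assuming dims[i-1] × dims[i] is the shape of matrix i (the names are illustrative):

```python
def matrix_chain(dims):
    n = len(dims) - 1
    # m[i][j]: minimal scalar multiplications for matrices i..j
    # s[i][j]: split point used to reconstruct the parenthesization
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k

    def unravel(i, j):
        # recursive unraveling down to single matrices, as described above
        if i == j:
            return f"A{i}"
        return "(" + unravel(i, s[i][j]) + unravel(s[i][j] + 1, j) + ")"

    return m[1][n], unravel(1, n)

print(matrix_chain([10, 30, 5, 60]))  # (4500, '((A1A2)A3)')
```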
Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation.
Let's take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense.
So I used it as an umbrella for my activities. The word dynamic was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive.
"[19] Also, Harold J. Kushner stated in a speech that, "On the other hand, when I asked [Bellman] the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic.