Dynamic Programming In Action
Dynamic programming (DP) actually predates computer programming—here program means a schedule or plan of action, because dynamic programming has applications to scheduling and similar tasks. (Dynamic was chosen, and I'm being serious here, because it sounded good and was something that the US government officials funding its research wouldn't think of as too mathy.)
The idea is pretty simple. Instead of recursing down from n toward 0 and saving results as we go, we go up from 0 and continually fill in our table of values.
We start with
| n | fib(n) |
|---|--------|
| 0 | 0 |
| 1 | 1 |
| 2 |   |
| 3 |   |
| ... | ... |
and, each step, fill in the next value using the previous two. Our next step makes the table look like this:
| n | fib(n) |
|---|--------|
| 0 | 0 |
| 1 | 1 |
| 2 | 1 |
| 3 |   |
| ... | ... |
This will perform exactly the same number of additions as a memoized approach, and is theoretically equivalent. The advantage is that we won't run into any issues with stack size, because we're just storing the answers instead of storing a bunch of data about the function calls.
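Here's a minimal sketch of that table-filling approach in Python. The function name and structure are my own; the point is that a plain loop fills the table from the bottom up, so there are no recursive calls to overflow the stack.

```python
def fib(n):
    # Bottom-up DP: fill the table from 0 upward instead of
    # recursing down from n. A plain loop, so no stack-depth issues.
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

fib(22)  # 17711
```

Each entry is computed exactly once from the two entries before it, which is the same number of additions a memoized recursive version would perform.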
In classical DP, we store every value. The Fibonacci sequence has the convenient property that each value depends only on the previous two, so we don't actually need to store every value. We can just store the most recent two. Partway through a larger computation, that looks something like this:
| n | fib(n) |
|---|--------|
| ... | ... |
| 20 | 6765 |
| 21 | 10946 |
| 22 | 17711 |
| ... | ... |
Although the number of additions we perform will be the same, in practice this tends to be faster than recursion. Every function call in Python carries overhead (Python has to build a new stack frame and dispatch to the function's code), and a plain loop cuts out that unnecessary work. Let's see it in action!
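The two-value version described above might look like this; the variable names are my own choice, but the technique is exactly the sliding window of two values from the table:

```python
def fib(n):
    # Keep only the two most recent values instead of the whole table.
    a, b = 0, 1  # a = fib(0), b = fib(1)
    for _ in range(n):
        a, b = b, a + b  # slide the window forward one step
    return a

fib(21)  # 10946
```

This uses constant memory no matter how large n gets, while still performing the same n - 1 additions as the full-table version.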