“Dude! Can you count this money?” Say the money consists entirely of 100-rupee notes, and it sums up to 1900. Now I hand you one more hundred-rupee note. What is the total now? 2000? Did you count all the money again, or just add a hundred to the existing sum? My guess is you added! How did you do that? You had memorized the earlier sum, right? Yes, that is what you have to take away for Dynamic Programming: not 2000, but the “memorizing” concept. Let’s see how and why!
Formal Definition:
Dynamic Programming is a technique for obtaining an optimal solution to a problem by breaking it down into subproblems. It relies on the fact that the optimal solutions to the subproblems combine to give an optimal solution to the bigger problem.
Dynamic programming uses the principle of optimality, which states that the optimal solution to a problem can be constructed from a sequence of decisions. The greedy method applies a single fixed rule to obtain a solution, whereas dynamic programming uses the optimality principle and makes a decision at every step of execution.
Not entirely clear yet? Don’t worry. We will now break this bigger problem of understanding into some simpler concepts.
Prerequisites Zone:
Fibonacci series: A series in which each term is the sum of the previous two terms. The first two terms are always 0 and 1.
Series: 0, 1, 1 (0+1), 2 (1+1), 3 (1+2), 5 (2+3), …
Suppose I have to write code to print the first few terms of the Fibonacci series. There are many ways to write it, and we will look at some of them alongside the concepts of Dynamic Programming.
Characteristics of Dynamic Programming:
Before we actually look into the solving methods of Dynamic programming (DP), let’s see how we can identify if a problem can be solved using DP.
Overlapping subproblems:
We discussed earlier that a bigger problem can be subdivided into smaller problems. While dividing it, we may find that the same subproblem shows up more than once. These repeated occurrences of the same subproblem are known as overlapping subproblems.

From Figure 1, we can observe that fib(2), fib(3), fib(0), and fib(1) are called and evaluated multiple times: overlapping subproblems exist here.
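We can verify this overlap directly by instrumenting the recursive function with a call counter (the `calls` counter below is an illustrative addition, not part of the original code):

```python
from collections import Counter

calls = Counter()  # tracks how many times each subproblem is evaluated

def fib(n):
    calls[n] += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
print(dict(calls))  # fib(2) is evaluated 3 times, fib(1) 5 times
```

Computing fib(5) naively triggers 15 function calls in total, even though there are only six distinct subproblems, which is exactly the redundancy DP removes.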
Optimal substructure Property:
A problem is said to possess this property if and only if an optimal solution to the problem can be obtained from the optimal solutions of its subproblems.
While implementing the code in a recursive manner, the function is decomposed as follows:
Fib(n-2) + Fib(n-1)
The optimal solutions to these subproblems together provide the optimal solution to the bigger problem.
Recursive way of solving the Fibonacci series (not DP)
# Code in Python 3
def Fibonacci(n):
    if n < 2:
        return n
    return Fibonacci(n - 1) + Fibonacci(n - 2)  # recursion to obtain the solution

def main():
    print("Following is the Fibonacci series: ")
    for i in range(6):
        print(Fibonacci(i), end=" ")

main()
Methods of Dynamic Programming:
Top-down with Memoization:
Did you read it as “memorization,” or think it was a typo? Well, that kind of thought actually makes the concept easy to remember (though the two terms mean different things). As we have already discussed, we solve the problem by dividing it into subproblems. If you observed the figure, there are overlapping subproblems whose repeated work can be eliminated here.
Solving using Memoization (DP)
# Code in Python 3
def calculateFibonacci(n):
    memoize = [-1 for x in range(n + 1)]  # list to cache the results
    return Fibonacci(memoize, n)

def Fibonacci(memoize, n):
    if n < 2:
        return n
    # if we have already solved this subproblem, simply return the result from the cache
    if memoize[n] >= 0:
        return memoize[n]
    memoize[n] = Fibonacci(memoize, n - 1) + Fibonacci(memoize, n - 2)  # recurse and cache
    return memoize[n]

def main():
    for i in range(6):
        print(calculateFibonacci(i), end=" ")

main()
We use recursion to find the solution, as in the above code. Whenever we solve a subproblem, we cache the result in a list so that it can be reused when the same subproblem is encountered again. This reduces the time complexity from exponential to linear, O(n), while still producing the optimal solution.
Bottom-up with Tabulation:
Tabulation is the opposite of memoization and avoids recursion. We fill up a table starting from the smallest subproblems, and the solution to the top problem is read from the completed table. Memoization works the other way around: we start with the top problem and recurse down to its subproblems. This is why memoization is top-down and tabulation is bottom-up.
Tabulation Method (DP)
# Code in Python 3
def Fibonacci(n):
    Table = [0, 1]  # table to store the results
    for i in range(2, n + 1):  # iterative method to obtain the results
        Table.append(Table[i - 1] + Table[i - 2])
    return Table[n]

def main():
    for i in range(6):
        print(Fibonacci(i), end=" ")

main()
We use the iterative method to find the results of the subproblems, as in the above code. Since each term in the series is the sum of the previous two, the table can be filled from the bottom up, starting with the base cases 0 and 1.
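Notice that the loop only ever reads the last two entries of the table, so the whole list is not strictly needed. A minimal sketch of a constant-space variant (the function name `fibonacci_constant_space` is my own, for illustration):

```python
def fibonacci_constant_space(n):
    # Only the last two computed terms are needed at any step,
    # so two variables replace the whole table.
    if n == 0:
        return 0
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print([fibonacci_constant_space(i) for i in range(6)])  # [0, 1, 1, 2, 3, 5]
```

This keeps the O(n) time of tabulation but drops the space from O(n) to O(1), a common refinement once the table-based version works.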
Differences between Memoization and Tabulation:
| Memoization | Tabulation |
| --- | --- |
| Easy to implement and less complex. | Complex to implement, comparatively. |
| Entries are filled in the lookup list on demand. | All the entries in the table are filled one by one, starting from the first. |
| Has the advantage when not all subproblems need to be solved. | If all the subproblems are to be solved at least once, tabulation is preferable. |
| Comparatively slow, as recursion has to be performed many times. | Fast, as the result can be accessed directly from the table. |
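As a side note, in Python you do not even have to hand-roll the memoization cache: the standard library’s `functools.lru_cache` decorator caches results of a recursive function automatically. A quick sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every subproblem result, no eviction
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(6)])  # [0, 1, 1, 2, 3, 5]
```

This behaves like the explicit `memoize` list above, with the cache keyed on the function arguments instead of list indices.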
Memoization: It is the process of calculating a value once, storing it, and recalling it when required.
Memorization: It is the process of committing something to memory ahead of time and recalling it whenever required. Let’s understand with an example.
If I ask you the value of 2^12 and you instantly say 4096, you have memorized the value. If instead you work it out once, tell me the answer, and then remember it for the next time I ask, you have done memoization.