
A Comprehensive Guide to Dynamic Programming

Published: 14th Jul, 2023 | Read it in: 12 Mins

The divide and conquer approach can help you succeed at practically anything, and the same principle applies in the world of programming. Many problems can be broken down into smaller variations of themselves and solved individually to arrive at a solution to the original problem. 

In this blog, we are going to learn what dynamic programming (DP) is, how to get started, and where it is applied. Before we plunge into dynamic programming, a word of caution: many newcomers make the mistake of jumping straight into advanced techniques like dynamic programming without a good grasp of core programming concepts. For the implementations, you can use any programming language, such as Python, Java, C++, or JavaScript. You can find a variety of Programming courses to build your foundation.  

What is Dynamic Programming

Dynamic programming is a technique of breaking down a problem into smaller sub-problems, solving each sub-problem only once, storing the solutions of these sub-problems, and eventually using them to arrive at a solution to the original problem. 

We break down a big problem into smaller problems. Typically, the smaller problems are similar to the parent problem, the only difference being the scale. These sub-problems can, in turn, be divided into even smaller sub-problems until we reach problems that cannot be divided further. You can picture this as a tree of the problem and its sub-problems. We start by solving the "leaf"-level problems, then move on to their "parent" problems, and so on. We save the results as we solve each sub-problem for future reference, thereby avoiding redundant work if the same sub-problem is encountered again. 

This approach resembles the divide and conquer algorithm, where a problem is divided into sub-problems, the sub-problems are solved recursively, and their solutions are combined to obtain the solution to the original problem.

Dynamic Programming Characteristics

It is important to know when to use dynamic programming algorithms. There are two major characteristics to identify whether dynamic programming is the right fit. 

1. Optimal Substructure  

The problem should have the optimal substructure property: the optimal solution can be built from the optimal solutions of its sub-problems. This also helps you define the base case of the recursive algorithm. 

Consider the example of the Fibonacci series, where we define the nth number as the sum of the previous two numbers: 

Fib(n) = Fib(n-1) + Fib(n-2)  

We can see that a problem of size “n” can be broken down into sub-problems of size “n-1” and “n-2”. We also know the solutions of the base cases, i.e., Fib(0) = 0 and Fib(1) = 1.   
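
As a quick illustration, here is a minimal sketch of this recurrence written as a plain recursive function (in Python here, though any language works): 

    def fib(n): 
        # base cases: Fib(0) = 0 and Fib(1) = 1 
        if n < 2: 
            return n 
        # optimal substructure: Fib(n) is built from the solutions of 
        # the two smaller sub-problems Fib(n-1) and Fib(n-2) 
        return fib(n - 1) + fib(n - 2) 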

2. Overlapping Sub-problems 

The other necessary property is overlapping sub-problems. A problem is said to have the overlapping sub-problems property if its recursive solution ends up visiting the same sub-problems over and over. In such cases, we can improve the performance of the algorithm by storing the result of each sub-problem once it is calculated.

Fibonacci dynamic programming

As seen in the Fibonacci recursion tree above, several sub-problems like fib(4), fib(3), fib(2), and so on occur multiple times. 
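
Caching each result the first time it is computed removes this repeated work. A minimal memoized sketch of the same function (again in Python) might look like this: 

    def fib_memo(n, cache=None): 
        # store each Fib(i) the first time it is computed, so repeated 
        # sub-problems such as fib(4), fib(3), and fib(2) are solved only once 
        if cache is None: 
            cache = {} 
        if n < 2: 
            return n 
        if n not in cache: 
            cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache) 
        return cache[n] 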

Note that both properties, optimal substructure and overlapping sub-problems, are required for a problem to be solvable with dynamic programming. 

Example of Dynamic Programming

One can easily find a lot of dynamic programming examples on the internet. We will discuss one of the popular examples here. 

Consider a rod of length n inches and an array of prices that includes prices of all pieces of size smaller than n. We need to determine the maximum sum of money we can make by cutting up the rod and selling the pieces.  

length   | 1   2   3 

-------------------- 

price    | 1   5   8 

With the above set of prices, if the length of the rod is 4, we can get a maximum value of 10 by cutting the rod into two pieces of length 2. 

The image below shows that the problem can be broken down into smaller sub-problems, which can further be broken down into smaller sub-problems. We also know the solution of the base case, i.e., the price of length 0 is 0.  This depicts the property of optimal substructure. 

We can also see that the same sub-problems (highlighted in color) are being repeated. This confirms that the problem has an overlapping sub-problem characteristic.

dynamic programming examples

To solve this problem, we divide the rod of length n into two parts: i and n-i. We repeat this process for the second part, dividing n-i further in the same fashion. We store the maximum profit for each length i of the rod; in the end, the value stored for the full length n is the maximum profit we are looking for. 

Here is a code snippet in Java that gives you an idea of how dynamic programming is implemented in Java. 

    public static int rodCut(int[] price, int n) 
    { 
        // `T[i]` stores the maximum profit achieved from a rod of length `i` 
        int[] T = new int[n + 1]; 
  
        // consider a rod of length `i` 
        for (int i = 1; i <= n; i++) 
        { 
            // divide the rod of length `i` into two rods of length `j` 
            // and `i-j` each and take maximum 
            for (int j = 1; j <= i; j++) { 
                T[i] = Integer.max(T[i], price[j - 1] + T[i - j]); 
            } 
        } 
  
        // `T[n]` stores the maximum profit achieved from a rod of length `n` 
        return T[n]; 
    } 

Dynamic Programming Techniques

There are two ways of implementing a dynamic programming algorithm. 

Top-Down Approach

This approach solves the bigger problem by recursively solving smaller sub-problems. As we solve the sub-problems, we store the result for later use. This way, we don’t need to solve the same sub-problem more than once. This method of saving the intermediate results is called Memoization (not memorization). 
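
To make this concrete, here is a rough top-down sketch of the rod-cutting problem from the earlier section, written in Python for brevity. As in the Java snippet above, it assumes price[i - 1] holds the price of a piece of length i and that prices are available for every length up to n: 

    from functools import lru_cache 

    def rod_cut_top_down(price, n): 
        # price[i - 1] is assumed to hold the price of a piece of length i 
        @lru_cache(maxsize=None) 
        def best(length): 
            # base case: a rod of length 0 earns nothing 
            if length == 0: 
                return 0 
            # try every first cut of length j and memoize the best result 
            return max(price[j - 1] + best(length - j) for j in range(1, length + 1)) 
        return best(n) 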

Bottom-Up Approach

The bottom-up method is an iterative counterpart of the top-down approach. It starts with the smallest sub-problems and works upwards to the largest ones, so when solving a particular sub-problem, we already have the results of the smaller sub-problems it depends on. The results are stored in an n-dimensional (n >= 0) table. By the time we arrive at the original problem, all of its sub-problems have been solved, and we just use the stored results to find the best solution. This method is called Tabulation. 
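
The rodCut method shown earlier is itself a tabulation example. As an even smaller sketch, here is the Fibonacci sequence computed bottom-up (again in Python): 

    def fib_bottom_up(n): 
        if n < 2: 
            return n 
        # table[i] holds Fib(i); fill it from the smallest sub-problem upwards 
        table = [0] * (n + 1) 
        table[1] = 1 
        for i in range(2, n + 1): 
            table[i] = table[i - 1] + table[i - 2] 
        return table[n] 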

Which one is better?

  • The top-down approach is typically recursive. It has the overhead of recursive calls and therefore tends to be slower than the bottom-up approach. 
  • One might find the top-down approach easier to implement, because we use an array or some other lookup table to store results during recursion, while for the bottom-up approach we need to define the order of iteration and an n-dimensional table for storing the results. 
  • The top-down approach might also run into stack overflow conditions in the case of a very deep recursion tree. 

Steps to Solve Dynamic Programming Problems

We will understand the steps with a popular example: The coin change problem with dynamic programming.  

You are given coins of varying denominations and asked to pay a certain amount with the fewest coins possible. How do you write a program for this? 

1. Identify the sub-problem and write it down in words 

Start by defining the problem statement in programmable constructs. 

There is an array of coins with varying denominations and an integer sum representing the total amount of money. We need to return the minimum number of coins (values from the array) required to make up that sum. If the sum cannot be made up with the given denominations, return -1. We will assume that an infinite supply of coins is available for each denomination. 

Now we break the problem down into smaller variations. Start by assuming concrete values for which you know the solution. For example, suppose the sum is 40 and the denominations are {1, 5, 10, 25}. If you work it out on paper, you can see that you need three coins: 25, 10, and 5. There are other possible, but not optimal, solutions like {5, 5, 5, 5, 5, 5, 5, 5}, {10, 10, 10, 5, 5}, and so on. 

To find the sub-problem, observe that any amount can be expressed as the sum of two smaller amounts, and those amounts can in turn be expressed as sums of even smaller amounts.  

The smallest denomination, 1, is available, so any amount n can be expressed as 1 + (n - 1).   

2. Express the sub-problem as a mathematical recurrence 

In the above case, the sub-problem can be expressed as: 

min_coins(40) = min_coins(40 - c) + 1 

where c is one of the allowed denominations; we evaluate this for every denomination and keep the minimum. With the denominations {1, 5, 10, 25}, for example, min_coins(40) = 1 + min(min_coins(39), min_coins(35), min_coins(30), min_coins(15)). 

This equation can be made generic by replacing 40 with n. 

min_coins(n) = min_coins(n - c) + 1  

3. Define the memoization array and a strategy to fill it 

We know that the problem has characteristics of overlapping sub-problems. We can use the memoization technique to cache the results of sub-problems for later use. 

In this case, we can simply use an array whose length equals the given amount. We will store the minimum number of coins required for a particular sub-amount at the index corresponding to that value. This makes it easy to fetch the result when required. 

4. Coding the solution 

While coding the algorithm, one can start with the initialization of the array (or cache) if required. Next, one should set the base case. Each problem can be solved in multiple ways using the dynamic programming approach. You need to think about which one suits you. 

Below is a Python implementation of the problem discussed above. However, dynamic programming algorithms can be implemented in any language. If you want to use Python, a Python Programming certification is a great starting point. 

    from typing import List 

    def coin_change_sub(coins: List[int], amount: int, solutions: List[int]) -> int: 
        # base cases 
        if amount < 0: 
            return -1 
        if amount == 0: 
            return 0 
        # return the cached result if this sub-amount was already solved 
        if solutions[amount - 1] != 0: 
            return solutions[amount - 1] 
        optimal_solution = float('inf') 
        # try every denomination as the last coin and keep the best outcome 
        for coin in coins: 
            best_solution_for_coin = coin_change_sub(coins, amount - coin, solutions) 
            if 0 <= best_solution_for_coin < optimal_solution: 
                optimal_solution = best_solution_for_coin + 1 
        # cache -1 if the amount cannot be formed, otherwise the minimum coin count 
        if optimal_solution == float('inf'): 
            solutions[amount - 1] = -1 
        else: 
            solutions[amount - 1] = optimal_solution 
        return solutions[amount - 1] 

    def coin_change(coins: List[int], amount: int) -> int: 
        if amount < 1: 
            return 0 
        else: 
            return coin_change_sub(coins, amount, [0] * amount)
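
As a quick sanity check, the functions above can be exercised with the example values discussed earlier (the second call uses made-up values to show the -1 case): 

    # example usage of the functions above 
    print(coin_change([1, 5, 10, 25], 40))   # prints 3  (25 + 10 + 5) 
    print(coin_change([5, 10], 3))           # prints -1 (3 cannot be formed) 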

Advantages of Dynamic Programming

  1. Dynamic programming can be used to obtain both locally and globally optimal solutions. 
  2. Dynamic programming algorithms are generally compact pieces of code, as they are built around a recursive formulation. 
  3. Both linear and non-linear problems can be solved using dynamic programming. 
  4. Dynamic programming algorithms are easy to debug. 

Disadvantages of Dynamic Programming

  1. The top-down approach uses recursion, which requires more memory in the call stack and can lead to a stack overflow condition at runtime. 
  2. It takes memory to store the solution of every sub-problem, and there is no guarantee that a stored value will actually be used later in the execution. 
  3. High memory usage might lead to degraded performance; this depends on the dynamic programming algorithm and the programming language. For Java, you can do a Java certification to be able to use Java efficiently. 

Conclusion

One might find dynamic programming a bit intimidating initially. But if one understands the basics well, one can master dynamic programming problems. Having a strong programming foundation is key to getting comfortable with such problems. Applications of dynamic programming are common and relevant to everyday challenges, and mastering dynamic programming gives you the superpower to tackle them. 

Frequently Asked Questions (FAQs)

1. Is dynamic programming used in real life?

There are numerous applications of dynamic programming in real life, such as finding the shortest path between a source and multiple destinations. Git merge uses dynamic programming to find the longest common subsequence. There are other applications like image processing, optimal inventory management, production optimization, genetic algorithms, and matrix chain multiplication; the list is endless.

2. What is the difference between recursion and dynamic programming?

Recursion means a function calling itself. When a problem is solved using plain recursion, the same sub-problems might have to be solved multiple times. Dynamic programming, on the other hand, is a technique where you store the results of previous calculations to avoid computing the same sub-problem again.

3. Which algorithms use a dynamic programming approach?

Many algorithms aimed at solving optimization problems use a dynamic programming approach. Examples include string algorithms such as the longest common subsequence, the longest increasing subsequence, and the longest palindromic substring; optimizing the order of chain matrix multiplication; and the Bellman-Ford algorithm for finding the shortest paths in a graph.


Paresh Patil

Author

Paresh is a software developer by profession with major experience in big data and backend development. Along with writing quality code and implementing robust system designs, he takes a keen interest in generating maximum value for the end user. He loves "chai" and trekking.
