Dynamic Programming - An Introduction

Dynamic programming is an effective and efficient way to solve a particular class of programming problems. Such problems exhibit the following two properties:

  • Optimal Substructure
  • Overlapping subproblems


A problem has optimal substructure if the complete problem can be optimally solved by combining optimal solutions to its subproblems. In other words, if the problem can be written down as some sort of recurrence relation / dependency equation over smaller subproblems, then the problem contains optimal substructure. To understand it better, let us pick up a fairly common problem - the Fibonacci series.

Fibonacci series goes like this - 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
To see if this problem contains optimal substructure or not, we need to figure out if we can divide the problem into subproblems and whether solving those subproblems will give us the optimal solution.

Let us consider F(n) to be the nth Fibonacci number, with F(0) = 0 and F(1) = 1 as the exceptions (let us call them base cases). All the remaining terms follow this recurrence relation:

F(n) = F(n-1) + F(n-2)

F(2) = F(1) + F(0) = 1 + 0 = 1
F(3) = F(2) + F(1) = 1 + 1 = 2
F(4) = F(3) + F(2) = 2 + 1 = 3
F(5) = F(4) + F(3) = 3 + 2 = 5
F(6) = F(5) + F(4) = 5 + 3 = 8
F(7) = F(6) + F(5) = 8 + 5 = 13
F(8) = F(7) + F(6) = 13 + 8 = 21
F(9) = F(8) + F(7) = 21 + 13 = 34


So, it is quite evident that the nth term of the series can easily be calculated from the results of the (n-1)th and (n-2)th terms. This is a perfect example of optimal substructure, where the solution to a problem is built from the solutions to its smaller subproblems. The natural way to solve such problems is to use recursion:


int fib(int n)
{
    /* guard against invalid input */
    if (n < 0)
        return 0;

    /* base cases: F(0) = 0, F(1) = 1 */
    if (n < 2)
        return n;

    /* recurrence: F(n) = F(n-1) + F(n-2) */
    return fib(n - 1) + fib(n - 2);
}


Now, let us try to understand what is meant by a problem having overlapping subproblems and how Dynamic Programming can help us in such situations. Again, let us head back to our previous example of the Fibonacci series. If you observe closely, you will find that the recursive solution ends up solving the same subproblem multiple times. For example, we evaluate F(2) while evaluating both F(3) and F(4); we evaluate F(3) while evaluating both F(4) and F(5); and so on. That means a lot of recursive function calls for the same subproblem again and again - in fact, the total number of calls grows exponentially with n, so from a performance angle it isn't optimized by any means. See the following figure for reference:


[Figure: recursion tree of the recursive Fibonacci calls, with the same subproblems appearing on multiple branches]
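
If you prefer numbers to the picture, here is a small instrumented sketch (the calls counter is our addition, purely for illustration) that tallies how often each subproblem gets evaluated by the plain recursive version:

#include <stdio.h>

int calls[16];   /* calls[i] counts how many times fib(i) is invoked */

int fib(int n)
{
    if (n < 0)
        return 0;

    calls[n]++;

    if (n < 2)
        return n;

    return fib(n - 1) + fib(n - 2);
}

int main()
{
    int i;

    fib(10);
    for (i = 0; i <= 10; i++)
        printf("fib(%d) was evaluated %d times\n", i, calls[i]);

    return 0;
}

For n = 10 alone, fib(2) gets evaluated 34 times - and the duplication itself grows like the Fibonacci numbers as n increases.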

This is exactly what is meant by overlapping subproblems, and it is the perfect situation in which to apply Dynamic Programming to find an optimal solution. Dynamic programming takes care of overlapping subproblems in one of two ways:

1. Top-down approach: In the top-down approach, we start from the bigger problem and work down towards the smaller ones - just like the recursive program above. We simply optimize that recursive program by storing results away in a global array, so that the next time we recurse into the same subproblem, we return the result directly from the array instead of recomputing it. This technique is known as memoization. Here is the program:

#define MAX 100

int arr[MAX];   /* memoization table; every entry must be set to -1 before the first call (see the driver below) */

int fib(int n)
{
    int temp;

    if (n < 0)
        return 0;

    /* base cases: F(0) = 0, F(1) = 1 */
    if (n < 2) {
        arr[n] = n;
        return n;
    }

    /* reuse the stored result if this subproblem has been solved before */
    if (arr[n] != -1)
        return arr[n];

    temp = fib(n - 1) + fib(n - 2);
    arr[n] = temp;   /* store the result for the next time we need it */
    return temp;
}
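
One detail the function above depends on: every entry of arr must be filled with -1 (meaning "not yet solved") before the first call. A minimal driver, reusing the MAX and arr definitions above, could look like this:

#include <stdio.h>
#include <string.h>

int main()
{
    memset(arr, -1, sizeof(arr));   /* mark every subproblem as unsolved */
    printf("Output: %d\n", fib(10));   /* prints 55, since F(0) = 0 and F(1) = 1 */
    return 0;
}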


2. Bottom-up approach: We know that F(n) depends on F(n-1) and F(n-2). So, if we solve the problem in a bottom-up manner, we solve F(n-1) and F(n-2) before we gear up to solve F(n). That means while trying to solve F(6), we have already solved F(5) and F(4), and if those results are stored somewhere, we can simply reuse them to compute F(6) instead of recursively calling multiple subroutines for subproblems that are already solved. For Fibonacci, only the two most recent values are ever needed, so two variables are enough (an array-based variant follows after the program). Here is the program:

#include <stdio.h>

int main()
{
    int curr, prev, prev_prev;
    int i = 0;
    int n = 10;

    if (n < 0) {
        printf("No solution\n");
        return 0;
    }

    /* base cases: F(0) = 0, F(1) = 1 */
    if (n < 2) {
        printf("Output: %d\n", n);
        return 0;
    }

    prev = 1;        /* F(1) */
    prev_prev = 0;   /* F(0) */
    for (i = 2; i <= n; i++) {
        curr = prev + prev_prev;   /* F(i) = F(i-1) + F(i-2) */
        prev_prev = prev;
        prev = curr;
    }

    printf("Output: %d\n", curr);

    return 0;
}
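
As promised, here is the same bottom-up idea with an explicit table - a minimal sketch that stores every intermediate result in an array, at the cost of O(n) extra space:

#include <stdio.h>

int main()
{
    int table[11];   /* table[i] holds F(i); sized for n = 10 */
    int n = 10;
    int i;

    table[0] = 0;   /* base cases: F(0) = 0, F(1) = 1 */
    table[1] = 1;

    /* each entry is computed exactly once, from the two entries before it */
    for (i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];

    printf("Output: %d\n", table[n]);

    return 0;
}

Once you notice that this DP only ever looks back two steps, the table can be traded for the two rolling variables used in the program above - a common final optimization.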


In both cases, we can see that using Dynamic Programming cuts down the time complexity - from exponential to linear, for Fibonacci - but that usually comes at the cost of extra space to store the intermediate subproblems' results in memory. Phew !! Even DP can't get away from the time-space trade-off !

Whenever you are writing DP related programs, bear in mind that they usually follow this pattern:

  1. You need to traverse over your dataset and try each and every possible subproblem.
  2. Use extra space to save intermediate results for subproblems.
  3. Reuse the already saved results instead of recomputing.

Also note that if the subproblems don't overlap, Dynamic Programming buys us nothing. Consider mergesort, for example - its subproblems aren't overlapping in nature, since each subproblem works on a fresh portion of the data. Because of this, plain recursion is the best way to solve it; applying Dynamic Programming would only add to the space complexity without reducing the time complexity significantly. Problems like this belong to another programming paradigm known as Divide and Conquer.

So, this is a very basic overview of the Dynamic Programming paradigm, which should at least get you acquainted with the idea upon which you can build your solutions. Most of the DP programs here on EduSagar follow a similar pattern - we discuss the problem and the thought process behind it, then a recursive solution to bring out the recurrence relation, and finally a DP solution that removes the overhead of solving the same subproblems again and again.

Be assured that it will be fun to solve DP problems, so Have Fun !!