# Lecture 11: Dynamic Programming

CLRS Chapter 15

## Outline of this section

- Introduction to dynamic programming, a method for solving optimization problems
- Dynamic programming vs. divide and conquer
- A few examples of dynamic programming:
  - the 0-1 Knapsack Problem
  - Chain Matrix Multiplication
  - All Pairs Shortest Path
  - the Floyd-Warshall Algorithm: improved All Pairs Shortest Path

## Recalling Divide-and-Conquer

1. Partition the problem into particular subproblems.
2. Solve the subproblems.
3. Combine the solutions to solve the original one.

Remark: In the examples we saw, the subproblems were usually independent, i.e., they did not call the same subsubproblems. If the subsubproblems are not independent, then D&C can end up re-solving many of the same problems many times. Thus, it does more work than necessary!

Dynamic programming (DP) solves every subsubproblem exactly once, and is therefore more efficient in those cases where the subsubproblems are not independent.

## The Intuition behind Dynamic Programming

Dynamic programming is a method for solving optimization problems.

The idea: compute the solutions to the subsubproblems once and store them in a table, so that they can be reused (repeatedly) later.

Remark: we trade space for time.
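As a minimal illustration of this space-for-time trade (a sketch, not part of the lecture; Fibonacci numbers serve only as a toy example with overlapping subproblems), compare a naive recursion against a table-based computation:

```python
# Naive recursion recomputes the same subproblems over and over,
# while the table version computes each subproblem exactly once.

def fib_naive(n, counter):
    counter[0] += 1          # count every call to expose the duplicated work
    if n <= 1:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_table(n):
    table = [0, 1] + [None] * max(0, n - 1)   # each entry filled exactly once
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

calls = [0]
assert fib_naive(20, calls) == fib_table(20) == 6765
print(calls[0])   # 21891 recursive calls vs. 21 table entries
```

The table costs O(n) extra space, but collapses an exponential number of calls down to a single linear pass.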

## 0-1 Knapsack Problem

Informal description: we have n items. Let v_i denote the value of the i-th item, and let w_i denote the weight of the i-th item. Suppose you are given a knapsack capable of holding total weight W.

Our goal is to use the knapsack to carry items such that the total value is maximum; we want to find a subset S of items to carry such that

- the total weight of S is at most W, and
- the total value of the items in S is as large as possible.

We cannot take parts of items; it is the whole item or nothing. (This is why it is called 0-1.)

How should we select the items?
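Before developing the DP solution, it may help to see the brute-force baseline: trying all 2^n subsets is correct but exponential in n. The sketch below (an illustration, not part of the original notes) uses the small instance that is worked out later in these notes:

```python
# Brute force over all subsets of items: correct, but Theta(2^n) subsets.
from itertools import combinations

values  = [10, 40, 30, 50]   # v_1 .. v_4
weights = [5, 4, 6, 3]       # w_1 .. w_4
W = 10                       # knapsack capacity

best = 0
n = len(values)
for r in range(n + 1):
    for subset in combinations(range(n), r):
        if sum(weights[i] for i in subset) <= W:
            best = max(best, sum(values[i] for i in subset))
print(best)   # 90, achieved by taking items 2 and 4 (1-indexed)
```

The DP method developed next replaces this exhaustive search with an n-by-W table.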

## General Schema of a DP Solution

Step 1: Structure: characterize the structure of an optimal solution by showing that it can be decomposed into optimal subproblems.

Step 2: Recursive definition: recursively define the value of an optimal solution by expressing it in terms of optimal solutions for smaller problems (usually using min and/or max).

Step 3: Bottom-up computation: compute the value of an optimal solution in a bottom-up fashion by using a table structure.

Step 4: Construction of an optimal solution: construct an optimal solution from the computed information.

## Remarks on the Dynamic Programming Approach

Steps 1-3 form the basis of a dynamic-programming solution to a problem.

Step 4 can be omitted if only the value of an optimal solution is required.

## Developing a DP Algorithm for Knapsack

Step 1: Decompose the problem into smaller problems.

We construct an (n+1) x (W+1) array V. For 0 <= i <= n and 0 <= w <= W, the entry V[i, w] will store the maximum (combined) value of any subset of items 1, ..., i of (combined) weight at most w. That is,

    V[i, w] = max { sum of v_k over k in S : S is a subset of {1, ..., i} with sum of w_k over k in S at most w }.

If we can compute all the entries of this array, then the entry V[n, W] will contain the solution to our problem.

Note: in what follows we will say that S is a solution for V[i, w] if S is a subset of {1, ..., i} whose total weight is at most w, and that S is an optimal solution for V[i, w] if, in addition, its total value equals V[i, w].

## Developing a DP Algorithm for Knapsack

Step 2: Recursively define the value of an optimal solution in terms of solutions to smaller problems.

Initial settings:

    V[0, w] = 0          for 0 <= w <= W   (no items)
    V[i, w] = -infinity  for w < 0         (illegal)

Recursive step: for 1 <= i <= n and 0 <= w <= W, use

    V[i, w] = max( V[i-1, w], v_i + V[i-1, w - w_i] ).

Intuitively, an optimal solution either chooses item i or does not choose item i.
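The recurrence can be rendered directly as a top-down memoized function; this is a sketch (the function name and 0-indexed lists are my own), shown here only to make the recursion concrete before the bottom-up version:

```python
# Top-down evaluation of the knapsack recurrence. Memoization ensures
# each (i, w) pair is computed exactly once, matching the DP table.
from functools import lru_cache

def knapsack_value(v, wt, W):
    n = len(v)

    @lru_cache(maxsize=None)
    def V(i, w):
        if w < 0:
            return float('-inf')   # illegal: weight budget exceeded
        if i == 0:
            return 0               # no items considered yet
        # either skip item i, or take it and pay its weight
        return max(V(i - 1, w), v[i - 1] + V(i - 1, w - wt[i - 1]))

    return V(n, W)

print(knapsack_value([10, 40, 30, 50], [5, 4, 6, 3], 10))   # 90
```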

## Developing a DP Algorithm for Knapsack

Step 3: Bottom-up computation of V[i, w] (using iteration, not recursion).

Bottom: V[0, w] = 0 for all 0 <= w <= W.

Bottom-up computation: compute the table row by row using

    V[i, w] = max( V[i-1, w], v_i + V[i-1, w - w_i] ).

The table has rows i = 0, 1, 2, ..., n and columns w = 0, 1, 2, ..., W; the bottom row i = 0 is filled first, and each subsequent row depends only on the row above it.

## Example of the Bottom-up Computation

Let W = 10 and

| i   | 1  | 2  | 3  | 4  |
|-----|----|----|----|----|
| v_i | 10 | 40 | 30 | 50 |
| w_i | 5  | 4  | 6  | 3  |

| V[i,w] | w=0 | 1 | 2 | 3  | 4  | 5  | 6  | 7  | 8  | 9  | 10 |
|--------|-----|---|---|----|----|----|----|----|----|----|----|
| i=0    | 0   | 0 | 0 | 0  | 0  | 0  | 0  | 0  | 0  | 0  | 0  |
| i=1    | 0   | 0 | 0 | 0  | 0  | 10 | 10 | 10 | 10 | 10 | 10 |
| i=2    | 0   | 0 | 0 | 0  | 40 | 40 | 40 | 40 | 40 | 50 | 50 |
| i=3    | 0   | 0 | 0 | 0  | 40 | 40 | 40 | 40 | 40 | 50 | 70 |
| i=4    | 0   | 0 | 0 | 50 | 50 | 50 | 50 | 90 | 90 | 90 | 90 |

The final output is V[4, 10] = 90.

Remarks: the method described does not tell us which subset gives the optimal solution. (In this example, it is S = {2, 4}.)

## The Dynamic Programming Algorithm

    KnapSack(v, w, n, W)
        for w = 0 to W
            V[0, w] = 0
        for i = 1 to n
            for w = 0 to W
                if (w_i <= w) and (v_i + V[i-1, w - w_i] > V[i-1, w])
                    V[i, w] = v_i + V[i-1, w - w_i]
                else
                    V[i, w] = V[i-1, w]
        return V[n, W]

Time complexity: clearly O(nW).
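A straightforward Python rendering of the table-filling algorithm, checked against the worked example (the function name and the choice to return the whole table are mine):

```python
def knapsack_table(v, wt, n, W):
    # V[i][w] = max value achievable using items 1..i with capacity w
    V = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 is all zeros
    for i in range(1, n + 1):
        for w in range(W + 1):
            # take item i only if it fits and improves on skipping it
            if wt[i - 1] <= w and v[i - 1] + V[i - 1][w - wt[i - 1]] > V[i - 1][w]:
                V[i][w] = v[i - 1] + V[i - 1][w - wt[i - 1]]
            else:
                V[i][w] = V[i - 1][w]
    return V

V = knapsack_table([10, 40, 30, 50], [5, 4, 6, 3], 4, 10)
print(V[4][10])   # 90, matching the table on the example slide
```

Each of the (n+1)(W+1) entries is filled in constant time, giving the stated O(nW) bound.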

## Constructing the Optimal Solution

The algorithm for computing V[i, w] described on the previous slide does not record which subset of items gives the optimal solution.

To compute the actual subset, we can add an auxiliary boolean array keep[i, w], which is 1 if we decide to take the i-th item in V[i, w] and 0 otherwise.

Question: how do we use all the keep[i, w] values to determine the subset of items having the maximum value?

## Constructing the Optimal Solution

Question: how do we use the keep[i, w] values to determine the subset of items having the maximum value?

If keep[n, W] is 1, then item n is in S, and we can repeat this argument for keep[n-1, W - w_n]. If keep[n, W] is 0, then item n is not in S, and we repeat the argument for keep[n-1, W].

Therefore, the following partial program will output the elements of S:

    K = W;
    for (i = n downto 1)
        if (keep[i, K] == 1)
            output i;
            K = K - w_i;
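Putting the keep array and the walk-back together gives the following sketch (function name mine; it returns the item list rather than printing it, which is a presentation choice, not part of the notes):

```python
def knapsack_with_items(v, wt, n, W):
    # Same table as before, plus keep[i][w] = 1 when item i is taken.
    V = [[0] * (W + 1) for _ in range(n + 1)]
    keep = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if wt[i - 1] <= w and v[i - 1] + V[i - 1][w - wt[i - 1]] > V[i - 1][w]:
                V[i][w] = v[i - 1] + V[i - 1][w - wt[i - 1]]
                keep[i][w] = 1
            else:
                V[i][w] = V[i - 1][w]
    # Walk back from keep[n][W], shrinking the capacity by each kept item.
    K, items = W, []
    for i in range(n, 0, -1):
        if keep[i][K] == 1:
            items.append(i)          # item numbers are 1-indexed
            K -= wt[i - 1]
    return V[n][W], sorted(items)

print(knapsack_with_items([10, 40, 30, 50], [5, 4, 6, 3], 4, 10))   # (90, [2, 4])
```

On the worked example this recovers the optimal subset {2, 4} with value 90.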

## The Complete Algorithm for the Knapsack Problem

    KnapSack(v, w, n, W)
        for w = 0 to W
            V[0, w] = 0
        for i = 1 to n
            for w = 0 to W
                if (w_i <= w) and (v_i + V[i-1, w - w_i] > V[i-1, w])
                    V[i, w] = v_i + V[i-1, w - w_i]
                    keep[i, w] = 1
                else
                    V[i, w] = V[i-1, w]
                    keep[i, w] = 0
        K = W
        for (i = n downto 1)
            if (keep[i, K] == 1)
                output i
                K = K - w_i
        return V[n, W]

## Dynamic Programming vs. Divide-and-Conquer

The dynamic programming algorithm we developed runs in O(nW) time.

We started by deriving a recurrence relation for solving the problem:

    V[i, w] = max( V[i-1, w], v_i + V[i-1, w - w_i] ).

Question: why can't we simply write a top-down divide-and-conquer algorithm based on this recurrence?

Answer: we could, but it could run in time Omega(2^n), since it might have to recompute the same values many times.

Dynamic programming saves us from having to recompute previously calculated subsolutions!
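The blow-up is easy to demonstrate by counting calls of the memo-free recursion (a sketch of mine, not from the notes; the all-ones instance is chosen so that every branch of the recursion is explored):

```python
# Count how many recursive calls the plain (non-memoized) recurrence makes,
# compared with the (n+1)(W+1) entries the bottom-up DP fills.
def count_calls(v, wt, n, W):
    calls = [0]
    def V(i, w):
        calls[0] += 1
        if w < 0:
            return float('-inf')
        if i == 0:
            return 0
        return max(V(i - 1, w), v[i - 1] + V(i - 1, w - wt[i - 1]))
    V(n, W)
    return calls[0]

# 16 items of weight 1, capacity 16: the recursion tree is a full binary tree.
print(count_calls([1] * 16, [1] * 16, 16, 16), "recursive calls")   # 131071
print(17 * 17, "table entries filled by the DP")                    # 289
```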

## Final Comment

Divide-and-Conquer works top-down.

Dynamic programming works bottom-up.
