CS 312 – Dynamic Programming 1

[Figure: task A splits into subtasks B and C, which share subtasks E, F, G, and H; each shared subtask's solution (B, E, F, G) is computed once and stored]

• Find the proper ordering for the subtasks
• Build a table of results as we go
• That way we do not have to recompute any intermediate results


[Figure: the same decomposition drawn first as a tree with repeated subtasks, then collapsed into a DAG in which each subtask E, F, G, H appears only once]


• F_n = F_{n-1} + F_{n-2} if n > 1;  F_1 = 1;  F_0 = 0
• 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …
• Exponential if we just implement the recursion directly
• DP approach: build a table with dependencies, store and use intermediate results – O(n)
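To make the table-building idea concrete, here is a minimal Python sketch (not from the slides; the function name is illustrative) that fills the Fibonacci table bottom-up in O(n):

```python
def fib_table(n):
    """Bottom-up DP: fill a table of Fibonacci values F[0..n] in O(n)."""
    if n == 0:
        return 0
    F = [0] * (n + 1)           # table indexed by subproblem size
    F[1] = 1                    # base cases: F[0] = 0, F[1] = 1
    for i in range(2, n + 1):   # each entry depends only on the two previous entries
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

print([fib_table(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```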


• Sequence: 5 2 8 6 3 6 9 7
  – An increasing subsequence: 2 3 6 7
• Consider the sequence as a graph of n nodes
• What algorithm would you use to find the longest increasing subsequence?


• Sequence: 5 2 8 6 3 6 9 7
  – An increasing subsequence: 2 3 6 7
• Consider the sequence as a graph of n nodes
• What algorithm would you use to find the longest increasing subsequence?
• Could try all possible paths
  – 2^n possible paths (why?)
• There are fewer increasing paths
  – Complexity is n·2^n
  – Very expensive because lots of work is done multiple times: sub-paths are repeatedly checked


• Complexity: O(n · average_indegree), which is in the worst case O(n^2)
  – Memory complexity? – must store intermediate results to avoid recomputes: O(n)
• Assumes a sorted DAG, which would also be O(n^2) to create
• Note that for our longest increasing subsequence problem we get the length, but not the path
• Markovian assumption – not dependent on history, just current/recent states
• Can fix this (à la Dijkstra) by also saving prev(j) each time we find the max L(j), so that we can reconstruct the longest path (see the sketch below)
• Why not use divide and conquer style recursion?
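A minimal Python sketch of this longest-increasing-subsequence DP, including the prev(j) back pointers just mentioned; the function name and test sequence are illustrative, not from the slides:

```python
def longest_increasing_subsequence(a):
    """L[j] = length of the longest increasing subsequence ending at position j.
    prev[j] records the predecessor so the actual subsequence can be rebuilt."""
    n = len(a)
    L = [1] * n
    prev = [None] * n
    for j in range(n):              # linearized order: finish L(1) before L(2), etc.
        for i in range(j):          # every "edge" (i, j) with a[i] < a[j]
            if a[i] < a[j] and L[i] + 1 > L[j]:
                L[j] = L[i] + 1
                prev[j] = i
    # walk the back pointers from the best end point to recover the path
    j = max(range(n), key=lambda k: L[k])
    path = []
    while j is not None:
        path.append(a[j])
        j = prev[j]
    return list(reversed(path))

print(longest_increasing_subsequence([5, 2, 8, 6, 3, 6, 9, 7]))  # [2, 3, 6, 9]
```

The two nested loops make the cost O(n^2) in the worst case, matching the O(n · average_indegree) bound above.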


• Why not use divide and conquer style recursion?
• The recursive version is exponential (lots of redundant work)
• Versus an efficient divide and conquer that cuts the problem size by a significant amount at each call and minimizes redundant work
• This case just goes from a problem of size n to size n-1 at each call


• Anytime we have a collection of subproblems such that:
• There is an ordering on the subproblems, and a relation that shows how to solve a subproblem given the answers to "smaller" subproblems, that is, subproblems that appear earlier in the ordering
• The problem becomes an implicit DAG with each subproblem represented by a node, with edges giving dependencies
  – Just one order to solve it? – Any linearization
• Does the longest increasing subsequence algorithm fit this?
  – Ordering is in the for loop – an appropriate linearization; finish L(1) before starting L(2), etc.
  – Relation is L(j) = 1 + max{L(i) : (i,j) ∈ E}


• DP is optimal when the optimality property is met
  – First make sure the solution is correct
• The optimality property: an optimal solution to a problem is built from optimal solutions to sub-problems
• Question to consider: can we divide the problem into sub-problems such that the optimal solutions to each of the sub-problems combine into an optimal solution for the entire problem?


• The optimality property: an optimal solution to a problem is built from optimal solutions to sub-problems
• Consider the longest increasing subsequence algorithm
• Is L(1) optimal?
• As you go through the ordering, does the relation always lead to an optimal intermediate solution?
• Note that the optimal path from j to the end is independent of how we got to j (Markovian)
• Thus choosing the longest incoming path must be optimal
• Not always the case for arbitrary problems


• How many ways are there to choose k items from a set of size n (n choose k)?
• Divide and conquer?
• Is there an appropriate ordering and relation for DP?


C(5,3)"C(4,2)"C(4,3)"C(3,1)" C(3,2)" C(3,2)" C(3,3)"C(2,0)" C(2,1)" C(2,1)"C(2,2)" C(2,1)"C(2,2)"C(1,0)"C(1,1)" C(1,0)"C(1,1)" C(1,0)"C(1,1)"1"1"1" 1" 1" 1" 1" 1"1"CS 312 – <strong>Dynamic</strong> <strong>Programming</strong>1"17


C(5,3)"C(4,2)"C(4,3)"C(3,1)" C(3,2)" C(3,3)"C(2,0)"C(2,1)"C(1,0)"C(1,1)"C(2,2)"CS 312 – <strong>Dynamic</strong> <strong>Programming</strong> 18


• Figure out the variables and use them to index the table
• Figure out the base case(s) and put it/them in the table first
• Show the DAG dependencies and fill out the table until we get to the desired answer
• Let's do it for C(5,3)


n \ k:   0    1    2    3
0        1    0    0    0
1        1    1    0    0
2        1         1    0
3        1              1
4        1
5        1


n \ k:   0    1    2    3
0        1    0    0    0
1        1    1    0    0
2        1    2    1    0
3        1              1
4        1
5        1


n \ k:   0    1    2    3
0        1    0    0    0
1        1    1    0    0
2        1    2    1    0
3        1    3    3    1
4        1    4    6    4
5        1    5   10   10

• What is the complexity?


n \ k:   0    1    2    3
0        1    0    0    0
1        1    1    0    0
2        1    2    1    0
3        1    3    3    1
4        1    4    6    4
5        1    5   10   10

• What is the complexity? Number of cells (table size) × the complexity to compute each cell
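A small Python sketch (illustrative, not from the slides) that builds exactly this table row by row using Pascal's rule C(n,k) = C(n-1,k-1) + C(n-1,k):

```python
def binomial_table(n_max, k_max):
    """Fill an (n_max+1) x (k_max+1) table of binomial coefficients bottom-up."""
    C = [[0] * (k_max + 1) for _ in range(n_max + 1)]
    for n in range(n_max + 1):
        for k in range(k_max + 1):
            if k == 0:
                C[n][k] = 1                         # base case: C(n,0) = 1
            elif k > n:
                C[n][k] = 0                         # base case: C(n,k) = 0 for k > n
            else:
                C[n][k] = C[n-1][k-1] + C[n-1][k]   # Pascal's rule
    return C

table = binomial_table(5, 3)
print(table[5][3])  # 10, the bottom-right entry of the slide's table
```

Each cell takes constant time, so the total cost is the table size, O(n·k).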


n \ k:   0    1    2    3
0        1    0    0    0
1        1    1    0    0
2        1    2    1    0
3        1    3    3    1
4        1    4    6    4
5        1    5   10   10

• Notice a familiar pattern?


Blaise Pascal (1623–1662)
• Second person to invent the calculator
• Religious philosopher
• Mathematician and physicist
• Pascal's Triangle is a geometric arrangement of the binomial coefficients in a triangle
• Pascal's Triangle holds many other mathematical patterns


• A natural measure of similarity between two strings is the extent to which they can be aligned, or matched up

      TACO     T-ACO       TACO     TA-CO
      TEXCO    TEXCO   =   TXCO     TEXCO

• "-" indicates a gap (insertion)
  – Note that an insert from the point of view of one string is the same as a delete from the point of view of the other
  – We'll just say insert from now on to keep it simple (rightmost above)
• The edit distance between two strings is the minimum number of edits to convert one string into the other: insert (delete) or substitute
  – What is the edit distance of the above example?
  – What is our algorithm to calculate edit distance?
• The number of possible alignments grows exponentially with the string length n, so we try DP to solve it efficiently


• Two things to consider:
  1. Is there an ordering on the subproblems, and a relation that shows how to solve a subproblem given the answers to "smaller" subproblems, that is, subproblems that appear earlier in the ordering?
  2. Is it the case that an optimal solution to a problem is built from optimal solutions to sub-problems?


• Assume two strings x and y of length m and n respectively
• Consider the edit subproblem E(i,j) = E(x[1…i], y[1…j])
• For x = "taco" and y = "texco", E(2,3) = E("ta","tex")
• What is E(1,1) for this problem? And in general?
  – Would our approach be optimal for E(1,1)?
• The final solution would then be E(m,n)
• This notation gives a natural way to start from small cases and build up to larger ones
• Now, we need a relation to solve E(i,j) in terms of smaller problems


• Start building a table
  – What are the base cases?
  – What is the relationship of the next open cell to previous cells?
  – Back pointer: note that it never changes – Markovian property
• E(i,j) = ?


• Start building a table
  – What are the base cases?
  – What is the relationship of the next open cell to previous cells?
  – Back pointer: note that it never changes – Markovian property
• E(i,j) = min[diff(i,j) + E(i-1,j-1), 1 + E(i-1,j), 1 + E(i,j-1)]
• Intuition: the current cell is based on the preceding adjacent cells
  – Diagonal is a match or substitution
  – Coming from the top cell represents an insert into the top word (i.e. a delete from the left word)
  – Coming from the left cell represents an insert into the left word (i.e. a delete from the top word)


• If we consider an empty cell E(i,j), there are only three possible alignments (e.g. E(2,2) = E("ta","te"))
  – x[i] aligned with "-": cost = 1 + E(i-1,j) – top cell, insert into the top word
      E("ta","te") leads to an alignment ending with the column (a, -), with cost 1 + E("t","te")
  – y[j] aligned with "-": cost = 1 + E(i,j-1) – left cell, insert into the left word
      E("ta","te") leads to an alignment ending with the column (-, e), with cost 1 + E("ta","t")
  – x[i] aligned with y[j]: cost = diff(i,j) + E(i-1,j-1)
      E("ta","te") leads to an alignment ending with the column (a, e), with cost 1 + E("t","t")
• Thus E(i,j) = min[1 + E(i-1,j), 1 + E(i,j-1), diff(i,j) + E(i-1,j-1)]


• E(i,j) = min[1 + E(i-1,j), 1 + E(i,j-1), diff(i,j) + E(i-1,j-1)]
• Note that we could use different penalties for insert and substitution based on whatever goals we have
• Answers fill in a 2-d table
• Any computation order is all right as long as E(i-1,j), E(i,j-1), and E(i-1,j-1) are computed before E(i,j)
• What are the base cases? (for any integer k ≥ 0):
  – E(0,k) = k   example: E("", "rib") = 3 (3 inserts)
  – E(k,0) = k   example: E("ri", "") = 2 (2 inserts)
• If we want to recover the edit sequence found, we just keep a back pointer to the previous minimum as we grow the table


for i = 0, 1, 2, …, m:  E(i,0) = i     // base case: length of the prefix of x ("Exponential" in the example below)
for j = 0, 1, 2, …, n:  E(0,j) = j     // base case: length of the prefix of y ("Polynomial" in the example below)
for i = 1, 2, …, m:
    for j = 1, 2, …, n:
        E(i,j) = min[1 + E(i-1,j), 1 + E(i,j-1), diff(i,j) + E(i-1,j-1)]
return E(m,n)

• What is the complexity?
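A runnable Python version of the pseudocode above; the function name and test strings are illustrative:

```python
def edit_distance(x, y):
    """Edit distance via the DP table E[i][j] described on the previous slides."""
    m, n = len(x), len(y)
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        E[i][0] = i                      # base case: i deletes/inserts
    for j in range(n + 1):
        E[0][j] = j                      # base case: j inserts
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diff = 0 if x[i-1] == y[j-1] else 1   # diff(i,j): 0 on a match, 1 on a substitution
            E[i][j] = min(1 + E[i-1][j],          # insert into the top word
                          1 + E[i][j-1],          # insert into the left word
                          diff + E[i-1][j-1])     # match or substitute
    return E[m][n]

print(edit_distance("EXPONENTIAL", "POLYNOMIAL"))  # 6, as on the next slide
print(edit_distance("taco", "texco"))              # 2 (substitute a->e, insert x)
```

Each of the m·n cells takes constant time, so the complexity is O(mn).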


• This is a weighted DAG with weights of 0 and 1. We can just find the least-cost path in the DAG to retrieve optimal edit sequence(s)
  – Down arrows are insertions into "Polynomial" with cost 1
  – Right arrows are insertions into "Exponential" with cost 1
  – Diagonal arrows are either matches (dashed) with cost 0 or substitutions with cost 1
• Edit distance of 6:

      E X P O N E N - T I A L
      - - P O L Y N O M I A L

• Can set costs arbitrarily based on goals


• The basic table is m × n, which is O(n^2) assuming m and n are similar
• What order options can we use to calculate cells?
• But do we really need to use O(n^2) memory?
• How can we implement edit distance using only O(n) memory?
• What about prev pointers and extracting the actual alignment?
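One common answer to the O(n)-memory question: each row of the table depends only on the row above it, so two rows suffice. A sketch under that assumption (names are illustrative):

```python
def edit_distance_two_rows(x, y):
    """Same recurrence as before, but keeping only the previous and current rows: O(n) memory."""
    m, n = len(x), len(y)
    prev = list(range(n + 1))           # row i-1; starts as the base-case row E(0,j) = j
    for i in range(1, m + 1):
        curr = [i] + [0] * n            # E(i,0) = i
        for j in range(1, n + 1):
            diff = 0 if x[i-1] == y[j-1] else 1
            curr[j] = min(1 + prev[j],          # cell above
                          1 + curr[j-1],        # cell to the left
                          diff + prev[j-1])     # diagonal cell
        prev = curr
    return prev[n]

print(edit_distance_two_rows("EXPONENTIAL", "POLYNOMIAL"))  # 6 again
```

Note that this recovers only the distance; extracting the actual alignment needs either the full table of prev pointers or a more involved divide-and-conquer refinement.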


X = ACGCTC
Y = ACTTG


• Gene sequence alignment is a type of edit distance

      A C G C T - C
      A - - C T T G

  – Uses the Needleman-Wunsch algorithm
  – This is just edit distance with a different cost weighting
  – You will use Needleman-Wunsch in your project
• Cost (typical Needleman-Wunsch costs are shown):
  – Match: c_match = -3 (a reward)
  – Insertion into x (= deletion from y): c_indel = 5
  – Insertion into y (= deletion from x): c_indel = 5
  – Substitution of a character from x into y (or from y into x): c_sub = 1
• You will use the above costs in your HW and project
  – Does that change the base cases?
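A minimal sketch of how the recurrence changes with the Needleman-Wunsch style costs listed above (match -3, indel 5, substitution 1); the function name is illustrative and this is not the project's reference implementation:

```python
MATCH, INDEL, SUB = -3, 5, 1   # costs from the slide

def nw_score(x, y):
    """Needleman-Wunsch score: the edit-distance DP with weighted costs."""
    m, n = len(x), len(y)
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        E[i][0] = i * INDEL            # base cases now grow by the indel cost
    for j in range(1, n + 1):
        E[0][j] = j * INDEL
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = MATCH if x[i-1] == y[j-1] else SUB
            E[i][j] = min(E[i-1][j] + INDEL,
                          E[i][j-1] + INDEL,
                          E[i-1][j-1] + diag)
    return E[m][n]

print(nw_score("ACGCTC", "ACTTG"))  # scores the example pair from the previous slide
```

This also suggests the answer to the base-case question above: the first row and column now grow by c_indel per position rather than by 1.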


• You will implement two versions (using Needleman-Wunsch)
  – One which gives the match score in O(n^2) time and O(n) space and which does not extract the actual alignment
  – The other will extract the alignment and will be O(n^2) time and space
• You will align 10 supplied real gene sequences with each other (100/2 = 50 alignments)
  – atattaggtttttacctacc
  – caggaaaagccaaccaact
  – You will only align the first 5000 bases in each taxon
  – Some values are given to you for debugging purposes; your other results will be used to test your code's correctness


• Given items x_1, x_2, …, x_n
• each with weight w_i and value v_i
• find the set of items which maximizes the total value ∑ x_i v_i
• under the constraint that the total weight of the items ∑ x_i w_i does not exceed a given W
• Many resource problems follow this pattern
  – Task scheduling with a CPU
  – Allocating files to memory/disk
  – Bandwidth on a network connection, etc.
• There are two variations depending on whether an item can be chosen more than once (repetition)


Item  Weight  Value
1     6       $30
2     3       $14
3     4       $16
4     2       $9

W = 10

• Will greedy always work?
• Exponential number of item combinations
  – 2^n for knapsack without repetition – why?
  – Many more for knapsack with repetition
• How about DP?
  – Always ask: what are the subproblems?


• Two types of subproblems possible
  – consider knapsacks with less capacity
  – consider fewer items
• Define K(w) = maximum value achievable with a knapsack of capacity w
  – Final answer is K(W)
• Subproblem relation – if K(w) includes item i, then removing i leaves the optimal solution K(w - w_i)
  – Can only contain i if w_i ≤ w
• Thus K(w) = max_{i: w_i ≤ w} [K(w - w_i) + v_i]
• Note that it is not dependent on an n-1 type recurrence (like edit distance)


K(0) = 0
for w = 1 to W:
    K(w) = max_{i: w_i ≤ w} [K(w - w_i) + v_i]
return K(W)

Item  Weight  Value
1     6       $30
2     3       $14
3     4       $16
4     2       $9

W = 10

• Build the table – table size? – do the example
• Complexity is ?


K(0) = 0
for w = 1 to W:
    K(w) = max_{i: w_i ≤ w} [K(w - w_i) + v_i]
return K(W)

Item  Weight  Value
1     6       $30
2     3       $14
3     4       $16
4     2       $9

W = 10

• Build the table – table size?
• Complexity is O(nW)
• Insight: W can get very large; n is typically proportional to log_b(W), which would make the order in n be O(n·b^n), which is exponential in n
• More on complexity issues in Ch. 8
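A direct Python rendering of the K(w) recurrence above (knapsack with repetition); the item data matches the slide's table:

```python
def knapsack_with_repetition(weights, values, W):
    """K[w] = best value achievable with capacity w; items may be reused."""
    K = [0] * (W + 1)
    for w in range(1, W + 1):
        best = 0
        for wi, vi in zip(weights, values):
            if wi <= w:                          # item i only qualifies if w_i <= w
                best = max(best, K[w - wi] + vi)
        K[w] = best
    return K[W]

# Items from the slide: weights 6, 3, 4, 2 with values $30, $14, $16, $9 and W = 10
print(knapsack_with_repetition([6, 3, 4, 2], [30, 14, 16, 9], 10))  # 48 (item 1 plus two copies of item 4)
```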


// Plain recursive version
function K(w):
    if w = 0: return 0
    K(w) = max_{i: w_i ≤ w} [K(w - w_i) + v_i]
    return K(w)

// Iterative DP version
K(0) = 0
for w = 1 to W:
    K(w) = max_{i: w_i ≤ w} [K(w - w_i) + v_i]
return K(W)

// Memoized recursive version
function K(w):
    if w = 0: return 0
    if K(w) is in hashtable: return K(w)
    K(w) = max_{i: w_i ≤ w} [K(w - w_i) + v_i]
    insert K(w) into hashtable
    return K(w)

• The recursive version could do lots of redundant computations, plus the overhead of recursion
• However, what if we insert all intermediate computations into a hash table – memoize
• We usually still solve all the same subproblems with recursive (memoized) DP or normal DP (e.g. edit distance)
• For knapsack we might avoid unnecessary computations in the DP table because w is decremented by w_i (more than 1) each time
• Still O(nW), but with better constants than table DP in some cases


• Insight: when can DP gain efficiency by recursively starting from the final goal and only solving those subproblems required for the specific goal?
  – If we knew exactly which subproblems were needed for the specific goal we could have done a more direct (best-first) approach
  – With DP, we do not know which of the subproblems are needed, so we compute all that might be needed
• However, in some cases the final solution will never require that certain previous table cells be computed
• For example, if there are 3 items in the knapsack with weights 50, 80, and 100, we could do recursive DP and avoid computing K(75), K(76), K(77), etc., which could never be necessary but would have been calculated with the standard DP algorithm
• Would this approach help us for edit distance?


• Our relation now has to track which items are available
• K(w,j) = maximum value achievable given capacity w and only considering items 1, …, j
  – Means only items 1, …, j are available, but we actually just use some subset
• Final answer is K(W,n)
• Express the relation as: either the j-th item is in the solution or not
• K(w,j) = max[K(w - w_j, j-1) + v_j, K(w, j-1)]
  – If w_j > w then ignore the first case
• Base cases?


• Our relation now has to track which items are available
• K(w,j) = maximum value achievable given capacity w and only considering items 1, …, j
  – Means only items 1, …, j are available, but we actually just use some subset
• Final answer is K(W,n)
• Express the relation as: either the j-th item is in the solution or not
• K(w,j) = max[K(w - w_j, j-1) + v_j, K(w, j-1)]
  – If w_j > w then ignore the first case
• Base cases?
• Running time is still O(Wn), and the table is W+1 by n+1
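A sketch of the without-repetition version using the K(w,j) relation above; the items are again taken from the slide's table:

```python
def knapsack_no_repetition(weights, values, W):
    """K[w][j] = best value with capacity w using only items 1..j, each at most once."""
    n = len(weights)
    K = [[0] * (n + 1) for _ in range(W + 1)]    # base cases: K[w][0] = 0 and K[0][j] = 0
    for j in range(1, n + 1):
        wj, vj = weights[j - 1], values[j - 1]
        for w in range(1, W + 1):
            K[w][j] = K[w][j - 1]                              # case: item j not in the solution
            if wj <= w:
                K[w][j] = max(K[w][j], K[w - wj][j - 1] + vj)  # case: item j in the solution
    return K[W][n]

print(knapsack_no_repetition([6, 3, 4, 2], [30, 14, 16, 9], 10))  # 46 (items 1 and 3)
```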


Item  Weight  Value
1     6       $30
2     3       $14
3     4       $16
4     2       $9

W = 10


        w=0   1    2    3    4    5    6    7    8    9    10
j=0      0    0    0    0    0    0    0    0    0    0    0
j=1      0
j=2      0
j=3      0
j=4      0

Item  Weight  Value
1     6       $30
2     3       $14
3     4       $16
4     2       $9

W = 10


        w=0   1    2    3    4    5    6    7    8    9    10
j=0      0    0    0    0    0    0    0    0    0    0    0
j=1      0    0    0    0    0    0   30
j=2      0
j=3      0
j=4      0

Item  Weight  Value
1     6       $30
2     3       $14
3     4       $16
4     2       $9

W = 10


        w=0   1    2    3    4    5    6    7    8    9    10
j=0      0    0    0    0    0    0    0    0    0    0    0
j=1      0    0    0    0    0    0   30   30   30   30   30
j=2      0
j=3      0
j=4      0

Item  Weight  Value
1     6       $30
2     3       $14
3     4       $16
4     2       $9

W = 10


        w=0   1    2    3    4    5    6    7    8    9    10
j=0      0    0    0    0    0    0    0    0    0    0    0
j=1      0    0    0    0    0    0   30   30   30   30   30
j=2      0    0    0   14   14   14   30   30   30   44   44
j=3      0
j=4      0

Item  Weight  Value
1     6       $30
2     3       $14
3     4       $16
4     2       $9

W = 10


• Chains of matrix multiplies are common in numerical algorithms
• Matrix multiplication is not commutative but is associative
  – A · (B · C) = (A · B) · C
  – Parenthesization can make a big difference in speed
  – Multiplying an m × n matrix with an n × p matrix takes O(mnp) time and results in a matrix of size m × p


• Want to multiply A_1 × A_2 × ··· × A_n
  – with dimensions m_0 × m_1, m_1 × m_2, ···, m_{n-1} × m_n
• A linear ordering for parenthesizations is not natural, but we could represent them as a binary tree
  – Possible orderings are exponential
  – Consider the cost for each subtree
  – C(i,j) = minimal cost of multiplying A_i × A_{i+1} × ··· × A_j, for 1 ≤ i ≤ j ≤ n
• C(i,j) represents the cost of j-i matrix multiplies
  – The total problem is C(1,n)


• Each subtree breaks the problem into two more subtrees such that the left subtree has cost C(i,k) and the right subtree has cost C(k+1,j) for some k between i and j (e.g. what is C(3,8), given 10 matrices?)
• The cost of the original subtree is the cost of its two children subtrees plus the cost of combining those subtrees
• C(i,j) = min_{i ≤ k < j} [C(i,k) + C(k+1,j) + m_{i-1}·m_k·m_j]


• Each subtree breaks the problem into two more subtrees such that the left subtree has cost C(i,k) and the right subtree has cost C(k+1,j) for some k between i and j
• The cost of the original subtree is the cost of its two children subtrees plus the cost of combining those subtrees
• C(i,j) = min_{i ≤ k < j} [C(i,k) + C(k+1,j) + m_{i-1}·m_k·m_j], with C(i,i) = 0; C(i,j) for i > j is undefined
• The final solution is C(1,n)
• The table is n^2 and each entry requires O(k) = O(n) work, for a total of O(n^3)

m_0 = 50, m_1 = 20, m_2 = 1, m_3 = 10, m_4 = 100
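A Python sketch of the C(i,j) recurrence, using the dimensions from the slide (m_0..m_4 = 50, 20, 1, 10, 100); names are illustrative:

```python
def matrix_chain_cost(m):
    """m[0..n] are the matrix dimensions; C[i][j] = min cost of multiplying A_i ... A_j (1-indexed)."""
    n = len(m) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]    # C[i][i] = 0 by default
    for s in range(1, n):                        # s = j - i, the subchain "width"
        for i in range(1, n - s + 1):
            j = i + s
            C[i][j] = min(C[i][k] + C[k + 1][j] + m[i - 1] * m[k] * m[j]
                          for k in range(i, j))
    return C[1][n]

print(matrix_chain_cost([50, 20, 1, 10, 100]))  # 7000, matching C(1,4) in the worked table below
```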


m_0 = 50, m_1 = 20, m_2 = 1, m_3 = 10, m_4 = 100

s   i   j   k       n-s   min terms (one for each k)                                   C(i,j)
1   1   2   1       3     C(1,1) + C(2,2) + 50·20·1   = 0 + 0 + 1000        = 1000      1000
1   2   3   2             C(2,2) + C(3,3) + 20·1·10   = 0 + 0 + 200         = 200        200
1   3   4   3             C(3,3) + C(4,4) + 1·10·100  = 0 + 0 + 1000        = 1000      1000
2   1   3   1,2     2     C(1,1) + C(2,3) + 50·20·10  = 0 + 200 + 10,000    = 10,200
                          C(1,2) + C(3,3) + 50·1·10   = 1000 + 0 + 500      = 1500      1500
2   2   4   2,3           C(2,2) + C(3,4) + 20·1·100  = 0 + 1000 + 2000     = 3000
                          C(2,3) + C(4,4) + 20·10·100 = 200 + 0 + 20,000    = 20,200    3000
3   1   4   1,2,3   1     C(1,1) + C(2,4) + 50·20·100 = 0 + 3000 + 100,000  = 103,000
                          C(1,2) + C(3,4) + 50·1·100  = 1000 + 1000 + 5000  = 7000
                          C(1,3) + C(4,4) + 50·10·100 = 1500 + 0 + 50,000   = 51,500    7000


• We used BFS, Dijkstra's, and Bellman-Ford to solve shortest path problems for different graphs
  – Dijkstra and Bellman-Ford can actually be cast as DP algorithms
• DP is also good for these types of problems and often better
• All Pairs Shortest Paths
  – Assume a graph G with weighted edges (which could be negative)
  – We want to calculate the shortest path between every pair of nodes
  – We could use Bellman-Ford (which has complexity O(|V|·|E|)) one time for every node
  – Complexity would be |V| · (|V|·|E|) = O(|V|^2 · |E|)
• Floyd's algorithm using DP can do it in O(|V|^3)
  – You'll do this for a homework


• Arbitrarily number the nodes from 1 to n
• Define dist(i,j,k) as the shortest path from (between, if not directed) i to j which can pass through nodes {1,2,…,k}
• First assume we can only have paths of length one (i.e. with no intermediate nodes on the path) and store the best paths dist(i,j,0), which is just the edge length between i and j
• What is the relation dist(i,j,k) = ?


• Arbitrarily number the nodes from 1 to n
• Define dist(i,j,k) as the shortest path from (between, if not directed) i to j which can pass through nodes {1,2,…,k}
• First assume we can only have paths of length one (i.e. with no intermediate nodes on the path) and store the best paths dist(i,j,0), which is just the edge length between i and j
• dist(i,j,k) = min(dist(i,k,k-1) + dist(k,j,k-1), dist(i,j,k-1))
• Can think of the memory as one n×n (i,j) matrix for each value of k
• Base cases?
• What is the algorithm?
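A minimal sketch of the algorithm this recurrence implies (commonly known as Floyd-Warshall), keeping a single n × n matrix rather than one per k; the adjacency-matrix input format is an assumption:

```python
import math

def all_pairs_shortest_paths(dist):
    """dist[i][j] = edge length from i to j (math.inf if no edge, 0 on the diagonal).
    Updates dist in place so dist[i][j] becomes the shortest path length from i to j."""
    n = len(dist)
    for k in range(n):                  # allow node k as an additional intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = math.inf
d = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(all_pairs_shortest_paths(d))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```

The sketch reuses one matrix for every k, which already hints at the space question on the next slide.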


• Time and space complexity?
• Does the space need to be n^3?


• Assume n cities (nodes) and an intercity distance matrix D = {d_ij}
• We want to find a path which visits each city once and has the minimum total length
• TSP is in NP: no known polynomial solution
• Why not start with small optimal TSP paths and then just add the next city, similar to previous DP approaches?
  – Can't just add a new city to the end of a circuit
  – Would need to check all combinations of which city to have prior to the new city, and which city to have following the new city
  – This could cause reshuffling of the other cities


• Could try all possible paths of G and take the minimum
  – There are n! possible paths, and (n-1)! unique paths if we always start at city 1
• The DP approach is much faster but still exponential (more later)
• For S ⊆ V with 1 ∈ S, and j ∈ S, let C(S,j) be the minimal TSP path over S starting at 1 and ending at j
• For |S| > 1, C(S,1) = ∞ since the path cannot start and end at 1
• Relation: consider each optimal TSP path ending in a city i, and then find the total if we add an edge from i to the new last city j
• C(S,j) = min_{i ∈ S, i ≠ j} C(S - {j}, i) + d_ij
• What is the table size?


• The table is n × 2^n
• The algorithm has n × 2^n subproblems, each taking time n
• Time complexity is thus O(n^2 · 2^n)
• Trying each possible path has time complexity O(n!)
  – For 100 cities DP is 100^2 × 2^100 ≈ 1.3×10^34
  – Trying each path is 100! ≈ 9.3×10^157
  – Thus DP is about 10^124 times faster for 100 cities
• We will consider approximation algorithms in Ch. 9
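A Python sketch of the C(S,j) recurrence (often called the Held-Karp approach), representing S with frozensets; the 4-city distance matrix mirrors the worked example on the following slides:

```python
from itertools import combinations

def tsp_dp(d):
    """d[i][j] = distance between cities i and j (0-indexed); returns the optimal tour length.
    C[(S, j)] = shortest path that starts at city 0, visits exactly the cities in S, and ends at j."""
    n = len(d)
    C = {(frozenset([0]), 0): 0}
    for size in range(2, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = frozenset(subset) | {0}
            for j in subset:
                C[(S, j)] = min(C[(S - {j}, i)] + d[i][j]
                                for i in S
                                if i != j and (S - {j}, i) in C)
    full = frozenset(range(n))
    return min(C[(full, j)] + d[j][0] for j in range(1, n))   # close the tour back at city 1

# Symmetric 4-city example from the next slides (cities 1..4 are indices 0..3 here)
d = [[0, 3, 5, 9],
     [3, 0, 1, 2],
     [5, 1, 0, 6],
     [9, 2, 6, 0]]
print(tsp_dp(d))  # 16: tour 1 -> 2 -> 4 -> 3 -> 1
```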


S            j=1   j=2   j=3   j=4
{1}           0
{1,2}         ∞     3
{1,3}         ∞           5
{1,4}         ∞                 9
{1,2,3}       ∞
{1,2,4}       ∞
{1,3,4}       ∞
{1,2,3,4}     ∞

Distance matrix D:
      1   2   3   4
1     0   3   5   9
2         0   1   2
3             0   6
4                 0

C({1,2}, 2) = min{ C({1},1) + d_12 } = min{ 0 + 3 } = 3


S            j=1   j=2       j=3       j=4
{1}           0
{1,2}         ∞     3
{1,3}         ∞               5
{1,4}         ∞                          9
{1,2,3}       ∞    5+1=6     3+1=4
{1,2,4}       ∞    9+2=11               3+2=5
{1,3,4}       ∞              9+6=15     5+6=11
{1,2,3,4}     ∞

Distance matrix D:
      1   2   3   4
1     0   3   5   9
2         0   1   2
3             0   6
4                 0

C({1,2,3}, 2) = min{ C({1,3},3) + d_32 } = min{ 5 + 1 } = 6


S            j=1   j=2       j=3       j=4
{1}           0
{1,2}         ∞     3
{1,3}         ∞               5
{1,4}         ∞                          9
{1,2,3}       ∞    5+1=6     3+1=4
{1,2,4}       ∞    9+2=11               3+2=5
{1,3,4}       ∞              9+6=15     5+6=11
{1,2,3,4}     ∞    13        11         8

Distance matrix D:
      1   2   3   4
1     0   3   5   9
2         0   1   2
3             0   6
4                 0

C({1,2,3,4}, 2) = min{ C({1,3,4},3) + d_32, C({1,3,4},4) + d_42 } = min{ 15+1, 11+2 } = 13


S            j=1   j=2       j=3       j=4
{1}           0
{1,2}         ∞     3
{1,3}         ∞               5
{1,4}         ∞                          9
{1,2,3}       ∞    5+1=6     3+1=4
{1,2,4}       ∞    9+2=11               3+2=5
{1,3,4}       ∞              9+6=15     5+6=11
{1,2,3,4}     ∞    13        11         8

Distance matrix D:
      1   2   3   4
1     0   3   5   9
2         0   1   2
3             0   6
4                 0

return min{ C({1,2,3,4},2) + d_21, C({1,2,3,4},3) + d_31, C({1,2,3,4},4) + d_41 }
     = min{ 13+3, 11+5, 8+9 } = 16


• Many applications can gain efficiency by use of Dynamic Programming
• Works when there are overlapping subproblems
  – The recursive approach would lead to much duplicate work
• And when subproblems (given by a recursive definition) are only slightly smaller (e.g. by a constant amount) than the original problem
  – If smaller by a multiplicative factor, use divide and conquer


• Example applications
  – Fibonacci
  – String algorithms (e.g. edit distance, gene sequencing, longest common substring, etc.)
  – Dijkstra's algorithm
  – Bellman-Ford
  – Dynamic Time Warping
  – Viterbi algorithm – critical for HMMs, speech recognition, etc.
  – Recursive Least Squares
  – Knapsack-style problems, coins, TSP, Towers of Hanoi, etc.
• Can you think of some?
