think-cell technical report TC2003/01 A GUI-based Interaction ...


5.2 An Application of Dynamic Programming

5.2.3 Finding the Optimal Solution in Polynomial Time

The analysis of the recursive algorithm reveals that the local costs for each node are calculated up to three times. Also, the graph in figure 42 has a rectangular structure, suggesting that the results of the cost calculations could be cached in a regular matrix. This is precisely what the dynamic programming technique describes (see Fig. 44): a matrix is filled with cumulated minimal costs, and the optimal solution is then reconstructed from the matrix. To make the reconstruction possible, not only the costs must be stored in the matrix, but also the locally optimal decisions.

The respective algorithm is presented in figure 45. The iterative implementation of the MergeGridlines() function has the same interface (input and output) as the recursive variant. However, it runs in polynomial time: the loop body is executed m + n(m + 1) times, where m and n are the numbers of destination and source gridlines in the input. Again, I assume the cost function to be of polynomial time complexity. Thus, the asymptotic running time of the dynamic programming algorithm is Θ(nm), or Θ(n²) if we assume n and m to be of the same order.

In this case, the cost function is called CostPrev(), because it calculates the costs for a decision that has already been made to reach the current state. The algorithm then stores only the decision with minimal cumulated cost and disregards the other ones.
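To make the fill step concrete, here is a minimal Python sketch of such an iterative matrix fill. It is not the report's actual implementation: the three decisions SOURCE, DEST, and IDENTIFY are taken from the cell labels in figure 44, and cost_prev() is a hypothetical stand-in for the CostPrev() function.

```python
import math

def merge_gridlines(source, dest, cost_prev):
    """Fill the DP matrix of cumulated minimal costs and local decisions.

    Sketch only: cost_prev(kind, source, dest, i, j) is an assumed
    signature returning the cost of the decision that led to cell (i, j).
    """
    n, m = len(source), len(dest)
    # cost[i][j]: minimal cumulated cost after consuming i source and
    # j destination gridlines; decision[i][j]: the locally optimal choice.
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    decision = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue  # start cell; the loop body runs m + n(m + 1) times
            best, choice = math.inf, None
            if i > 0:  # previous decision kept a source gridline
                c = cost[i - 1][j] + cost_prev('SOURCE', source, dest, i, j)
                if c < best:
                    best, choice = c, 'SOURCE'
            if j > 0:  # previous decision kept a destination gridline
                c = cost[i][j - 1] + cost_prev('DEST', source, dest, i, j)
                if c < best:
                    best, choice = c, 'DEST'
            if i > 0 and j > 0:  # previous decision identified both gridlines
                c = cost[i - 1][j - 1] + cost_prev('IDENTIFY', source, dest, i, j)
                if c < best:
                    best, choice = c, 'IDENTIFY'
            cost[i][j], decision[i][j] = best, choice
    return cost, decision
```

With this layout, cost[n][m] is the bottom right corner holding the total cost of the optimal path, and the loop body executes (n + 1)(m + 1) − 1 = m + n(m + 1) times, matching the count above.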

Once the matrix is filled, the bottom right corner reflects the total costs of the optimal path, which can now be reconstructed by following the matching decision annotations. The reconstruction of the optimal path is trivial and takes linear time¹⁵, thus not contributing to the overall asymptotic running time of the algorithm.
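The reconstruction can be sketched as a short backtracking loop over the stored decision annotations. Again, this is an illustrative Python sketch under the same assumed SOURCE/DEST/IDENTIFY decision labels, not the report's code.

```python
def reconstruct_path(decision, n, m):
    """Follow the decision annotations from cell (n, m) back to (0, 0).

    Each iteration decreases i, j, or both, so the loop runs at most
    n + m times, i.e. the reconstruction takes linear time.
    """
    path = []
    i, j = n, m
    while i > 0 or j > 0:
        d = decision[i][j]
        path.append(d)
        if d == 'SOURCE':
            i -= 1
        elif d == 'DEST':
            j -= 1
        else:  # 'IDENTIFY' consumes one gridline from each side
            i -= 1
            j -= 1
    path.reverse()  # decisions were collected backwards
    return path
```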

[Figure 44 is a 5×6 cost matrix (source gridlines 1–5 against destination gridlines 1–6): each cell holds the cumulated minimal cost together with its locally optimal decision (SOURCE, DEST, or IDENTIFY), and the cells of the optimal solution path are marked.]

Figure 44: Dynamic programming cost matrix with optimal path reconstruction, comp. Figs. 41 and 42

¹⁵ The asymptotic running time for optimal path reconstruction is O(n + m), because in each run of the while loop either i or j or both are decreased.

