Data Structures and Algorithms in Java[1].pdf - Fulvio Frisone

should the constant factors they "hide" be very large. For example, while it is true that the function 10^100·n is O(n), if this is the running time of an algorithm being compared to one whose running time is 10n log n, we should prefer the O(n log n)-time algorithm, even though the linear-time algorithm is asymptotically faster. This preference is because the constant factor, 10^100, which is called "one googol," is believed by many astronomers to be an upper bound on the number of atoms in the observable universe. So we are unlikely to ever have a real-world problem that has this number as its input size. Thus, even when using the big-Oh notation, we should at least be somewhat mindful of the constant factors and lower-order terms we are "hiding."
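To see just how lopsided this comparison is, we can sketch it directly; the following class and variable names are our own, and n = 10^9 is an arbitrary (already very large) input size chosen for illustration:

```java
import java.math.BigInteger;

public class ConstantFactors {
    public static void main(String[] args) {
        // Hypothetical input size of one billion elements.
        long n = 1_000_000_000L;
        // 10^100 * n: "linear" running time with a googol-sized constant factor.
        BigInteger googolLinear = BigInteger.TEN.pow(100)
                .multiply(BigInteger.valueOf(n));
        // 10 * n * floor(log2(n)): the O(n log n) competitor.
        BigInteger nLogN = BigInteger.valueOf(10)
                .multiply(BigInteger.valueOf(n))
                .multiply(BigInteger.valueOf((long) (Math.log(n) / Math.log(2))));
        // The O(n log n) algorithm wins by an astronomical margin.
        System.out.println(googolLinear.compareTo(nLogN) > 0); // prints true
    }
}
```

Here googolLinear is roughly 10^109, while nLogN is under 10^12, so the "asymptotically slower" algorithm is faster by nearly a hundred orders of magnitude at any realistic input size.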

The observation above raises the issue of what constitutes a "fast" algorithm. Generally speaking, any algorithm running in O(n log n) time (with a reasonable constant factor) should be considered efficient. Even an O(n^2)-time method may be fast enough in some contexts, that is, when n is small. But an algorithm running in O(2^n) time should almost never be considered efficient.

Exponential Running Times

There is a famous story about the inventor of the game of chess. He asked only that his king pay him 1 grain of rice for the first square on the board, 2 grains for the second, 4 grains for the third, 8 for the fourth, and so on. It is an interesting test of programming skills to write a program to compute exactly the number of grains of rice the king would have to pay. In fact, any Java program written to compute this number in a single integer value will cause an integer overflow to occur (although the run-time machine will probably not complain). To represent this number exactly as an integer requires using a BigInteger class.
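The total over the 64 squares is 1 + 2 + 4 + … + 2^63 = 2^64 − 1 grains, which exceeds the largest long value (2^63 − 1). A minimal sketch of the computation using java.math.BigInteger (class and variable names are our own):

```java
import java.math.BigInteger;

public class ChessRice {
    public static void main(String[] args) {
        // Sum 1 + 2 + 4 + ... + 2^63, one term per square of the board.
        BigInteger total = BigInteger.ZERO;
        BigInteger grains = BigInteger.ONE; // grains on the current square
        for (int square = 1; square <= 64; square++) {
            total = total.add(grains);
            grains = grains.shiftLeft(1); // double for the next square
        }
        // The sum is 2^64 - 1, too large for any 64-bit primitive type.
        System.out.println(total); // prints 18446744073709551615
    }
}
```

Since BigInteger values are immutable, each add and shiftLeft returns a new object rather than modifying the old one, which is why the results are reassigned.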

If we must draw a line between efficient and inefficient algorithms, therefore, it is natural to make this distinction be that between those algorithms running in polynomial time and those running in exponential time. That is, make the distinction between algorithms with a running time that is O(n^c), for some constant c > 1, and those with a running time that is O(b^n), for some constant b > 1. Like so many notions we have discussed in this section, this too should be taken with a "grain of salt," for an algorithm running in O(n^100) time should probably not be considered "efficient." Even so, the distinction between polynomial-time and exponential-time algorithms is considered a robust measure of tractability.
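Even a degree-100 polynomial is eventually overtaken by an exponential. As a small illustration of our own (not from the text), BigInteger can confirm that 2^n already exceeds n^100 at n = 1000, since 2^1000 = 1024^100 > 1000^100:

```java
import java.math.BigInteger;

public class PolyVsExp {
    public static void main(String[] args) {
        int n = 1000;
        BigInteger poly = BigInteger.valueOf(n).pow(100); // n^100
        BigInteger exp = BigInteger.valueOf(2).pow(n);    // 2^n
        // 2^1000 = (2^10)^100 = 1024^100, which beats 1000^100.
        System.out.println(exp.compareTo(poly) > 0); // prints true
    }
}
```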

To summarize, the asymptotic notations of big-Oh, big-Omega, and big-Theta provide a convenient language for us to analyze data structures and algorithms. As mentioned earlier, these notations provide convenience because they let us concentrate on the "big picture" rather than low-level details.

Two Examples of Asymptotic Algorithm Analysis

