A Practical Introduction to Data Structures and Algorithm Analysis

... when we want an estimate of the running time or other resource requirements of an algorithm. This simplifies the analysis and keeps us thinking about the most important aspect: the growth rate. This is called asymptotic algorithm analysis. To be precise, asymptotic analysis refers to the study of an algorithm as the input size "gets big" or reaches a limit (in the calculus sense). However, it has proved to be so useful to ignore all constant factors that asymptotic analysis is used for most algorithm comparisons.

It is not always reasonable to ignore the constants. When comparing algorithms meant to run on small values of n, the constant can have a large effect. For example, if the problem is to sort a collection of exactly five records, then an algorithm designed for sorting thousands of records is probably not appropriate, even if its asymptotic analysis indicates good performance. There are rare cases where the constants for two algorithms under comparison can differ by a factor of 1000 or more, making the one with lower growth rate impractical for most purposes due to its large constant. Asymptotic analysis is a form of "back of the envelope" estimation for algorithm resource consumption. It provides a simplified model of the running time or other resource needs of an algorithm. This simplification usually helps you understand the behavior of your algorithms. Just be aware of the limitations to asymptotic analysis in the rare situation where the constant is important.

3.4.1 Upper Bounds

Several terms are used to describe the running-time equation for an algorithm. These terms, and their associated symbols, indicate precisely what aspect of the algorithm's behavior is being described. One is the upper bound for the growth of the algorithm's running time. It indicates the upper or highest growth rate that the algorithm can have.

To make any statement about the upper bound of an algorithm, we must be making it about some class of inputs of size n. We measure this upper bound nearly always on the best-case, average-case, or worst-case inputs. Thus, we cannot say, "this algorithm has an upper bound to its growth rate of n²." We must say something like, "this algorithm has an upper bound to its growth rate of n² in the average case."

Because the phrase "has an upper bound to its growth rate of f(n)" is long and often used when discussing algorithms, we adopt a special notation, called big-Oh notation. If the upper bound for an algorithm's growth rate (for, say, the worst case) is f(n), then we would write that this algorithm is "in the set O(f(n)) in the worst case" (or just "in O(f(n)) in the worst case"). For example, if n² grows as fast as T(n) (the running time of our algorithm) for the worst-case input, then we would say the algorithm is "in O(n²) in the worst case."
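To make this kind of statement concrete, consider a minimal sketch of sequential search (the C++ code below, including the function name sequentialSearch and the assumption of a plain int array, is our own illustration, not code taken from the text). In the worst case the loop must examine every one of the n positions, so the running time is at most cn for some constant c, and the algorithm is in O(n) in the worst case.

    #include <cstddef>

    // Sequential search: return the position of K in A[0..n-1], or -1 if K is absent.
    // Worst case (K is missing or sits in the last slot): the loop body executes
    // n times, so the running time is at most c*n for some constant c. Hence the
    // algorithm is in O(n) in the worst case. In the average case (K equally
    // likely to be in any slot), about n/2 positions are examined, also in O(n).
    int sequentialSearch(const int A[], std::size_t n, int K) {
      for (std::size_t i = 0; i < n; i++)
        if (A[i] == K)
          return static_cast<int>(i);  // found K at position i
      return -1;                       // K does not appear in A
    }

Note that the claim is tied to a class of inputs: saying only "sequential search is in O(n)" without naming the best, average, or worst case would be incomplete in the sense described above.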
