A Practical Introduction to Data Structures and Algorithm Analysis

Even for small arrays, optimized Quicksort performs well because it does one partition step before calling Insertion Sort. Compared to the other O(n log n) sorts, unoptimized Heapsort is quite slow due to the overhead of the class structure. When all of this is stripped away and the algorithm is implemented to manipulate an array directly, it is still somewhat slower than Mergesort. In general, optimizing the various algorithms makes a noticeable improvement for larger array sizes.

Overall, Radix Sort is a surprisingly poor performer. If the code had been tuned to use bit shifting of the key value, it would likely improve substantially; but this would seriously limit the range of element types that the sort could support. (A sketch of such a tuning appears at the end of this section.)

7.9 Lower Bounds for Sorting

This book contains many analyses for algorithms. These analyses generally define the upper and lower bounds for algorithms in their worst and average cases. For most of the algorithms presented so far, analysis is easy. This section considers a more difficult task: an analysis for the cost of a problem as opposed to an algorithm. The upper bound for a problem can be defined as the asymptotic cost of the fastest known algorithm that solves it. The lower bound defines the best possible efficiency for any algorithm that solves the problem, including algorithms not yet invented. Once the upper and lower bounds for the problem meet, we know that no future algorithm can possibly be (asymptotically) more efficient.

A simple estimate for a problem's lower bound can be obtained by measuring the size of the input that must be read and the output that must be written. Certainly no algorithm can be more efficient than the problem's I/O time. From this we see that the sorting problem cannot be solved by any algorithm in less than Ω(n) time, because it takes at least n steps to read and write the n values to be sorted. Based on our current knowledge of sorting algorithms and the size of the input, we know that the problem of sorting is bounded below by Ω(n) and above by O(n log n).

Computer scientists have spent much time devising efficient general-purpose sorting algorithms, but no one has ever found one that is faster than O(n log n) in the worst or average cases. Should we keep searching for a faster sorting algorithm? Or can we prove that there is no faster sorting algorithm by finding a tighter lower bound?

This section presents one of the most important and most useful proofs in computer science: no sorting algorithm based on key comparisons can possibly be faster than Ω(n log n) in the worst case. This proof is important for three reasons. First, knowing that widely used sorting algorithms are asymptotically optimal is reassuring. In particular, it means that you need not bang your head against the wall searching for an O(n) sorting algorithm (or at least not one in any way based on key comparisons).
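
To preview where a tighter lower bound will come from, here is a sketch in LaTeX of the standard counting argument that the rest of the section develops in full: model any comparison-based sort as a binary decision tree, where each internal node is one key comparison and each leaf is one possible ordering of the input. The worst-case number of comparisons is then the depth of this tree.

% Sketch of the standard decision-tree counting argument.
% A comparison sort must distinguish all n! input permutations,
% so its decision tree needs at least n! leaves; a binary tree
% of depth d has at most 2^d leaves.
\[
2^d \ge n! \quad\Longrightarrow\quad d \ge \log_2 n!
\]
% Since n! \ge (n/2)^{n/2}, taking logarithms gives
\[
d \ge \log_2 n! \ge \frac{n}{2}\log_2\frac{n}{2} = \Omega(n \log n).
\]

Hence some input forces any comparison-based sorting algorithm to perform Ω(n log n) comparisons.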
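
Finally, returning to the Radix Sort tuning mentioned at the start of this section: the following is a minimal sketch (hypothetical code, not the implementation measured above), assuming non-negative 32-bit int keys and a radix of 256, of how shift-and-mask digit extraction replaces the division and modulus operations. It also makes the trade-off concrete: this version can sort only integer keys.

// Hypothetical sketch of a bit-shifting Radix Sort, radix 256.
// Each 8-bit "digit" is extracted with a shift and a mask rather
// than division and modulus. Only non-negative int keys are
// supported, which is exactly the loss of generality noted above.
public class BinaryRadixSort {
    static void radixSort(int[] a) {
        int[] src = a, dst = new int[a.length];
        for (int shift = 0; shift < 32; shift += 8) {   // four 8-bit digits
            int[] count = new int[256];
            for (int v : src)                           // histogram of this digit
                count[(v >>> shift) & 0xFF]++;
            for (int i = 1; i < 256; i++)               // prefix sums mark bucket ends
                count[i] += count[i - 1];
            for (int i = src.length - 1; i >= 0; i--)   // stable back-to-front placement
                dst[--count[(src[i] >>> shift) & 0xFF]] = src[i];
            int[] tmp = src; src = dst; dst = tmp;      // swap buffers for next pass
        }
        // After an even number of passes, the sorted values are back in a.
    }

    public static void main(String[] args) {
        int[] a = {170, 45, 75, 802, 24, 2, 66, 90};
        radixSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [2, 24, 45, 66, 75, 90, 170, 802]
    }
}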
