
Definition 18 (Following [BFSS06]) We say that an algorithm producing output of size N is optimal if its running time is O(N), and almost optimal if its running time is Õ(N).

Addition of numbers is therefore optimal, and multiplication almost optimal. However, the reader should be warned that Õ expressions are often far from the reality experienced in computer algebra, where data are small enough that the limiting processes in equations (1.8) and (1.9) have not really taken hold (see note 4), or are quantised (in practice integer lengths are measured in words, not bits, for example).
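As a concrete illustration (not spelled out in the text, and assuming the common convention that Õ(N) stands for O(N log^k N) for some fixed k, with the Schönhage–Strassen bound taken as the multiplication cost):

\[
\underbrace{O(N)}_{\text{addition}} \subseteq \tilde{O}(N),
\qquad
\underbrace{O(N \log N \,\log\log N)}_{\text{multiplication}} \subseteq \tilde{O}(N)\ \text{but}\ \not\subseteq O(N),
\]

so addition is optimal, while such a multiplication algorithm is only almost optimal.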

When it comes to measuring the intrinsic difficulty of a problem, rather than the efficiency of a particular algorithm, we need lower bounds rather than the upper bounds implied in (1.8) and (1.9).

Notation 7 (Lower bounds) Consider a problem P, and a given encoding, e.g. “dense polynomials (Definition 22) with coefficients in binary”. Let N be the size of a problem instance, and C a particular computing paradigm and way of counting operations. If we can prove that there is a c such that any algorithm solving this problem must take at least cf(N) operations on at least one problem instance of size N, then we say that this problem has cost at least of the order of f(N), written

P_C = Ω(f(N)) or loosely P = Ω(f(N)). (1.10)

Again “=” really ought to be “∈”.

In some instances (sorting is one of these) we can match upper and lower bounds.

Notation 8 (Θ) If P_C = Ω(f(N)) and P_C = O(f(N)), then we say that P_C is of order exactly f(N), and write P_C = Θ(f(N)).

For example, if C is the paradigm in which we only count comparison operations, sorting N objects is Θ(N log N).
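A sketch of the standard counting argument behind the lower bound (not given in the text): a comparison-based sort of N distinct objects must distinguish all N! possible orderings, so its decision tree has at least N! leaves and hence depth at least log_2 N!. Then

\[
\log_2 N! \;=\; \sum_{k=1}^{N} \log_2 k \;\ge\; \sum_{k=\lceil N/2\rceil}^{N} \log_2 \frac{N}{2} \;\ge\; \frac{N}{2}\log_2\frac{N}{2} \;=\; \Omega(N\log N),
\]

which matches the O(N log N) comparisons used by, say, merge sort, giving Θ(N log N).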

Notation 9 (Further Abuse) We will sometimes abuse these notations further, and write, say, f(N) = 2^{O(N)}, which can be understood as either of the equivalent forms log_2 f(N) = O(N) or ∃C ∈ R, M : ∀N > M, f(N) < 2^{CN}.

Note that 2^{O(N)} and O(2^N) are very different things. 4^N = 2^{2N} is 2^{O(N)} but not O(2^N).

1.5 Some Maple<br />

1.5.1 The RootOf construct<br />

Note that Maple indexes the result of RootOf according to the rules at http://www.maplesoft.com/support/help/Maple/view.aspx?path=RootOf/indexed.
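By way of illustration, a minimal sketch of the indexed form (not taken from the text; the values indicated in the comments are what the indexing rules on that page would suggest for this particular polynomial, and should be checked in a current Maple):

> alpha := RootOf(_Z^2 - 2, index = 1):  # pick out one specific root of z^2 - 2
> evalf(alpha);                          # expected: 1.414213562, i.e. the positive root
> evalf(RootOf(_Z^2 - 2, index = 2));    # expected: -1.414213562, the other root
> allvalues(RootOf(_Z^2 - 2));           # the un-indexed RootOf stands for all the roots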
