

1. [Col79] pointed out that there were two obvious ways to code the "for subsets T of S" loop: increasing cardinality of T and increasing degree of ∏_{g∈T} g. He showed that, subject to two very plausible conjectures, the average number of products actually formed with the cardinality ordering was O(n^2), thus the average running time would be polynomial in n.
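The two loop orderings can be sketched as follows. This is an illustrative reconstruction, not code from [Col79]; the function names and the toy degree data are invented. Only subsets of size at most half the number of modular factors need be tried, since a subset and its complement give the same split of f.

```python
from itertools import combinations

def subsets_by_cardinality(degrees):
    """Yield candidate index subsets T of S in increasing-cardinality order.
    Only |T| <= len(S)//2 is needed: T and its complement give the same split."""
    n = len(degrees)
    for size in range(1, n // 2 + 1):
        yield from combinations(range(n), size)

def subsets_by_degree(degrees):
    """Yield the same subsets, ordered by deg(prod of the g in T) instead."""
    n = len(degrees)
    subs = [t for size in range(1, n // 2 + 1)
            for t in combinations(range(n), size)]
    return sorted(subs, key=lambda t: sum(degrees[i] for i in t))

# Toy example: degrees of r = 4 modular factors (hypothetical data).
degs = [1, 1, 2, 3]
card_order = list(subsets_by_cardinality(degs))
deg_order = subsets_by_degree(degs)
```

Both orderings enumerate the same 10 subsets here; [Col79]'s point is about how many products ∏_{g∈T} g are formed, on average, before the true factors are found.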

2. [LLJL82] had a completely different approach to algorithm 29. They asked, for each d < n, "given f_1 ∈ S, what is the polynomial g of degree d which divides f over the integers and is divisible by f_1 modulo p^k?". Unfortunately, answering this question needed a k far larger than that implied by the Landau–Mignotte bound, and the complexity, while polynomial in n, was O(n^12), at least while using classical arithmetic. This paper introduced the 'LLL' lattice reduction algorithm, which has many applications in computer algebra and far beyond.
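LLL itself is short enough to sketch. The following is a minimal textbook-style implementation over exact rationals, recomputing the Gram–Schmidt data from scratch at every step (which a production version would never do); it is meant only to show the shape of the algorithm [LLJL82] introduced, not their application of it to factoring.

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def lll(basis, delta=Fraction(3, 4)):
    """Minimal LLL lattice basis reduction, exact rational arithmetic."""
    basis = [[Fraction(x) for x in b] for b in basis]
    n = len(basis)

    def gram_schmidt():
        ortho, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = basis[i][:]
            for j in range(i):
                mu[i][j] = dot(basis[i], ortho[j]) / dot(ortho[j], ortho[j])
                v = [vk - mu[i][j] * ok for vk, ok in zip(v, ortho[j])]
            ortho.append(v)
        return ortho, mu

    ortho, mu = gram_schmidt()
    k = 1
    while k < n:
        # size-reduce b_k against b_{k-1}, ..., b_0
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                basis[k] = [bk - q * bj for bk, bj in zip(basis[k], basis[j])]
                ortho, mu = gram_schmidt()
        # Lovasz condition
        if dot(ortho[k], ortho[k]) >= (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1]):
            k += 1
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            ortho, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in b] for b in basis]

# Classic small example: reduce the basis (1,1,1), (-1,0,2), (3,5,6).
reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

The output basis spans the same lattice (the determinant is preserved up to sign) but consists of much shorter, nearly orthogonal vectors, which is precisely the property the factoring applications exploit.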

3. [ABD85] showed that, by a combination of simple divisibility tests and "early abort" trial division (Proposition 44), it was possible to make dramatic reductions, at the time up to four orders of magnitude, in the constant implied in the statement "exponential in r".
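The flavour of such a divisibility test is easy to illustrate; the following is a toy reconstruction of the idea, not the actual tests of [ABD85]. Before paying for a full trial division of f by a candidate factor g, one checks necessary conditions on single coefficients, e.g. that the trailing coefficient of g divides that of f; the "early abort" part similarly stops a trial division as soon as an intermediate coefficient exceeds the coefficient bound.

```python
def trailing_coeff_test(f_trail, g_trail):
    """Necessary condition: if g divides f over Z (f monic, g(0) != 0),
    then g(0) must divide f(0).  Costs one integer division."""
    return f_trail % g_trail == 0

# Toy example: f = x^4 + x^3 + 2x^2 + x + 1 = (x^2 + 1)(x^2 + x + 1),
# so f(0) = 1.  Hypothetical candidate constant terms produced by a
# recombination step:
f0 = 1
candidates = [3, -2, 1, 5, -1]
survivors = [c for c in candidates if trailing_coeff_test(f0, c)]
# only constant terms dividing 1 survive: [1, -1]
```

Each rejected candidate saves an entire trial division, which is where the observed orders-of-magnitude speedups come from.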

4. [ASZ00] much improved this, and the authors were able to eliminate whole swathes of possible T at one go.

5. [vH02] reduces the problem of finding T to a 'knapsack' problem, which, as in method 2, is solved by LLL, but the lattices involved are much smaller — of dimension r rather than n. At the time of writing, this seems to be the best known method. His paper quoted a polynomial of degree n = 180, with r = 36 factors of degree 5 modulo p = 19, but factoring as two polynomials of degree 90 over the integers. This took 152 seconds to factor.

Open Problem 11 (Evaluate [vH02] against [ASZ00]) How does the factorization algorithm of [vH02] perform on the large factorizations successfully solved by [ASZ00]? Clearly the algorithm of [vH02] is asymptotically faster, but where is the cut-off point? Note also that [vH02]'s example of x^128 − x^112 + x^80 − x^64 + x^48 − x^16 + 1 is in fact a disguised cyclotomic polynomial (as shown by the methods of [BD89]), being

(x^240 − 1)(x^16 − 1) / ((x^80 − 1)(x^48 − 1)) = ∏_{1 ≤ k < 15, gcd(k,15) = 1} (x^16 − e^{2πik/15}).

5.7 Univariate Factoring Solved

We can put together the components we have seen to deduce an algorithm (Figure 5.8) for factoring square-free polynomials over Z[x].
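The disguised-cyclotomic identity quoted in Open Problem 11 can be checked with exact integer polynomial arithmetic: clearing denominators, it says (x^240 − 1)(x^16 − 1) = p(x) · (x^80 − 1)(x^48 − 1), where p(x) = x^128 − x^112 + x^80 − x^64 + x^48 − x^16 + 1. A few lines of Python confirm it:

```python
def mul(a, b):
    """Multiply two sparse polynomials given as {exponent: coefficient}."""
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            out[ea + eb] = out.get(ea + eb, 0) + ca * cb
    return {e: c for e, c in out.items() if c}

def x_to_minus_1(n):
    # the polynomial x^n - 1
    return {n: 1, 0: -1}

# p(x) = x^128 - x^112 + x^80 - x^64 + x^48 - x^16 + 1
p = {128: 1, 112: -1, 80: 1, 64: -1, 48: 1, 16: -1, 0: 1}

lhs = mul(x_to_minus_1(240), x_to_minus_1(16))
rhs = mul(p, mul(x_to_minus_1(80), x_to_minus_1(48)))
print(lhs == rhs)   # True: the identity holds
```

In the substitution y = x^16 this is the standard identity Φ_15(y) = (y^15 − 1)(y − 1) / ((y^5 − 1)(y^3 − 1)), i.e. p(x) = Φ_15(x^16), whose roots are exactly the x with x^16 a primitive 15th root of unity.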
