
3.3. NONLINEAR MULTIVARIATE EQUATIONS: DISTRIBUTED

$x_{k+1}, \ldots, x_n$ in a Gröbner base computed with such an order are all that can be deduced about these variables. It is common, but by no means required, to use tdeg for both $>'$ and $>''$. Note that this is not the same as simply using tdeg, since the exponents of $x_{k+1}, \ldots, x_n$ are not considered unless $x_1, \ldots, x_k$ gives a tie.
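The elimination behaviour just described — the trailing exponents are consulted only when the leading block ties — can be sketched as follows. This is a minimal illustration, not from the text: all names are my own, and it assumes tdeg means total degree with a reverse-lexicographic tie-break, as in Maple.

```python
# Sketch of a k-elimination ("product") order built from tdeg on each block:
# compare the exponents of x_1..x_k first; x_{k+1}..x_n only break ties.

def tdeg_key(exps):
    # Total degree first; ties broken reverse-lexicographically
    # (negated exponents read from the last variable backwards).
    return (sum(exps), tuple(-e for e in reversed(exps)))

def block_key(exps, k):
    # The second block is consulted only when the first block ties.
    return (tdeg_key(exps[:k]), tdeg_key(exps[k:]))

# n = 3, k = 2: exponent vectors for y*z^5 and x.
a, b = (0, 1, 5), (1, 0, 0)

# Plain tdeg ranks a higher (total degree 6 versus 1) ...
assert tdeg_key(a) > tdeg_key(b)
# ... but the 2-elimination order is decided by x and y alone, so b wins.
assert block_key(a, 2) < block_key(b, 2)
```

This illustrates why the block order is not the same as simply using tdeg on all $n$ variables: the exponent of $z$ never enters the comparison unless the $x, y$ parts tie.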

weighted orderings Here we compute the total degree with a weighting factor, e.g. we may weight $x$ twice as much as $y$, so that the total degree of $x^i y^j$ would be $2i + j$. This can come in lexicographic or reverse lexicographic variants.
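As a minimal illustration (the function name is my own), the weighted total degree above, with $x$ weighted twice as much as $y$, can be computed as:

```python
# Weighted total degree of x^i y^j with weights (2, 1): 2i + j.
def wdeg(i, j, weights=(2, 1)):
    return weights[0] * i + weights[1] * j

assert wdeg(1, 0) == 2   # x has weighted degree 2
assert wdeg(0, 2) == 2   # y^2 ties with x under these weights
assert wdeg(3, 1) == 7   # x^3 y -> 2*3 + 1
# A tie such as x versus y^2 is then broken lexicographically or
# reverse-lexicographically, giving the two variants mentioned above.
```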

matrix orderings These are in fact the most general form of orderings [Rob85]. Let $M$ be a fixed $n \times n$ matrix of reals, and regard the exponents of $A$ as an $n$-vector $\mathbf{a}$. Then we compare $A$ and $B$ by computing the two vectors $M\mathbf{a}$ and $M\mathbf{b}$, and comparing these lexicographically.

lexicographic $M$ is the identity matrix.

grlex \[ M = \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & \ddots & 0 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & \cdots & 0 & 1 & 0 \end{pmatrix}. \]

tdeg It would be tempting to say, by analogy with grlex, that the matrix is \[ \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ 0 & 0 & \cdots & 0 & 1 \\ 0 & \cdots & 0 & 1 & 0 \\ \vdots & & & & \vdots \\ 0 & 1 & 0 & \cdots & 0 \end{pmatrix}. \] However, this is actually grlex with the variable order reversed, not genuine reverse lexicographic. To get that, we need the matrix \[ \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ -1 & 0 & \cdots & 0 & 0 \\ 0 & \ddots & 0 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & \cdots & 0 & -1 & 0 \end{pmatrix}, \] or, if we are adopting the Maple convention of reversing the variables as well, \[ \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ 0 & 0 & \cdots & 0 & -1 \\ 0 & \cdots & 0 & -1 & 0 \\ \vdots & & & & \vdots \\ 0 & -1 & 0 & \cdots & 0 \end{pmatrix}. \]

k-elimination If the matrices are $M_k$ for $>'$ and $M_{n-k}$ for $>''$, then \[ M = \begin{pmatrix} M_k & 0 \\ 0 & M_{n-k} \end{pmatrix}. \]

weighted orderings Here the first row of $M$ corresponds to the weights, instead of being uniformly 1.
