

…interpreted as the $x_i$-values of the zeros of the ideal $I$. A point $(\lambda_1,\ldots,\lambda_n) \in \mathbb{C}^n$ of eigenvalues of $A_{x_1}^T,\ldots,A_{x_n}^T$, respectively, corresponding to a common (non-zero) eigenvector $v \in \mathbb{C}^N$ is also called a multi-eigenvalue of the $n$-tuple of commuting matrices $(A_{x_1}^T,\ldots,A_{x_n}^T)$.

The following theorem characterizes the variety $V(I)$:

Theorem 3.4. $V(I) = \{(\lambda_1,\ldots,\lambda_n) \in \mathbb{C}^n : \exists v \in \mathbb{C}^N \setminus \{0\}\ \forall i: A_{x_i}^T \cdot v = \lambda_i \cdot v\}$.

Proof. See [48] for the proof of this theorem. □
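To make the statement of Theorem 3.4 concrete, the following small numerical sketch (not part of the thesis) constructs the transposed multiplication matrices for an assumed toy system, $x^2 - 2 = 0$, $y^2 - 3 = 0$, with monomial basis $B = (1, x, y, xy)$ of the quotient ring (so $N = 4$), and checks that the basis evaluated at each of the four zeros is a common eigenvector carrying the corresponding multi-eigenvalue. The system, the basis, and the matrix conventions are illustrative assumptions.

import numpy as np
from itertools import product

# Toy zero-dimensional system (an illustrative assumption, not from the thesis):
#     x^2 - 2 = 0,   y^2 - 3 = 0
# Monomial basis of the quotient ring: B = (1, x, y, x*y), hence N = 4.
# Column j of A_x (resp. A_y) holds the coordinates of x*B[j] (resp. y*B[j])
# reduced modulo the ideal, i.e. A_x and A_y represent multiplication by x and y.
A_x = np.array([[0., 2., 0., 0.],
                [1., 0., 0., 0.],
                [0., 0., 0., 2.],
                [0., 0., 1., 0.]])
A_y = np.array([[0., 0., 3., 0.],
                [0., 0., 0., 3.],
                [1., 0., 0., 0.],
                [0., 1., 0., 0.]])

# Multiplication operators on the (commutative) quotient ring commute.
assert np.allclose(A_x @ A_y, A_y @ A_x)

# For every zero p = (px, py), the basis B evaluated at p is a common
# eigenvector of A_x^T and A_y^T with multi-eigenvalue (px, py), as in Theorem 3.4.
for px, py in product([np.sqrt(2), -np.sqrt(2)], [np.sqrt(3), -np.sqrt(3)]):
    v = np.array([1.0, px, py, px * py])   # B = (1, x, y, xy) evaluated at p
    assert np.allclose(A_x.T @ v, px * v)
    assert np.allclose(A_y.T @ v, py * v)
print("all four zeros (+/-sqrt(2), +/-sqrt(3)) verified as multi-eigenvalues")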

The eigenvectors $v$ of the matrices $A_{x_1}^T,\ldots,A_{x_n}^T$ are highly structured. If all the eigenvalues of these matrices have algebraic multiplicity one, then it is well known that $A_{x_i}^T = U \Lambda L^T$, where $L^T U = I$ and $U$ is a generalized Vandermonde matrix. The rows of $U^T$ consist of the ordered basis $B$ evaluated at the points $x = v(k) \in \mathbb{C}^n$, $k = 1,\ldots,N$, where $v(1), v(2),\ldots,v(N)$ are the complex solutions to the system of equations (3.19). Furthermore, $\Lambda = \operatorname{diag}(\lambda_1,\ldots,\lambda_N)$ is a diagonal matrix with $\lambda_k = r(v(k))$, $k = 1,2,\ldots,N$. The matrix $L$ has the property that its $k$-th column consists of the coefficients $l_{jk}$ of the Lagrange interpolation polynomials $l_k(x_1,\ldots,x_n) = \sum_{j=1}^{N} l_{j,k}\, b(j)$, where $b(j)$ denotes the $j$-th basis element in $B$, with the basic interpolation property that $l_k(v(k)) = 1$ and $l_k(v(j)) = 0$ for all $j \neq k$. In linear algebra terms this says that $L^T = U^{-1}$. The fact that the eigenvectors of $A_{x_i}^T$ are columns of the generalized Vandermonde matrix $U$ was noted by Stetter, which is the reason that eigenvectors of this form are sometimes called Stetter vectors [81]. The structure of these vectors can be used to read off the point $v(k)$ from an eigenvector of a single matrix $A_{x_i}^T$ directly, without having to apply the eigenvectors to all the operators $A_{x_1}^T,\ldots,A_{x_n}^T$ to obtain the corresponding eigenvalues. This is because the monomials $x_i$, $i = 1,\ldots,n$, are all in the basis $B$, hence their values appear in the Stetter vector (see also [20]).
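The following sketch (again on the assumed toy system $x^2 - 2 = 0$, $y^2 - 3 = 0$; not taken from the thesis) illustrates how a zero can be read off directly from a Stetter vector: after normalizing the entry belonging to the monomial $1$, the entries belonging to the monomials $x$ and $y$ are the coordinates of the zero. Because in this particular toy system each single matrix $A_{x_i}^T$ has eigenvalues of algebraic multiplicity two, the sketch first forms a random linear combination of the transposed multiplication matrices so that the eigenvalues become simple; this generic-combination step is a standard workaround and is not part of the argument above.

import numpy as np

# Same assumed toy system as in the previous sketch: x^2 = 2, y^2 = 3,
# with basis B = (1, x, y, x*y) and multiplication matrices A_x, A_y.
A_x = np.array([[0., 2., 0., 0.], [1., 0., 0., 0.],
                [0., 0., 0., 2.], [0., 0., 1., 0.]])
A_y = np.array([[0., 0., 3., 0.], [0., 0., 0., 3.],
                [1., 0., 0., 0.], [0., 1., 0., 0.]])

# A generic (random) combination of the commuting matrices has simple
# eigenvalues, so its eigenvectors are the Stetter vectors, up to scale.
rng = np.random.default_rng(1)
c = rng.standard_normal(2)
M = c[0] * A_x.T + c[1] * A_y.T

_, vecs = np.linalg.eig(M)
for v in vecs.T:            # each column of vecs is a Stetter vector up to scale
    v = v / v[0]            # normalize the entry belonging to the monomial 1
    # Entries 1 and 2 belong to the monomials x and y, so they give the
    # coordinates of the zero; all zeros of this toy system are real, so any
    # imaginary parts are numerical round-off.
    print(round(float(v[1].real), 6), round(float(v[2].real), 6))
# Expected output (up to ordering): the four points (+/-1.414214, +/-1.732051).

The essential point of the sketch is that no further eigenvalue computation is needed once an eigenvector is available: the coordinates of the zero appear as entries of the eigenvector itself.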

The method of rewriting the problem of finding the solutions of a set of polynomial equations into an eigenvalue problem for a set of commuting matrices is called the Stetter-Möller matrix method; it can only be applied to systems of polynomial equations which generate a zero-dimensional ideal. This method is described in [81], and a similar approach can be found in [50] and [48], in which the matrix method was derived independently of the work of Stetter and Möller. For further background on the constructive algebra and systems theory aspects of this approach, see also [47].

3.4 Counting real solutions<br />

For some specific applications only the real solutions are of importance. It can therefore be convenient to have information about the number of real and complex solutions of an ideal $I$ generated by polynomials in $\mathbb{R}[x_1,\ldots,x_n]$, given as a system of equations in Gröbner basis form. The numbers of real and complex solutions can be computed by
