
achieved by applying the linearization technique of Section 10.2 twice on each of the matrices $\tilde{A}_{N-1}(\rho_1,\rho_2)^T$ and $\tilde{A}_{N-2}(\rho_1,\rho_2)^T$. Such an approach is given in Appendix A, including an example that demonstrates the procedure.
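To convey the flavor of such a linearization step, consider the following generic companion-type construction (an illustration only; the exact construction of Section 10.2 and Appendix A may differ): a quadratic dependence on a single parameter $\rho$ is removed by introducing the auxiliary block $u = \rho v$ as an extra unknown,

\[
(A_0 + \rho A_1 + \rho^2 A_2)\, v = 0
\quad\Longleftrightarrow\quad
\left(
\begin{bmatrix} A_0 & A_1 \\ 0 & I \end{bmatrix}
+ \rho
\begin{bmatrix} 0 & A_2 \\ -I & 0 \end{bmatrix}
\right)
\begin{bmatrix} v \\ u \end{bmatrix} = 0 ,
\qquad u = \rho v .
\]

Eliminating higher powers of the parameters in this fashion enlarges the eigenvector with auxiliary blocks, which is consistent with the product structure of the eigenvector $w$ described below.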

By applying the technique of Appendix A to linearize (11.3), we arrive at the equivalent linear two-parameter eigenvalue problem:

\[
\begin{cases}
(B_1 + \rho_1 C_1 + \rho_2 D_1)\, w = 0 \\
(B_2 + \rho_1 C_2 + \rho_2 D_2)\, w = 0
\end{cases}
\tag{11.4}
\]

where $B_i$, $C_i$, $D_i$ ($i = 1, 2$) are large structured matrices of dimension $(N-1)^2\,2^N \times (N-1)^2\,2^N$. The eigenvector $w$ in (11.4) is built up from products of the original eigenvector $v$ and powers of $\rho_1$ and $\rho_2$.
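The dimension count suggests one block of length $2^N$ (the length of $v$) for each of the $(N-1)^2$ monomials $\rho_1^j \rho_2^k$ with $0 \le j,k \le N-2$. As an illustration only (the exact ordering is fixed by the linearization of Appendix A), $w$ may be pictured as

\[
w = \bigl[\, v^T ,\;\; (\rho_1 v)^T ,\;\; (\rho_2 v)^T ,\;\; (\rho_1 \rho_2 v)^T ,\;\; \dots ,\;\; (\rho_1^{N-2}\rho_2^{N-2} v)^T \,\bigr]^T .
\]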

In the literature on solving linear two-parameter eigenvalue problems (see, e.g., [57], [59], [60]), the problem that is commonly addressed has distinct eigenvectors:

\[
\begin{cases}
(X_1 + \rho_1 Y_1 + \rho_2 Z_1)\, w_1 = 0 \\
(X_2 + \rho_1 Y_2 + \rho_2 Z_2)\, w_2 = 0
\end{cases}
\tag{11.5}
\]

Therefore this theory cannot be applied directly to our problem. Moreover, and more importantly, most of this literature deals with iterative solvers that compute only some of the eigenvalues, whereas we need all the eigenvalues of the two-parameter eigenvalue problem (11.3). No literature is available on linear two-parameter eigenvalue problems that involve a common eigenvector $w$.
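For reference, the following is a minimal numerical sketch of the operator-determinant (Atkinson-type) reformulation on which solvers for the distinct-eigenvector problem (11.5) are built; the small random matrices and all variable names are illustrative assumptions and unrelated to the matrices of (11.4).

\begin{verbatim}
import numpy as np
from scipy.linalg import eig

# Small random instance of the distinct-eigenvector problem (11.5);
# purely illustrative data.
rng = np.random.default_rng(0)
n = 3
X1, Y1, Z1, X2, Y2, Z2 = (rng.standard_normal((n, n)) for _ in range(6))

# Operator determinants on the tensor-product space.
Delta0 = np.kron(Y1, Z2) - np.kron(Z1, Y2)
Delta1 = np.kron(Z1, X2) - np.kron(X1, Z2)
Delta2 = np.kron(X1, Y2) - np.kron(Y1, X2)

# For a decomposable eigenvector z = w1 (x) w2 one has
#   Delta1 z = rho1 Delta0 z   and   Delta2 z = rho2 Delta0 z,
# so all (rho1, rho2) pairs follow from two coupled generalized
# eigenvalue problems that share their eigenvectors z.
rho1, Z = eig(Delta1, Delta0)
num, den = Delta2 @ Z, Delta0 @ Z
idx = np.argmax(np.abs(den), axis=0)
cols = np.arange(Z.shape[1])
rho2 = num[idx, cols] / den[idx, cols]

print(np.column_stack([rho1, rho2]))   # the n*n eigenvalue pairs
\end{verbatim}

Such solvers are typically used to compute a few eigenpairs only; as noted above, our setting differs in having a single common eigenvector $w$ and in requiring all eigenvalues.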

11.3 The Kronecker canonical form of a two-parameter matrix pencil<br />

Due to the linearization technique, the first $(N-1)^2 - 1$ block rows capture the structure imposed on the eigenvector $w$, and the last block row represents the equations of the original non-linear two-parameter eigenvalue problem. Therefore, the first $((N-1)^2 - 1)\,2^N$ equations of $(B_1 + \rho_1 C_1 + \rho_2 D_1)\,w = 0$ are identical to the first $((N-1)^2 - 1)\,2^N$ equations represented by $(B_2 + \rho_1 C_2 + \rho_2 D_2)\,w = 0$.

When all the equations represented by (11.4) are written as one matrix-vector product with the eigenvector $w$, this yields:

\[
\begin{bmatrix}
B_1 + \rho_1 C_1 + \rho_2 D_1 \\
B_2 + \rho_1 C_2 + \rho_2 D_2
\end{bmatrix} w = 0 ,
\tag{11.6}
\]

where the involved matrix contains $((N-1)^2 - 1)\,2^N$ duplicate rows. When all duplicates are removed and only the unique rows are retained, an equivalent linear non-square eigenvalue problem arises:

\[
(B + \rho_1 C + \rho_2 D)\, w = 0
\tag{11.7}
\]
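To make the step from (11.6) to (11.7) concrete, the following is a minimal sketch of the row selection, assuming the shared leading rows described above; the helper name and the argument \texttt{n\_shared} (equal to $((N-1)^2-1)\,2^N$ in the notation of this chapter) are hypothetical.

\begin{verbatim}
import numpy as np

def stack_unique_pencil(B1, C1, D1, B2, C2, D2, n_shared):
    """Hypothetical helper: build the non-square pencil (B, C, D) of
    (11.7) by keeping the first pencil of (11.4) in full and appending
    only the rows of the second pencil that lie below its n_shared
    duplicated leading rows."""
    B = np.vstack([B1, B2[n_shared:]])
    C = np.vstack([C1, C2[n_shared:]])
    D = np.vstack([D1, D2[n_shared:]])
    return B, C, D
\end{verbatim}

Under this count, $B$, $C$ and $D$ have $((N-1)^2 + 1)\,2^N$ rows and $(N-1)^2\,2^N$ columns, which is why the eigenvalue problem (11.7) is non-square.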
