
2.5. Experimental Design

For the current theoretical explanations, it may be more convenient to work with the covariance matrix. In practice, however, one might want to avoid having to calculate the inverse of a matrix.

The determinant of $M$ is always zero or positive. If the determinant of $M$ is zero, not all parameters can be estimated. This happens, for instance, when the number of measurements is smaller than the number of parameters, which leads to a Jacobian $J$ whose rank is lower than the number of parameters. Another characteristic of $M$, mentioned here because it may be helpful in understanding the algorithms, is its additive character:

\[ M(\xi_{1+2}) = M(\xi_1) + M(\xi_2) \qquad (2.75) \]

assuming both experiments to be independent, with $\xi$ indicating an experiment, $M(\xi_1)$ the information matrix of experiment 1, $M(\xi_2)$ the information matrix of experiment 2, and $M(\xi_{1+2})$ the information matrix when the measurements of both experiments are considered together (still being independent of each other). More detailed mathematical information on the information matrix and the covariance matrix can be found in textbooks such as Bandemer and Bellmann (1994).
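As a rough illustration of this additive property, the following sketch builds the information matrix in the usual form $M = J^{T} W J$ (with $W$ the inverse of the measurement covariance) for two independent experiments and for their combination. The Jacobians, measurement errors, and dimensions used here are purely hypothetical.

```python
import numpy as np

# Minimal sketch: the information matrix M = J^T W J for one experiment,
# with W the inverse of the (here diagonal) measurement covariance.
def information_matrix(J, sigma):
    """Information matrix for measurement standard deviations sigma."""
    W = np.diag(1.0 / sigma**2)          # inverse measurement covariance
    return J.T @ W @ J

rng = np.random.default_rng(0)
J1 = rng.normal(size=(5, 3))             # hypothetical: 5 measurements, 3 parameters
J2 = rng.normal(size=(4, 3))             # a second, independent experiment
s1 = np.full(5, 0.1)
s2 = np.full(4, 0.2)

M1 = information_matrix(J1, s1)
M2 = information_matrix(J2, s2)

# Joint experiment: stack the measurements of both experiments.
M12 = information_matrix(np.vstack([J1, J2]), np.concatenate([s1, s2]))

print(np.allclose(M12, M1 + M2))         # True: additivity as in (2.75)
```

The check prints `True` because stacking the measurements of independent experiments simply sums their contributions to $M$, which is exactly the statement of equation 2.75.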

In order to understand the experimental design criteria for parameter estimation, it can be very helpful to try to visualize the meaning of the covariance matrix of the parameters.

Assuming normally distributed measurement errors, the probability density function for a set of parameters $\theta$ close to the estimated parameters $\hat\theta$ can be formulated as

\[ p(\theta) = (2\pi)^{-r/2}\,\bigl(\det \mathrm{Cov}_{\hat\theta}\bigr)^{-1/2} \exp\Bigl(-\tfrac{1}{2}\,(\hat\theta - \theta)^{T}\,\mathrm{Cov}_{\hat\theta}^{-1}\,(\hat\theta - \theta)\Bigr) \qquad (2.76) \]

with $r$ being the number of parameters. This probability density function is constant when

\[ (\hat\theta - \theta)^{T}\,\mathrm{Cov}_{\hat\theta}^{-1}\,(\hat\theta - \theta) = \text{constant}. \qquad (2.77) \]

Equation 2.77 describes the surface of an $r$-dimensional ellipsoid; the parameter sets inside this ellipsoid all have a higher probability density than the sets outside. For the simple case with only two parameters ($r = 2$), the ellipsoid becomes an ellipse which can easily be visualized as in figure 2.4.
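To make equation 2.77 concrete for the two-parameter case, the following sketch computes points on such a confidence ellipse from a $2 \times 2$ covariance matrix; the covariance values, parameter estimates, and the confidence level used here are chosen purely for illustration.

```python
import numpy as np

# Sketch: points on the ellipse (theta_hat - theta)^T Cov^{-1} (theta_hat - theta) = c
# for two parameters, cf. (2.77). The numbers below are invented for illustration.
cov = np.array([[0.04, 0.03],
                [0.03, 0.09]])            # hypothetical Cov of the two estimates
theta_hat = np.array([1.2, 0.8])          # hypothetical parameter estimates
c = 5.99                                  # ~95% level for 2 parameters (chi-square, 2 dof)

# Eigendecomposition gives the ellipse axes: directions from the eigenvectors,
# axis lengths sqrt(c * eigenvalue).
eigval, eigvec = np.linalg.eigh(cov)
angles = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.stack([np.cos(angles), np.sin(angles)])      # unit circle
ellipse = theta_hat[:, None] + eigvec @ (np.sqrt(c * eigval)[:, None] * circle)

# Check: every point satisfies the quadratic form of (2.77) up to rounding.
d = ellipse - theta_hat[:, None]
print(np.allclose(np.einsum('ij,jk,ki->i', d.T, np.linalg.inv(cov), d), c))
```

The eigenvectors of the covariance matrix give the directions of the ellipse axes, and the corresponding eigenvalues, scaled by the chosen constant, give the squared axis lengths.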

The more accurate the parameter estimates are, the smaller this ellipsoid will be. Several criteria have been suggested which aim at minimizing different aspects of the size of the ellipsoid; some of them are mentioned here. More details can be found in reviews and textbooks such as Atkinson (1982), Bandemer and Bellmann (1994), and Walter and Pronzato (1990).

An often used criterion for experimental designs aimed at parameter estimation is the minimization of the expected volume of this ellipsoid. Up to a constant factor, this volume equals $\sqrt{\det \mathrm{Cov}_{\hat\theta}}$. Experimental designs which aim at minimizing this quantity are commonly referred to as D-optimal designs.
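As a small sketch of this volume-based criterion, the snippet below compares $\sqrt{\det \mathrm{Cov}_{\hat\theta}}$ for two hypothetical sampling-time designs of a simple exponential-decay model. The model, nominal parameter values, measurement error, and candidate sampling times are all invented for illustration.

```python
import numpy as np

# Sketch: for a design with Jacobian J and measurement covariance sigma^2 I,
# the parameter covariance is Cov = sigma^2 (J^T J)^{-1}, and the ellipsoid
# volume is proportional to sqrt(det Cov).
def ellipsoid_volume_factor(J, sigma=0.1):
    cov = sigma**2 * np.linalg.inv(J.T @ J)
    return np.sqrt(np.linalg.det(cov))

# Hypothetical model y(t) = theta1 * exp(-theta2 * t): the Jacobian columns are
# dy/dtheta1 and dy/dtheta2, evaluated at nominal parameters theta = (1.0, 0.5).
def jacobian(times, theta=(1.0, 0.5)):
    t = np.asarray(times, dtype=float)
    e = np.exp(-theta[1] * t)
    return np.column_stack([e, -theta[0] * t * e])

design_a = [0.1, 0.2, 0.3, 0.4]          # all samples taken early
design_b = [0.5, 1.0, 2.0, 4.0]          # samples spread over the decay

for name, d in [("A", design_a), ("B", design_b)]:
    print(name, ellipsoid_volume_factor(jacobian(d)))
# The design with the smaller value yields the smaller confidence ellipsoid
# and would be preferred under this criterion.
```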

