State Estimation Analysed as Inverse Problem

s.t.
\[
\dot{x} - Ax - w = u, \qquad y - Cx = 0
\]
is well-posed, i.e.
\[
\|x - x^\delta\|_{H^1} \le c\,\|z - z^\delta\|_{(H^1)'}
\]
and, consequently, with a generic constant $c$,
\[
\max\{|x_0 - x_0^\delta|,\ \|w - w^\delta\|_{L^2}\} \le c\,\|z - z^\delta\|_{L^2}
\quad\text{and}\quad
\|x - x^\delta\|_{C^0} \le c\,\|z - z^\delta\|_{L^2} \le c\,\|z - z^\delta\|_{L^\infty}.
\]

4 Conclusions and Questions Concerning the Appropriate Norms

Observability, like well-posedness, is a qualitative property; condition numbers quantify the error propagation. We used this concept to define an observability measure. For linear systems we derived the use of the inverse of the observability Gramian $G$: with $G^{-1}$ one can estimate the minimal difference in the outputs that can be seen for given different initial data. We also showed that regularization of the initial data is not necessary for stability, hence the least squares formulation for state estimation is well-posed; regularization of the initial data would introduce bias. However, a low observability measure leads to an ill-conditioned least squares problem.

Introducing model error functions linearly, we leave the finite-dimensional setting. For this case we stated the first order necessary conditions and reduced them by several variables and equations. Analysing them for linear systems, we showed that the least squares formulation without regularization of the initial data is well-posed not only with respect to the $L^2$-norm but also with respect to the $L^\infty$-norm. However, the error propagation with respect to $L^2$-errors may be large for low observability measures, independent of the regularization parameter for the model errors. For $L^\infty$-errors the behaviour is different: in the case of one state only, we obtain bounds independent of the stiffness of the state equations.

As seen, it is fundamental to discuss in which norms the data errors are bounded and which output is of interest. In my opinion one does not only have the $L^2$-norm of the data error bounded; one can assume $\|z^\delta\|_{L^2(t_0,t_0+H)} \le \delta\sqrt{H}$ and $\|z^\delta\|_{\infty} \le c$. Hence one has additional information about the error which should be taken into account, and the error would depend on the length of the horizon. The question concerning the outputs depends on the application of state estimation; which output is of interest should be stated together with the problem formulation. For example, if state estimation is employed to obtain the current state required for controlling a process, the focus should be on $x(t_0 + H)$, the filtered state. Then the $L^2$-norm of the state on $[t_0, t_0 + H]$ is less adequate than measuring the error of $x(t_0 + H)$. If one is interested in the state over the whole horizon, also called the smoothed state, one may consider a weighted $L^2$-norm that puts more weight on the current state than on the past states. Or is the $L^\infty$-norm over the whole horizon more adequate than the $L^2$-norm? The answers to these questions do not only influence the theoretical analysis but should also affect the numerical studies. In particular, as soon as adaptivity of the underlying discretization grid is introduced, it is of greatest importance to know which error shall finally be small in order to obtain the greatest efficiency.
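The conclusions refer to the observability Gramian $G$ and its inverse as a quantitative observability measure. The sketch below illustrates that relation numerically, assuming $G$ is the standard observability Gramian $\int_0^H e^{A^\top t} C^\top C\, e^{At}\,dt$ over the horizon; the matrices A, C and the horizon H are invented for illustration and are not taken from the chapter.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

# Toy linear system x' = A x (+ inputs), y = C x; A, C, H are illustrative only.
A = np.array([[0.0, 1.0],
              [-2.0, -0.1]])
C = np.array([[1.0, 0.0]])      # only the first state is measured
H = 2.0                         # horizon length

# Observability Gramian G = int_0^H exp(A^T t) C^T C exp(A t) dt,
# approximated by the trapezoidal rule on a fine grid.
ts = np.linspace(0.0, H, 401)
integrand = np.array([expm(A.T * t) @ C.T @ C @ expm(A * t) for t in ts])
G = trapezoid(integrand, ts, axis=0)

# For two trajectories differing only in the initial state by dx0, the output
# difference satisfies ||dy||_{L2(0,H)}^2 = dx0^T G dx0 >= lambda_min(G)|dx0|^2,
# so lambda_min(G) (equivalently 1/||G^{-1}||) bounds the minimal visible output
# difference, and a large condition number signals an ill-conditioned problem.
lam_min = np.linalg.eigvalsh(G).min()
print("lambda_min(G)      :", lam_min)
print("condition number(G):", np.linalg.cond(G))

# Numerical check of the lower bound for one particular dx0.
dx0 = np.array([0.0, 1.0])
dy = np.array([C @ expm(A * t) @ dx0 for t in ts]).ravel()
print("||dy||_L2          :", np.sqrt(trapezoid(dy**2, ts)))
print("sqrt(lam_min)*|dx0|:", np.sqrt(lam_min) * np.linalg.norm(dx0))
```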
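The claim that regularizing the initial data introduces bias, while the unregularized least squares formulation remains usable, can be illustrated with a discrete-time toy problem; everything below (matrices, horizon, regularization weight) is hypothetical and not taken from the chapter, and model errors are omitted so the estimator reduces to a least squares problem in $x_0$ alone.

```python
import numpy as np

# Discrete-time analogue of the horizon least-squares estimator discussed above.
rng = np.random.default_rng(0)
Ad = np.array([[1.0, 0.05], [-0.1, 0.99]])   # state transition (illustrative)
Cd = np.array([[1.0, 0.0]])                  # measurement matrix
N = 50                                       # horizon length in samples

# Simulate "true" data with small measurement noise.
x_true = np.zeros((N + 1, 2)); x_true[0] = [1.0, -0.5]
for k in range(N):
    x_true[k + 1] = Ad @ x_true[k]
z = np.array([Cd @ x for x in x_true[:-1]]).ravel() + 1e-3 * rng.standard_normal(N)

# Stack y_k = Cd Ad^k x0 into one linear map O x0 = z.
O = np.vstack([Cd @ np.linalg.matrix_power(Ad, k) for k in range(N)])

# (a) unregularized least squares: unbiased, conditioning governed by G = O^T O
x0_ls = np.linalg.lstsq(O, z, rcond=None)[0]

# (b) Tikhonov regularization of the initial data with prior x0 = 0: biased
mu = 1e-1
x0_reg = np.linalg.solve(O.T @ O + mu * np.eye(2), O.T @ z)

print("true x0      :", x_true[0])
print("least squares:", x0_ls)
print("regularized  :", x0_reg)   # pulled towards the prior 0
```

With the prior pulled toward zero, the regularized estimate is systematically shifted away from the true initial state, which is the bias referred to above, while the conditioning of the unregularized estimate is governed by $G = O^\top O$, the discrete counterpart of the observability Gramian.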
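One way to read the assumed bound $\|z^\delta\|_{L^2(t_0,t_0+H)} \le \delta\sqrt{H}$ is that the data error is bounded pointwise by the noise level $\delta$; this pointwise reading is an assumption on my part, but under it the dependence on the horizon length follows directly:
\[
\|z^\delta\|_{L^2(t_0,t_0+H)}^2
  = \int_{t_0}^{t_0+H} |z^\delta(t)|^2 \,\mathrm{d}t
  \le \delta^2 H
\qquad\Longrightarrow\qquad
\|z^\delta\|_{L^2(t_0,t_0+H)} \le \delta\sqrt{H}.
\]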
