DOTcvpSB: a Matlab Toolbox for Dynamic Optimization in Systems Biology

constraints (2.4) if a minimization problem is considered. This yields a general NLP formulation given by

    \min_{y} J_0(y)                                          (3.7)
    \text{s.t.}\quad J_l(y) = 0,    \quad l = 1, \ldots, m_e              (3.8)
                     J_l(y) \le 0,  \quad l = m_e + 1, \ldots, m_e + m_i  (3.9)

where the problem is to find the feasible optimal decision variables y_{optimal} such that J_0(y_{optimal}) \le J_0(y). As mentioned before, some optimization methods may require gradient information for the cost and the constraints with respect to the vector of decision variables (3.6). To compute such gradients, two methods were implemented: sensitivity equations and finite differences. These methods are described in detail in the next subsections.

3.2 IMPLEMENTED GRADIENT METHODS

3.2.1 Sensitivity equations

The sensitivities [35] are defined as the partial derivatives of the state variables with respect to the decision variables. The sensitivity coefficients s_j(t), together with their initial conditions, are defined as follows:

    s_j(t) = \frac{\partial x(t)}{\partial y_j}, \quad s_j(0) = 0, \quad j = 1, \ldots, n_y    (3.10)

where n_y denotes the number of decision variables. The sensitivity coefficients contain information about the sensitivity of the state values to the decision variables. Taking the partial derivative of the ODE (2.1) with respect to the decision variables (3.6) gives

    \dot{s}_j(t) = \left(\frac{\partial f}{\partial x}\right)^T s_j(t)
                 + \left(\frac{\partial f}{\partial u}\right)^T \frac{\partial u_{i+1}}{\partial y_j}
                 + \left(\frac{\partial f}{\partial p}\right)^T \frac{\partial p}{\partial y_j}    (3.11)

where t_{i-1} \le t \le t_i and i = 0, \ldots, (N-1).
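To illustrate how an augmented system of states and sensitivities such as (3.11) can be integrated forward in time, the following minimal sketch (in Python rather than the toolbox's Matlab, using a hypothetical scalar ODE \dot{x} = -p x chosen purely for illustration) integrates the state x and its sensitivity s = \partial x / \partial p jointly with a fixed-step RK4 scheme; for this ODE the analytic solution x(t) = e^{-pt} makes the result easy to verify.

```python
import math

def f(x, p):
    # Illustrative scalar ODE x' = -p*x (hypothetical, not from the toolbox)
    return -p * x

def f_sens(x, s, p):
    # Sensitivity equation in the spirit of (3.11) for this ODE:
    # s' = (df/dx)*s + (df/dp) = -p*s - x
    return -p * s - x

def rk4_step(x, s, p, h):
    # One RK4 step on the augmented (state, sensitivity) system
    def rhs(x, s):
        return f(x, p), f_sens(x, s, p)
    k1x, k1s = rhs(x, s)
    k2x, k2s = rhs(x + 0.5*h*k1x, s + 0.5*h*k1s)
    k3x, k3s = rhs(x + 0.5*h*k2x, s + 0.5*h*k2s)
    k4x, k4s = rhs(x + h*k3x, s + h*k3s)
    x_new = x + h/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
    s_new = s + h/6.0 * (k1s + 2*k2s + 2*k3s + k4s)
    return x_new, s_new

def integrate(p, t_end=1.0, n=100):
    x, s = 1.0, 0.0   # x(0) = 1 and s(0) = 0, as required by (3.10)
    h = t_end / n
    for _ in range(n):
        x, s = rk4_step(x, s, p, h)
    return x, s

p = 0.5
x1, s1 = integrate(p)
# Analytic check: x(t) = exp(-p*t), so dx/dp at t = 1 is -exp(-p)
print(x1, s1)
```

Note that in DOTcvpSB itself the sensitivity system is handled by the underlying ODE solver; the hand-rolled RK4 loop above is only a didactic stand-in for that machinery.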
With the forward integration of the sensitivity equations (3.11) we obtain the information necessary to compute the gradient of the cost function (2.3).

When the sensitivity is computed with respect to a time-interval boundary t_i, the discontinuity must be taken into account. If the state values are continuous (2.5) at the time-interval boundaries, the total differential for a state variable gives

    dx(t_i) = \begin{cases} \delta x(t_i^+) + \dot{x}(t_i^+)\, dt_i \\ \delta x(t_i^-) + \dot{x}(t_i^-)\, dt_i \end{cases}, \quad i = 1, \ldots, (N-1)    (3.12)

as well as dx(t_i^+) = dx(t_i^-), which gives

    \delta x(t_i^+) + \dot{x}(t_i^+)\, dt_i = \delta x(t_i^-) + \dot{x}(t_i^-)\, dt_i    (3.13)

Differentiating (3.13) with respect to the decision variables yields

    \frac{\partial x(t_i^+)}{\partial y_j} = \frac{\partial x(t_i^-)}{\partial y_j} + \left[\dot{x}(t_i^-) - \dot{x}(t_i^+)\right] \frac{\partial t_i}{\partial y_j}    (3.14)

When the sensitivity coefficients (3.10) are used, formula (3.14) simplifies to

    s_j(t_i^+) = s_j(t_i^-) + \left[f_i - f_{i+1}\right]_{t_i} \frac{\partial t_i}{\partial y_j}    (3.15)

The partial derivatives of the cost function or constraints (2.3) with respect to the optimized variables give

    \frac{\partial J_l}{\partial y_j} = \frac{\partial J_l}{\partial t_F} \frac{\partial t_F}{\partial y_j}
        + \left(\frac{\partial J_l}{\partial x_F}\right)^T \frac{\partial x_F}{\partial y_j}
        + \left(\frac{\partial J_l}{\partial u}\right)^T \frac{\partial u}{\partial y_j}
        + \left(\frac{\partial J_l}{\partial p}\right)^T \frac{\partial p}{\partial y_j}    (3.16)

where it is considered that the variation of x_F, following (3.12), gives

    \frac{d x_F}{d y_j} = f(t_F, x_F, u_F, p)\, \frac{\partial t_F}{\partial y_j} + \frac{\partial x_F}{\partial y_j}    (3.17)
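To make the two implemented gradient routes concrete, the following illustrative Python sketch (not the toolbox's own code) computes dJ/dp for a terminal cost J = x(t_F)^2 on the same hypothetical scalar ODE \dot{x} = -p x: once via the chain rule as in (3.16), using the integrated sensitivity s_F, and once via a forward finite difference, and checks that the two agree.

```python
import math

def simulate(p, t_end=1.0, n=200):
    # Integrate x' = -p*x with x(0) = 1 and its sensitivity s = dx/dp
    # (s' = -p*s - x, s(0) = 0) with fixed-step RK4; illustrative only.
    def rhs(x, s):
        return -p * x, -p * s - x
    x, s = 1.0, 0.0
    h = t_end / n
    for _ in range(n):
        k1x, k1s = rhs(x, s)
        k2x, k2s = rhs(x + 0.5*h*k1x, s + 0.5*h*k1s)
        k3x, k3s = rhs(x + 0.5*h*k2x, s + 0.5*h*k2s)
        k4x, k4s = rhs(x + h*k3x, s + h*k3s)
        x += h/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
        s += h/6.0 * (k1s + 2*k2s + 2*k3s + k4s)
    return x, s

def cost(p):
    xF, _ = simulate(p)
    return xF**2          # terminal cost J = x(t_F)^2

p = 0.5

# Route 1: sensitivity-based gradient via the chain rule (3.16).
# Here t_F is fixed and there are no controls, so dJ/dp = (dJ/dx_F)*s_F.
xF, sF = simulate(p)
grad_sens = 2.0 * xF * sF

# Route 2: forward finite difference, the toolbox's alternative method
eps = 1e-6
grad_fd = (cost(p + eps) - cost(p)) / eps

print(grad_sens, grad_fd)
```

The sensitivity route costs one integration of the augmented system per decision variable but gives gradients accurate to the solver tolerance, whereas the finite-difference route needs an extra full simulation per variable and its accuracy depends on the choice of the perturbation eps.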
