Magnus integrators for solving linear-quadratic differential ... - UPV
S. Blanes, E. Ponsoda / Journal of Computational and Applied Mathematics 236 (2012) 3394–3408

Let us consider the linear homogeneous equation

    Y'(t) = M(t) Y(t),    Y(t_0) = I_n,    (17)

with Y(t), M(t) \in \mathbb{C}^{n \times n}, and denote the fundamental matrix solution by \Phi(t, t_0). If we assume that the solution can be written in exponential form, \Phi(t, t_0) = \exp(\Omega(t, t_0)), where \Omega(t, t_0) = \sum_{n=1}^{\infty} \Omega_n(t, t_0), we obtain the Magnus series expansion [28], where

    \Omega_1(t, t_0) = \int_{t_0}^{t} M(t_1) \, dt_1,
    \Omega_2(t, t_0) = \frac{1}{2} \int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2 \, [M(t_1), M(t_2)],  \ldots    (18)

Here [A, B] \equiv AB - BA is the matrix commutator of A and B. We denote the truncated expansion by

    \Psi^{[k]}(t, t_0) = \exp\left( \sum_{i=1}^{k} \Omega_i(t, t_0) \right).    (19)

The first-order approximation agrees with that of most exponential methods, e.g. the Fer [29] or Wilcox [30] expansions, but they differ at higher orders. In general, exponential methods (and most expansions) converge for

    \int_{t_0}^{t} \| M(t_1) \| \, dt_1 < \xi,    (20)

where \xi is a constant depending on the method. For the Magnus expansion (ME) we have that, for the 2-norm, \xi = \pi, this being a sufficient but not necessary condition [31] (see also [11] and the references therein).

3.1. Magnus expansion for the non-homogeneous problem

Suppose the game can be influenced by external forces,

    x'(t) = A(t) x(t) + B_1(t) u_1(t) + B_2(t) u_2(t) + z(t),    x(0) = x_0,

where z(t) \in \mathbb{R}^n is the uncertainty; see [32]. This problem can be solved with most of the previous results after minor modifications.
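Returning to the homogeneous problem (17): exponentiating the truncated series (19) leads directly to numerical integrators. The following Python sketch (an illustration, not code from the paper) implements the simplest such scheme, where per step \Omega is approximated by the midpoint quadrature h M(t + h/2), i.e. \Omega_1 evaluated by the midpoint rule with \Omega_2 vanishing at this order. The test matrix M(t) = tJ is a hypothetical choice for which matrices at different times commute, so the exact flow \exp\left(\int_0^1 M\right) = \exp(J/2) is available for comparison.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical test problem: M(t) = t*J with J skew-symmetric, so
# [M(t1), M(t2)] = 0 and the exact flow over [0, 1] is expm(J/2).
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def M(t):
    return t * J

def magnus_midpoint(M, t0, t1, Y0, steps):
    """One-exponential-per-step Magnus scheme: per step of size h,
    Omega is approximated by h*M(t + h/2), then Y <- expm(Omega) @ Y."""
    h = (t1 - t0) / steps
    Y = Y0.copy()
    for k in range(steps):
        t = t0 + k * h
        Y = expm(h * M(t + h / 2)) @ Y
    return Y

Y = magnus_midpoint(M, 0.0, 1.0, np.eye(2), steps=50)
exact = expm(0.5 * J)
err = np.max(np.abs(Y - exact))
print(err)  # tiny: midpoint quadrature is exact when M(t) is linear in t
```

For a generic non-commuting M(t) this scheme is second order; higher-order variants add quadrature nodes and commutator terms from \Omega_2, \Omega_3, \ldots in (18).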
Notice that this problem is equivalent to

    y'(t) = \bar{A}(t) y(t) + \bar{B}_1(t) u_1(t) + \bar{B}_2(t) u_2(t),    y(0) = y_0,

where y = [x, 1]^T \in \mathbb{R}^{n+1} and

    \bar{A}(t) = \begin{pmatrix} A(t) & z(t) \\ 0 & 0 \end{pmatrix}, \quad \bar{B}_1(t) = \begin{pmatrix} B_1(t) \\ 0 \end{pmatrix}, \quad \bar{B}_2(t) = \begin{pmatrix} B_2(t) \\ 0 \end{pmatrix}.

Then the methods previously considered can be used.

On the other hand, we have seen that a matrix RDE can be written as a linear system. However, coupled RDEs where R_{ij} \neq 0 do not admit this linear form. For this reason, it is convenient to consider the generalization of the ME to the nonlinear case.

3.2. The Magnus expansion for nonlinear systems

Let us consider the nonlinear and non-autonomous equation

    z' = f(t, z),    z(t_0) = z_0 \in \mathbb{R}^n,    (21)

whose evolution operator (z(t) = \Phi_t(z_0)) satisfies the operator equation

    \frac{d}{dt} \Phi_t = \Phi_t L_{f(t,y)},    y = z_0,    (22)

with solution \Phi_t(y)|_{y=z_0}. Here L_f = f \cdot \nabla_y, with f = [f_1, f_2, \ldots, f_n]^T, is the Lie derivative (or Lie operator) associated with f, acting on differentiable functions F : \mathbb{R}^n \to \mathbb{R}^m as

    L_f F(z) = F'(z) f(z),

where F'(z) denotes the Jacobian matrix of F (see [11] for more details).
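The action of the Lie derivative L_f F(z) = F'(z) f(z) can be checked numerically. In the sketch below, the vector field f and observable F are hypothetical examples (not from the paper); the Jacobian F'(z) is formed by central differences and contracted with f(z). For f(z) = (z_2, -z_1) and F(z) = z_1 z_2 one has F'(z) = [z_2, z_1], so L_f F(z) = z_2^2 - z_1^2.

```python
import numpy as np

# Hypothetical example fields (assumptions for illustration, not from the paper):
# vector field f(z) = (z2, -z1) and scalar observable F(z) = z1*z2.
def f(z):
    return np.array([z[1], -z[0]])

def F(z):
    return z[0] * z[1]

def lie_derivative(F, f, z, h=1e-6):
    """L_f F(z) = F'(z) f(z): gradient of F at z by central differences,
    contracted with the vector field f evaluated at z."""
    grad = np.array([(F(z + h * e) - F(z - h * e)) / (2 * h)
                     for e in np.eye(len(z))])
    return grad @ f(z)

z = np.array([1.0, 2.0])
val = lie_derivative(F, f, z)
print(val)  # ≈ 3.0, since z2^2 - z1^2 = 4 - 1
```

This is the scalar (m = 1) case of the definition above; for vector-valued F the same contraction is applied row by row of the Jacobian.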
