S. Blanes, E. Ponsoda / Journal of Computational and Applied Mathematics 236 (2012) 3394–3408

Let us consider the linear homogeneous equation
\[
Y'(t) = M(t)\,Y(t), \qquad Y(t_0) = I_n, \qquad (17)
\]
with $Y(t), M(t) \in \mathbb{C}^{n \times n}$, and denote the fundamental matrix solution by $\Phi(t, t_0)$. If we consider that the solution can be written in exponential form, $\Phi(t, t_0) = \exp(\Omega(t, t_0))$, where $\Omega(t, t_0) = \sum_{n=1}^{\infty} \Omega_n(t, t_0)$, we obtain the Magnus series expansion [28], where
\[
\Omega_1(t, t_0) = \int_{t_0}^{t} M(t_1)\, dt_1, \qquad
\Omega_2(t, t_0) = \frac{1}{2} \int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2\, [M(t_1), M(t_2)], \quad \ldots \qquad (18)
\]
Here $[A, B] \equiv AB - BA$ is the matrix commutator of $A$ and $B$. We denote the truncated expansion by
\[
\Psi^{[k]}(t, t_0) = \exp\left( \sum_{i=1}^{k} \Omega_i(t, t_0) \right). \qquad (19)
\]
The first order approximation agrees with the first order approximation of most exponential methods, e.g. the Fer [29] or Wilcox [30] expansions, but they differ at higher orders. In general, exponential methods (and most expansions) converge for
\[
\int_{t_0}^{t} \|M(t_1)\|\, dt_1 < \xi, \qquad (20)
\]
where $\xi$ is a constant depending on the method. For the Magnus expansion (ME) we have, for the 2-norm, $\xi = \pi$; this is a sufficient but not necessary condition [31] (see also [11] and the references therein).

3.1. Magnus expansion for the non-homogeneous problem

Suppose the game can be influenced by external forces,
\[
x'(t) = A(t)x(t) + B_1(t)u_1(t) + B_2(t)u_2(t) + z(t), \qquad x(0) = x_0,
\]
where $z(t) \in \mathbb{R}^n$ is the uncertainty; see [32]. This problem can be solved with most of the previous results after minor modifications.
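As a quick numerical sanity check of this remark, the following sketch (with a hypothetical constant matrix $A$, constant uncertainty $z$, and the controls switched off) absorbs the inhomogeneity into an enlarged linear system and compares the result with the variation-of-constants solution:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical constant data for the check: x' = A x + z, x(0) = x0.
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
z = np.array([1.0, 0.5])
x0 = np.array([0.3, -0.4])
t = 0.7

# Augmented matrix [[A, z], [0, 0]] acting on y = [x, 1].
Abar = np.zeros((3, 3))
Abar[:2, :2] = A
Abar[:2, 2] = z

y = expm(t * Abar) @ np.append(x0, 1.0)   # propagate the augmented system
x_aug = y[:2]

# Variation of constants for constant, invertible A:
# x(t) = e^{tA} x0 + A^{-1} (e^{tA} - I) z
x_exact = expm(t * A) @ x0 + np.linalg.solve(A, (expm(t * A) - np.eye(2)) @ z)

assert np.allclose(x_aug, x_exact)
print(x_aug)
```

The last component of $y$ stays equal to 1 because the last row of the augmented matrix is zero, which is what makes the embedding consistent.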
Notice that this problem is equivalent to
\[
y'(t) = \bar{A}(t)y(t) + \bar{B}_1(t)u_1(t) + \bar{B}_2(t)u_2(t), \qquad y(0) = y_0,
\]
where $y = [x, 1]^T \in \mathbb{R}^{n+1}$ and
\[
\bar{A}(t) = \begin{bmatrix} A(t) & z(t) \\ 0 & 0 \end{bmatrix}, \qquad
\bar{B}_1(t) = \begin{bmatrix} B_1(t) \\ 0 \end{bmatrix}, \qquad
\bar{B}_2(t) = \begin{bmatrix} B_2(t) \\ 0 \end{bmatrix}.
\]
Then, the methods previously considered can be used.

On the other hand, we have seen that a matrix RDE can be written as a linear system. However, coupled RDEs where $R_{ij} \neq 0$ do not admit this linear form. For this reason, it is convenient to consider the generalization of the ME to the nonlinear case.

3.2. The Magnus expansion for nonlinear systems

Let us consider the nonlinear and non-autonomous equation
\[
z' = f(t, z), \qquad z(t_0) = z_0 \in \mathbb{R}^n, \qquad (21)
\]
where the evolution operator ($z(t) = \Phi^t(z_0)$) satisfies the operational equation
\[
\frac{d}{dt} \Phi^t = \Phi^t L_{f(t,y)}, \qquad y = z_0, \qquad (22)
\]
with solution $\Phi^t(y)|_{y=z_0}$. Here $L_f = f \cdot \nabla_y$, with $f = [f_1, f_2, \ldots, f_n]^T$, is the Lie derivative (or Lie operator) associated with $f$, acting on differentiable functions $F: \mathbb{R}^n \longrightarrow \mathbb{R}^m$ as
\[
L_f F(z) = F'(z)\, f(z),
\]
where $F'$ denotes the Jacobian matrix of $F$ (see [11] for more details).
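The defining identity $L_f F(z) = F'(z) f(z)$ can be checked numerically; the sketch below uses a hypothetical pendulum-type field $f$ and observable $F$, with the Jacobian approximated by central differences, and compares $L_f F(z_0)$ against the rate of change of $F$ along the flow:

```python
import numpy as np

# Hypothetical 2-D vector field f and observable F for the check.
def f(z):
    return np.array([z[1], -np.sin(z[0])])   # pendulum-type field

def F(z):
    return np.array([z[0]**2 + z[1]**2])     # scalar observable

def lie_derivative(F, f, z, eps=1e-6):
    """L_f F(z) = F'(z) f(z), with the Jacobian F'(z) by central differences."""
    n = len(z)
    J = np.empty((len(F(z)), n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(z + e) - F(z - e)) / (2 * eps)
    return J @ f(z)

z0 = np.array([0.4, -0.2])

# d/dt F(z(t)) at t = 0, approximated by one tiny Euler step along f:
# F(z0 + dt f(z0)) ~ F(z0) + dt * L_f F(z0).
dt = 1e-6
rate = (F(z0 + dt * f(z0)) - F(z0)) / dt

assert np.allclose(lie_derivative(F, f, z0), rate, atol=1e-4)
print(lie_derivative(F, f, z0))
```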
Since $L_f$ is a linear operator, we can then use the Magnus series expansion directly to obtain the formal solution of (22) for $t \in [t_0, T]$ as $\Phi^T = \exp(L_{\omega(z_0)})$, with $\omega = \sum_i \omega_i$. The first two terms are now
\[
\omega_1(z_0) = \int_{t_0}^{T} f(s, z_0)\, ds, \qquad (23)
\]
\[
\omega_2(z_0) = -\frac{1}{2} \int_{t_0}^{T} ds_1 \int_{t_0}^{s_1} ds_2\, \big( f(s_1, z_0), f(s_2, z_0) \big), \qquad (24)
\]
where $(f, g)$ denotes the Lie bracket with components $(f, g)_i = f \cdot \nabla_z g_i(z) - g \cdot \nabla_z f_i(z)$. Observe that in the integrals (23) and (24) $z_0$ is frozen on the vector field (in the Lie derivatives, one has to compute the derivatives with respect to $z$ and then replace $z$ by $z_0$). If the series converges, the exact solution $z(T) = \Phi^T(z_0)$ can be approximated by the 1-flow of the autonomous differential equation
\[
y' = \omega^{[n]}(y), \qquad y(0) = z_0, \qquad (25)
\]
with $\omega^{[n]} = \sum_{i=1}^{n} \omega_i$, i.e. $y(1) \simeq z(T)$ (see also [11] and references therein for more details). Notice that the ME can be considered as an average over the explicitly time-dependent functions appearing in the vector field. It still remains to solve the autonomous problem (25). A simple method (useful when an efficient algorithm is known for problem (21) with frozen time) will be presented.

Example 1. As an illustrative example, let us consider the scalar nonlinear and non-autonomous equation
\[
z' = \alpha(t) z^k + \beta(t) z^m, \qquad z(t_0) = z_0;
\]
then Eqs. (23) and (24) provide
\[
\omega_1(z_0) = a\, z_0^k + b\, z_0^m, \qquad \omega_2(z_0) = c\, z_0^{k+m-1},
\]
where the constants $a, b, c$ are given by
\[
a = \int_{t_0}^{T} \alpha(s)\, ds, \qquad b = \int_{t_0}^{T} \beta(s)\, ds,
\]
\[
c = -\frac{m-k}{2} \int_{t_0}^{T} ds_1 \int_{t_0}^{s_1} ds_2\, \big( \alpha(s_1)\beta(s_2) - \beta(s_1)\alpha(s_2) \big).
\]
Then the solution, $y(1)$, of the autonomous (solvable) equation
\[
y' = a y^k + b y^m + c\, y^{k+m-1}, \qquad y(0) = z_0,
\]
is the approximation to $z(T)$ given by the second order Magnus expansion. If $k = 1$, $m = 2$, this is a RDE, and the Magnus approximation can be seen as the exact solution of another autonomous RDE in which the time-dependent function $\alpha(t)$ is replaced by $a$ and $\beta(t)$ by $b + c$.

3.3. Magnus integrators

In those cases where it is not feasible to obtain analytical solutions, numerical integration can be the choice. The time interval is divided by a mesh, $t_0 < t_1 < \cdots < t_{N-1} < t_N = T$, and an appropriate scheme is used on each subinterval. For simplicity in the presentation, we consider a constant time step, $h = (T - t_0)/N$, and $t_n = t_0 + nh$, $n = 0, 1, \ldots, N$. In the following, we consider Magnus integrators. To this purpose, it is useful to notice that for the truncated expansion (19)
\[
\Psi^{[1]}(t+h, t) = \Phi(t+h, t) + O(h^3), \qquad \Psi^{[2]}(t+h, t) = \Phi(t+h, t) + O(h^5).
\]
If the integrals in (18) cannot be computed analytically, we can approximate them by a quadrature rule, and this can be done by evaluating the matrix $M(t)$ at the points of a single quadrature rule [14] (see also [11,12]).

In some cases the matrix $M(t)$ is explicitly known, but in other cases it is only known on a mesh. Then, in order to present methods which can easily be adapted to different quadrature rules, we introduce the averaged (or generalized momentum) matrices
\[
M^{(i)}(h) \equiv \frac{1}{h^i} \int_{t_n}^{t_n+h} (t - t_{1/2})^i\, M(t)\, dt
= \frac{1}{h^i} \int_{-h/2}^{h/2} t^i\, M(t_{1/2} + t)\, dt, \qquad (26)
\]
for $i = 0, 1, \ldots$, with $t_{1/2} = t_n + h/2$.
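For $i = 0, 1$ these averaged matrices can be approximated by a two-point Gauss–Legendre rule, with nodes $t_{1/2} \pm \sqrt{3}\,h/6$ and weights $h/2$; since the rule is exact for polynomials of degree up to three, it reproduces (26) exactly for a quadratic $M(t)$. A small sketch (the test matrix below is hypothetical) checks the resulting weights against a fine reference quadrature:

```python
import numpy as np

# Hypothetical quadratic test matrix, so the 2-point Gauss rule is exact.
def M(t):
    return np.array([[1.0 + t, t**2],
                     [0.5 * t, 2.0 - t**2]])

tn, h = 0.3, 0.5
t_half = tn + h / 2

# Two-point Gauss-Legendre nodes on [tn, tn + h], each with weight h/2.
c1 = t_half - np.sqrt(3) * h / 6
c2 = t_half + np.sqrt(3) * h / 6

M0 = (h / 2) * (M(c1) + M(c2))                  # approximates M^(0)(h) in (26)
M1 = (np.sqrt(3) * h / 12) * (M(c2) - M(c1))    # approximates M^(1)(h) in (26)

# Reference values of (26) by a fine composite midpoint rule on [-h/2, h/2].
N = 20_000
tau = (np.arange(N) + 0.5) * (h / N) - h / 2
vals = np.stack([M(t_half + s) for s in tau])
M0_ref = vals.sum(axis=0) * (h / N)
M1_ref = (tau[:, None, None] * vals).sum(axis=0) * (h / N) / h

assert np.allclose(M0, M0_ref, atol=1e-8)
assert np.allclose(M1, M1_ref, atol=1e-8)
print(M0, M1, sep="\n")
```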
Then, it is clear that $\Psi^{[1]}(t+h, t) = e^{M^{(0)}(h)}$ yields the second order method, but fourth-order methods can be obtained by taking
\[
\Psi_1^{[2]}(t+h, t) = \exp\big( M^{(0)}(h) + [M^{(1)}(h), M^{(0)}(h)] \big). \qquad (27)
\]
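Combining (26) and (27) with the two-point Gauss–Legendre quadrature gives a complete fourth-order Magnus step for $Y'(t) = M(t)Y(t)$. The sketch below (the oscillator-type $M(t)$ is a hypothetical test case, and the reference solution is just the same scheme with many small steps) also checks the expected $O(h^4)$ error decay:

```python
import numpy as np
from scipy.linalg import expm

def magnus4_step(M, t, h):
    """One step of the fourth-order Magnus integrator (27), with the
    averaged matrices (26) evaluated by two-point Gauss-Legendre."""
    c1 = t + h / 2 - np.sqrt(3) * h / 6
    c2 = t + h / 2 + np.sqrt(3) * h / 6
    A1, A2 = M(c1), M(c2)
    M0 = (h / 2) * (A1 + A2)                   # approximates M^(0)(h)
    M1 = (np.sqrt(3) * h / 12) * (A2 - A1)     # approximates M^(1)(h)
    comm = M1 @ M0 - M0 @ M1                   # [M^(1)(h), M^(0)(h)]
    return expm(M0 + comm)

def magnus4(M, t0, T, N, Y0):
    """Propagate Y' = M(t) Y from t0 to T with N constant steps."""
    h = (T - t0) / N
    Y = Y0.copy()
    for n in range(N):
        Y = magnus4_step(M, t0 + n * h, h) @ Y
    return Y

# Hypothetical non-commuting test problem (time-dependent oscillator).
def M(t):
    return np.array([[0.0, 1.0], [-(1.0 + t**2), 0.0]])

Y0 = np.eye(2)
ref = magnus4(M, 0.0, 1.0, 512, Y0)          # fine-step reference solution
e1 = np.linalg.norm(magnus4(M, 0.0, 1.0, 8, Y0) - ref)
e2 = np.linalg.norm(magnus4(M, 0.0, 1.0, 16, Y0) - ref)
print(e1 / e2)   # roughly 2**4 = 16 for a fourth-order scheme
```

Since $M(t)$ here is traceless, the exponential of (27) has unit determinant, so the scheme preserves the volume of the exact flow, one of the structural advantages of Magnus integrators.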