D.1 Directional Derivative, Taylor Series

in magnitude and direction to $Y$.$^{\text{D.3}}$ Hence the directional derivative,

$$
\overset{\rightarrow Y}{dg}(X) \;\triangleq\;
\begin{bmatrix}
dg_{11}(X) & dg_{12}(X) & \cdots & dg_{1N}(X)\\
dg_{21}(X) & dg_{22}(X) & \cdots & dg_{2N}(X)\\
\vdots & \vdots & & \vdots\\
dg_{M1}(X) & dg_{M2}(X) & \cdots & dg_{MN}(X)
\end{bmatrix}\Bigg|_{dX\rightarrow Y} \;\in\; \mathbb{R}^{M\times N}
$$

$$
=\;
\begin{bmatrix}
\operatorname{tr}\!\left(\nabla g_{11}(X)^{T} Y\right) & \operatorname{tr}\!\left(\nabla g_{12}(X)^{T} Y\right) & \cdots & \operatorname{tr}\!\left(\nabla g_{1N}(X)^{T} Y\right)\\
\operatorname{tr}\!\left(\nabla g_{21}(X)^{T} Y\right) & \operatorname{tr}\!\left(\nabla g_{22}(X)^{T} Y\right) & \cdots & \operatorname{tr}\!\left(\nabla g_{2N}(X)^{T} Y\right)\\
\vdots & \vdots & & \vdots\\
\operatorname{tr}\!\left(\nabla g_{M1}(X)^{T} Y\right) & \operatorname{tr}\!\left(\nabla g_{M2}(X)^{T} Y\right) & \cdots & \operatorname{tr}\!\left(\nabla g_{MN}(X)^{T} Y\right)
\end{bmatrix}
$$

$$
=\;
\begin{bmatrix}
\sum_{k,l}\dfrac{\partial g_{11}(X)}{\partial X_{kl}}\,Y_{kl} & \sum_{k,l}\dfrac{\partial g_{12}(X)}{\partial X_{kl}}\,Y_{kl} & \cdots & \sum_{k,l}\dfrac{\partial g_{1N}(X)}{\partial X_{kl}}\,Y_{kl}\\
\sum_{k,l}\dfrac{\partial g_{21}(X)}{\partial X_{kl}}\,Y_{kl} & \sum_{k,l}\dfrac{\partial g_{22}(X)}{\partial X_{kl}}\,Y_{kl} & \cdots & \sum_{k,l}\dfrac{\partial g_{2N}(X)}{\partial X_{kl}}\,Y_{kl}\\
\vdots & \vdots & & \vdots\\
\sum_{k,l}\dfrac{\partial g_{M1}(X)}{\partial X_{kl}}\,Y_{kl} & \sum_{k,l}\dfrac{\partial g_{M2}(X)}{\partial X_{kl}}\,Y_{kl} & \cdots & \sum_{k,l}\dfrac{\partial g_{MN}(X)}{\partial X_{kl}}\,Y_{kl}
\end{bmatrix}
\qquad (1606)
$$

from which it follows

$$
\overset{\rightarrow Y}{dg}(X) \;=\; \sum_{k,l}\frac{\partial g(X)}{\partial X_{kl}}\,Y_{kl}
\qquad (1607)
$$

Yet for all $X \in \operatorname{dom} g$, any $Y \in \mathbb{R}^{K\times L}$, and some open interval of $t \in \mathbb{R}$,

$$
g(X+tY) \;=\; g(X) \;+\; t\,\overset{\rightarrow Y}{dg}(X) \;+\; o(t^{2})
\qquad (1608)
$$

which is the first-order Taylor series expansion about $X$. [161, §18.4] [104, §2.3.4] Differentiation with respect to $t$ and subsequent $t$-zeroing isolates the second term of the expansion. Thus differentiating and zeroing $g(X+tY)$ in $t$ is an operation equivalent to individually differentiating and zeroing every entry $g_{mn}(X+tY)$ as in (1605). So the directional derivative of $g(X) : \mathbb{R}^{K\times L} \rightarrow \mathbb{R}^{M\times N}$ in any direction $Y \in \mathbb{R}^{K\times L}$, evaluated at $X \in \operatorname{dom} g$, becomes

$$
\overset{\rightarrow Y}{dg}(X) \;=\; \left.\frac{d}{dt}\right|_{t=0} g(X+tY) \;\in\; \mathbb{R}^{M\times N}
\qquad (1609)
$$

$^{\text{D.3}}$ Although $Y$ is a matrix, we may regard it as a vector in $\mathbb{R}^{KL}$.
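The agreement of (1607) with (1609) is straightforward to check numerically. The sketch below is a minimal illustration, assuming Python with NumPy (neither appears in the text) and the hypothetical example $g(X) = X^{T}X$: it approximates the entrywise partials in (1607) by central differences, compares against a central difference of $g(X+tY)$ in $t$ as in (1609), and against the closed form $Y^{T}X + X^{T}Y$ that the product rule gives for this choice of $g$.

```python
# A minimal numerical check of (1607) and (1609), assuming Python/NumPy
# and the example g(X) = X^T X : R^{K x L} -> R^{L x L} (an illustration,
# not taken from the text).

import numpy as np

def g(X):
    """Example matrix-valued function g(X) = X^T X."""
    return X.T @ X

def dg_via_1607(X, Y, h=1e-6):
    """Sum of entrywise partials against Y_kl, as in (1607); each partial
    dg(X)/dX_kl is approximated by a central difference."""
    D = np.zeros_like(g(X))
    for k in range(X.shape[0]):
        for l in range(X.shape[1]):
            E = np.zeros_like(X)
            E[k, l] = h
            D += (g(X + E) - g(X - E)) / (2 * h) * Y[k, l]
    return D

def dg_via_1609(X, Y, h=1e-6):
    """d/dt g(X + tY) at t = 0, as in (1609), by a central difference in t."""
    return (g(X + h * Y) - g(X - h * Y)) / (2 * h)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((4, 3))

d1 = dg_via_1607(X, Y)
d2 = dg_via_1609(X, Y)
exact = Y.T @ X + X.T @ Y          # product-rule result for g(X) = X^T X

print(np.allclose(d1, d2, atol=1e-6))     # True
print(np.allclose(d1, exact, atol=1e-6))  # True
```

Both comparisons should print True here; since this $g$ is quadratic, the central differences are exact up to rounding, so the two formulas coincide to machine precision.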
