v2007.09.17 - Convex Optimization
558 APPENDIX D. MATRIX CALCULUS

which converts a Hadamard product into a standard matrix product. In this way a Hadamard product can be extracted from within a Kronecker product. [150, p.475]

D.1.3 Chain rules for composite matrix-functions

Given dimensionally compatible matrix-valued functions of matrix variable f(X) and g(X) [161, §15.7]

$$\nabla_X g\big(f(X)^{T}\big) = \nabla_X f^{T}\,\nabla_f g \tag{1563}$$

$$\nabla_X^2\, g\big(f(X)^{T}\big) = \nabla_X\big(\nabla_X f^{T}\,\nabla_f g\big) = \nabla_X^2 f\,\nabla_f g \;+\; \nabla_X f^{T}\,\nabla_f^2 g\;\nabla_X f \tag{1564}$$

D.1.3.1 Two arguments

$$\nabla_X g\big(f(X)^{T},\, h(X)^{T}\big) = \nabla_X f^{T}\,\nabla_f g \;+\; \nabla_X h^{T}\,\nabla_h g \tag{1565}$$

D.1.3.1.1 Example. Chain rule for two arguments. [30, §1.1]

$$g\big(f(x)^{T},\, h(x)^{T}\big) = \big(f(x)+h(x)\big)^{T} A\,\big(f(x)+h(x)\big) \tag{1566}$$

$$f(x) = \begin{bmatrix} x_1 \\ \varepsilon\, x_2 \end{bmatrix}, \qquad h(x) = \begin{bmatrix} \varepsilon\, x_1 \\ x_2 \end{bmatrix} \tag{1567}$$

$$\nabla_x g\big(f(x)^{T},\, h(x)^{T}\big) = \begin{bmatrix} 1 & 0 \\ 0 & \varepsilon \end{bmatrix} (A+A^{T})(f+h) \;+\; \begin{bmatrix} \varepsilon & 0 \\ 0 & 1 \end{bmatrix} (A+A^{T})(f+h) \tag{1568}$$

$$\nabla_x g\big(f(x)^{T},\, h(x)^{T}\big) = \begin{bmatrix} 1+\varepsilon & 0 \\ 0 & 1+\varepsilon \end{bmatrix} (A+A^{T}) \left( \begin{bmatrix} x_1 \\ \varepsilon\, x_2 \end{bmatrix} + \begin{bmatrix} \varepsilon\, x_1 \\ x_2 \end{bmatrix} \right) \tag{1569}$$

from Table D.2.1.

$$\lim_{\varepsilon\to 0}\, \nabla_x g\big(f(x)^{T},\, h(x)^{T}\big) = (A+A^{T})\,x \tag{1570}$$

These formulae remain correct when the gradient produces a hyperdimensional representation:
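The two-argument chain rule (1565) and the example (1566)–(1570) can be checked numerically. The sketch below (not from the book; the function names and the random test data are my own) evaluates the chain-rule gradient (1568) for the given f and h, compares it against a central finite-difference gradient of g, and confirms the limit (1570):

```python
import numpy as np

def grad_two_arg_chain(A, x, eps):
    """Gradient via the two-argument chain rule (1565)/(1568).

    For f(x) = [x1, eps*x2] and h(x) = [eps*x1, x2] we have
    grad_x f^T = diag(1, eps) and grad_x h^T = diag(eps, 1),
    while grad_f g = grad_h g = (A + A^T)(f + h).
    """
    fph = (1 + eps) * x            # f(x) + h(x)
    S = A + A.T
    return np.diag([1.0, eps]) @ S @ fph + np.diag([eps, 1.0]) @ S @ fph

def g(A, x, eps):
    """g(f(x)^T, h(x)^T) = (f+h)^T A (f+h), per (1566)."""
    v = (1 + eps) * x
    return v @ A @ v

rng = np.random.default_rng(0)     # arbitrary test data, my choice
A = rng.standard_normal((2, 2))
x = rng.standard_normal(2)
eps = 0.3

# central finite-difference gradient of g for comparison
h = 1e-6
fd = np.array([(g(A, x + h*e, eps) - g(A, x - h*e, eps)) / (2*h)
               for e in np.eye(2)])
assert np.allclose(grad_two_arg_chain(A, x, eps), fd, atol=1e-5)

# limit (1570): at eps = 0 the gradient is exactly (A + A^T) x
assert np.allclose(grad_two_arg_chain(A, x, 0.0), (A + A.T) @ x)
```

Since f(x) + h(x) = (1+ε)x here, both chain-rule terms collapse and the gradient equals (1+ε)²(A+A^T)x, which is what the finite-difference check recovers.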
