v2007.11.26 - Convex Optimization

D.1. DIRECTIONAL DERIVATIVE, TAYLOR SERIES

From (1635) follows the simpler case, where the real function g(X) : \mathbb{R}^K \to \mathbb{R} has vector argument:

    Y^{\mathrm T}\, \nabla_X^2\, g(X+t\,Y)\, Y \;=\; \frac{d^2}{dt^2}\, g(X+t\,Y)    (1652)

D.1.8.2.1 Example. Second-order gradient.
Given the real function g(X) = \log\det X having domain \operatorname{int} \mathbb{S}_+^K, we want to find \nabla^2 g(X) \in \mathbb{R}^{K\times K\times K\times K}. From the tables in \S D.2,

    h(X) \;\triangleq\; \nabla g(X) \;=\; X^{-1} \;\in\; \operatorname{int} \mathbb{S}_+^K    (1653)

so \nabla^2 g(X) = \nabla h(X). By (1640) and (1608), for Y \in \mathbb{S}^K

    \operatorname{tr}\bigl(\nabla h_{mn}(X)^{\mathrm T}\, Y\bigr) \;=\; \left.\frac{d}{dt}\right|_{t=0} h_{mn}(X+t\,Y)    (1654)

    \;=\; \left[\left.\frac{d}{dt}\right|_{t=0} h(X+t\,Y)\right]_{mn}    (1655)

    \;=\; \left[\left.\frac{d}{dt}\right|_{t=0} (X+t\,Y)^{-1}\right]_{mn}    (1656)

    \;=\; -\bigl(X^{-1}\, Y\, X^{-1}\bigr)_{mn}    (1657)

where the last equality applies the derivative of the matrix inverse, \frac{d}{dt}(X+t\,Y)^{-1} = -(X+t\,Y)^{-1}\, Y\, (X+t\,Y)^{-1}, evaluated at t=0. Setting Y to a member of the standard basis \{\, e_k e_l^{\mathrm T} \in \mathbb{R}^{K\times K} \mid k,l = 1 \ldots K \,\} and employing property (32) of the trace function, we find

    \nabla^2 g(X)_{mnkl} \;=\; \operatorname{tr}\bigl(\nabla h_{mn}(X)^{\mathrm T}\, e_k e_l^{\mathrm T}\bigr) \;=\; \nabla h_{mn}(X)_{kl} \;=\; -\bigl(X^{-1} e_k e_l^{\mathrm T} X^{-1}\bigr)_{mn}    (1658)

    \nabla^2 g(X)_{kl} \;=\; \nabla h(X)_{kl} \;=\; -\bigl(X^{-1} e_k e_l^{\mathrm T} X^{-1}\bigr) \;\in\; \mathbb{R}^{K\times K}    (1659)

From all these first- and second-order expressions, we may generate new ones by evaluating both sides at arbitrary t (in some open interval), but only after the differentiation.
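As a numerical sanity check of these identities (not part of the book), the following minimal NumPy sketch builds a random positive definite X and symmetric direction Y, then compares each analytic expression against central finite differences. The test data, step size t, and tolerances are illustrative assumptions; for matrix argument, the four-index contraction of \nabla^2 g(X) with Y plays the role of the quadratic form in (1652).

import numpy as np

K = 4
rng = np.random.default_rng(0)

# Illustrative test data: X in int S^K_+ (symmetric positive definite)
# and a symmetric direction Y in S^K.
A = rng.standard_normal((K, K))
X = A @ A.T + K * np.eye(K)
B = rng.standard_normal((K, K))
Y = (B + B.T) / 2

def g(M):
    """g(M) = log det M, computed stably via slogdet."""
    return np.linalg.slogdet(M)[1]

Xinv = np.linalg.inv(X)
t = 1e-4   # finite-difference step (an assumption)

# (1653): h(X) = grad g(X) = X^{-1}, so d/dt g(X+tY)|_{t=0} = tr(X^{-1} Y).
dg_fd = (g(X + t*Y) - g(X - t*Y)) / (2*t)
assert abs(dg_fd - np.trace(Xinv @ Y)) < 1e-6

# (1654)-(1657): d/dt h(X+tY)|_{t=0} = -X^{-1} Y X^{-1}, entrywise.
dh_fd = (np.linalg.inv(X + t*Y) - np.linalg.inv(X - t*Y)) / (2*t)
assert np.allclose(dh_fd, -Xinv @ Y @ Xinv, atol=1e-6)

# (1658)-(1659): assemble the fourth-order tensor grad^2 g(X); its
# (k,l)-th K-by-K slice is -X^{-1} e_k e_l^T X^{-1}.
H = np.zeros((K, K, K, K))
I = np.eye(K)
for k in range(K):
    for l in range(K):
        H[:, :, k, l] = -Xinv @ np.outer(I[k], I[l]) @ Xinv

# (1652), matrix-argument analogue: contracting grad^2 g(X) with Y on
# both sides reproduces d^2/dt^2 g(X+tY)|_{t=0}.
quad = np.einsum('mn,mnkl,kl->', Y, H, Y)
d2g_fd = (g(X + t*Y) - 2*g(X) + g(X - t*Y)) / t**2
assert abs(quad - d2g_fd) < 1e-4
print("finite-difference checks passed")

Here slogdet is used instead of det so that log det X never overflows; replacing X by X + t_0 Y for small t_0 exercises the closing remark that these expressions hold at arbitrary t once the differentiation has been carried out.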
