v2007.09.17 - Convex Optimization

3.1 Convex Function

3.1.8.0.1 Example. Projection on a rank-1 subset.

For $A \in S^N$ having eigenvalues $\lambda(A) = [\lambda_i] \in \mathbb{R}^N$, consider the unconstrained nonconvex optimization that is a projection on the rank-1 subset (2.9.2.1) of positive semidefinite cone $S^N_+$: defining $\lambda_1 \triangleq \max_i\{\lambda(A)_i\}$ and corresponding eigenvector $v_1$,

$$
\operatorname*{minimize}_{x}\; \|xx^{T} - A\|_F^2
= \operatorname*{minimize}_{x}\; \operatorname{tr}\!\bigl(xx^{T}(x^{T}x) - 2Axx^{T} + A^{T}A\bigr)
= \begin{cases} \|\lambda(A)\|^2\,, & \lambda_1 \le 0 \\ \|\lambda(A)\|^2 - \lambda_1^2\,, & \lambda_1 > 0 \end{cases}
\tag{1475}
$$

$$
\operatorname*{arg\,minimize}_{x}\; \|xx^{T} - A\|_F^2
= \begin{cases} 0\,, & \lambda_1 \le 0 \\ v_1\sqrt{\lambda_1}\,, & \lambda_1 > 0 \end{cases}
\tag{1476}
$$

Because $\operatorname{tr}(A^{T}A)$ is constant in $x$, minimizing $\|xx^{T} - A\|_F^2$ is equivalent to minimizing $(x^{T}x)^2 - 2x^{T}Ax$. From (1563) and §D.2.1, the gradient of $\|xx^{T} - A\|_F^2$ is

$$
\nabla_x\bigl((x^{T}x)^2 - 2x^{T}Ax\bigr) = 4(x^{T}x)\,x - 4Ax
\tag{493}
$$

Setting the gradient to 0,

$$
Ax = x(x^{T}x)
\tag{494}
$$

is a necessary condition for an optimal solution. Replace vector $x$ with a normalized eigenvector $v_i$ of $A \in S^N$, corresponding to a positive eigenvalue $\lambda_i$, scaled by the square root of that eigenvalue. Then (494) is satisfied:

$$
x \leftarrow v_i\sqrt{\lambda_i} \;\Rightarrow\; Av_i = v_i\lambda_i
\tag{495}
$$

$xx^{T} = \lambda_i v_i v_i^{T}$ is a rank-1 matrix on the positive semidefinite cone boundary, and the minimum is achieved (§7.1.2) when $\lambda_i = \lambda_1$ is the largest positive eigenvalue of $A$. If $A$ has no positive eigenvalue, then $x = 0$ yields the minimum.

For any differentiable multidimensional convex function, zero gradient $\nabla f = 0$ is a necessary and sufficient condition for its unconstrained minimization [46, §5.5.3]:
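As a sanity check of (1475) and (1476), the closed-form projection can be verified numerically. The following is a minimal sketch, assuming NumPy is available; the function name rank1_projection and the random test matrix are illustrative, not from the text. It computes the minimizer from the largest eigenpair of $A$ and confirms both the attained minimum and the stationarity condition (494).

```python
import numpy as np

def rank1_projection(A):
    """Minimizer and minimum of ||x x^T - A||_F^2 over x, per (1475)-(1476)."""
    lam, V = np.linalg.eigh(A)            # eigenvalues in ascending order
    lam1, v1 = lam[-1], V[:, -1]          # largest eigenvalue lambda_1 and eigenvector v_1
    if lam1 <= 0:                         # A has no positive eigenvalue: x = 0
        return np.zeros(A.shape[0]), float(np.sum(lam**2))
    x = v1 * np.sqrt(lam1)                # x = v_1 * sqrt(lambda_1)
    return x, float(np.sum(lam**2) - lam1**2)

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                         # a random A in S^5

x_star, f_star = rank1_projection(A)
# attained objective matches the closed form (1475)
assert np.isclose(np.linalg.norm(np.outer(x_star, x_star) - A, 'fro')**2, f_star)
# stationarity condition (494): A x = x (x^T x)
assert np.allclose(A @ x_star, x_star * (x_star @ x_star))
print("optimal value:", f_star)
```

If $\lambda_1$ happens to be nonpositive for a particular $A$, the sketch returns $x = 0$ with value $\|\lambda(A)\|^2$, matching the first case of (1475).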
