the active set method fnnls. It also enables their method to incorporate multiple active constraints at each iteration. By employing non-diagonal gradient scaling, PQN-NNLS overcomes some of the deficiencies of a projected gradient method, such as slow convergence and zigzagging. An important characteristic of the PQN-NNLS algorithm is that, despite these efficiencies, it remains relatively simple in comparison with other optimization-oriented algorithms. Also in this paper, Kim et al. give experiments showing that their method outperforms other standard approaches to solving the NNLS problem, especially for large-scale problems.

Algorithm PQN-NNLS:
Input: A ∈ R^(m×n), b ∈ R^m
Output: x* ≥ 0 such that x* = arg min ‖Ax − b‖_2
Initialization: x^0 ∈ R^n_+, S^0 ← I and k ← 0
repeat
  1. Compute the fixed variable set I_k = {i : x^k_i = 0, [∇f(x^k)]_i > 0}
  2. Partition x^k = [y^k; z^k], where y^k_i ∉ I_k and z^k_i ∈ I_k
  3. Solve the equality-constrained subproblem:
     3.1. Find appropriate values for α_k and β_k
     3.2. γ_k(β_k; y^k) ← P(y^k − β_k S̄^k ∇f(y^k))
     3.3. ỹ ← y^k + α_k (γ_k(β_k; y^k) − y^k)
  4. Update the gradient scaling matrix S^k to obtain S^(k+1)
  5. Update x^(k+1) ← [ỹ; z^k]
  6. k ← k + 1
until stopping criteria are met.
(A simplified projected-gradient sketch of steps 1–3, with the scaling matrix fixed to the identity, is given at the end of this section.)

Sequential Coordinate-wise Algorithm for NNLS

In [29], the authors propose a novel sequential coordinate-wise (SCA) algorithm which is easy to implement and able to cope with large-scale problems. They also derive stopping conditions which allow control of the distance of the solution found to the optimal one in terms of the optimized objective function. The algorithm produces a sequence of vectors x^0, x^1, ..., x^t which converges to the optimal x*. The idea is to optimize in each iteration with respect to a single coordinate while the remaining coordinates are fixed. The optimization with respect to a single coordinate has an analytical solution, so it can be computed efficiently.
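To make the single-coordinate step concrete, the following is a minimal Python sketch of the coordinate-wise idea, assuming the standard closed-form update for the quadratic NNLS objective: with H = A^T A and c = −A^T b, minimizing over x_i alone gives x_i ← max(0, x_i − (Hx + c)_i / H_ii). The function name nnls_sca, the parameter defaults, and the cyclic sweep order are illustrative assumptions, not the reference implementation of [29].

import numpy as np

def nnls_sca(A, b, n_sweeps=1000, tol=1e-10):
    # Minimal sketch of a sequential coordinate-wise NNLS solver.
    # Minimizes 0.5*||Ax - b||^2 subject to x >= 0 using the closed-form
    # single-coordinate update; assumes A has no zero column (H_ii > 0).
    H = A.T @ A            # Hessian of the quadratic objective
    c = -A.T @ b           # linear term
    n = A.shape[1]
    x = np.zeros(n)
    g = c.copy()           # gradient H x + c, kept up to date incrementally
    for _ in range(n_sweeps):
        x_old = x.copy()
        for i in range(n):
            xi_new = max(0.0, x[i] - g[i] / H[i, i])
            if xi_new != x[i]:
                g += (xi_new - x[i]) * H[:, i]   # cheap O(n) gradient update
                x[i] = xi_new
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

Keeping the gradient g = Hx + c current after each coordinate change costs only O(n) per update, which is what makes each sweep cheap once H has been formed.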
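For comparison with the PQN-NNLS algorithm listed above, the following is a rough Python sketch of its projected step (steps 1–3), but with the gradient-scaling matrix S_k fixed to the identity, so it reduces to a plain projected-gradient method rather than the quasi-Newton variant. The constant step length β and the simple backtracking rule for α are simplifying assumptions, and the function name is hypothetical.

import numpy as np

def nnls_projected_gradient(A, b, n_iter=500, tol=1e-8):
    # Simplified projected-gradient sketch mirroring the structure of
    # PQN-NNLS (fixed-variable set, projected step, search along the
    # feasible direction), with the scaling matrix S_k taken as I.
    H = A.T @ A
    beta = 1.0 / np.linalg.norm(H, 2)   # safe step length 1/L, L = ||A^T A||_2
    obj = lambda v: 0.5 * np.linalg.norm(A @ v - b) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        # Step 1: fixed variables sit at the bound with an outward gradient.
        fixed = (x == 0) & (grad > 0)
        # Step 3.2: projected gradient step on the free variables only.
        step = np.where(fixed, 0.0, grad)
        d = np.maximum(0.0, x - beta * step) - x
        # Step 3.3: backtrack on alpha along the feasible direction d.
        alpha = 1.0
        while obj(x + alpha * d) > obj(x) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        if np.linalg.norm(d) < tol:
            break
    return x

Replacing the identity with a quasi-Newton approximation S_k (updated in step 4) is what distinguishes PQN-NNLS from this plain projected-gradient scheme.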
