
2.3 PROPOSED MAPPING

where $\lambda_q > 0$ and $\lambda_{q+1} \le 0$. As $H_k$ is real-valued, its eigenvalues are real-valued, too. The eigenvalues of $H_k$ are computed as described in the following. The number of non-zero eigenvalues is equal to the rank of $H_k$, i.e., at most three non-zero eigenvalues are available. In this part, the following arrangement of the eigenvalues is assumed: $\lambda_1 \ge \lambda_2 \ge \lambda_3$. The technique presented here requires much less user interaction.
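As a quick illustration of this arrangement, here is a minimal sketch, assuming the Hessian estimate is available as a symmetric $3 \times 3$ NumPy array; `numpy.linalg.eigh` stands in for whatever eigenvalue routine the actual implementation uses:

```python
import numpy as np

# Hypothetical 3x3 symmetric Hessian estimate H_k
H = np.array([[ 2.0, -0.5,  0.0],
              [-0.5,  1.0,  0.3],
              [ 0.0,  0.3, -0.2]])

# eigh is designed for symmetric matrices and returns real eigenvalues
lam, vecs = np.linalg.eigh(H)

# arrange as lambda_1 >= lambda_2 >= lambda_3
order = np.argsort(lam)[::-1]
lam, vecs = lam[order], vecs[:, order]

# the number of non-zero eigenvalues equals rank(H_k)
rank = np.sum(np.abs(lam) > 1e-12)
print(lam, rank)
```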

Now, the theoretical background is explained, leading to a two-fold threshold algorithm where the only task of the user is to specify two thresholds. Finding the eigenvalues and eigenvectors of the Hessian matrix is closely related to its decomposition

$$H_i = P D_i P^{-1} \qquad (2.6)$$

where $P$ is a matrix whose columns are $H$'s eigenvectors and $D_i$ is a diagonal matrix having $H$'s eigenvalues on its diagonal. While computing the gradient magnitude by the Euclidean norm requires only three multiplications, two additions, and one square root, the computation of the eigenvalues of the Hessian matrix is more costly: the explicit formula would require solving cubic polynomials. In our implementation, a fast-converging numerical technique called Jacobi's method is used, as recommended in [20] for symmetric matrices. We have proposed an easy-to-use framework for exploiting the eigenvalues of the Hessian matrix to represent volume data by small subsets.
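The following is a minimal sketch of the classical cyclic Jacobi eigenvalue iteration for a real symmetric matrix; it illustrates the general method, not the specific implementation recommended in [20]. The final lines also verify the decomposition of Eq. (2.6), where the orthogonal $P$ satisfies $P^{-1} = P^T$:

```python
import numpy as np

def jacobi_eigenvalues(A, tol=1e-12, max_sweeps=50):
    """Cyclic Jacobi method for a real symmetric matrix A.
    Returns the eigenvalues in descending order and the eigenvectors."""
    A = np.array(A, dtype=float)      # work on a copy
    n = A.shape[0]
    V = np.eye(n)                     # accumulates rotations -> eigenvectors
    for _ in range(max_sweeps):
        # stop once the off-diagonal mass is negligible
        if np.sqrt(np.sum(np.tril(A, -1) ** 2)) < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # rotation angle that annihilates the entry A[p, q]
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J       # similarity transform keeps eigenvalues
                V = V @ J
    lam = np.diag(A).copy()
    order = np.argsort(lam)[::-1]     # lambda_1 >= lambda_2 >= lambda_3
    return lam[order], V[:, order]

# Example: verify H = P D P^{-1} (Eq. 2.6) with P^{-1} = P^T
H = np.array([[2.0, -0.5, 0.0], [-0.5, 1.0, 0.3], [0.0, 0.3, -0.2]])
lam, P = jacobi_eigenvalues(H)
assert np.allclose(P @ np.diag(lam) @ P.T, H, atol=1e-8)
```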

The relation of the eigenvalues to the Laplacian operator is recalled; this shows the suitability of thresholding eigenvalue volumes, and a two-fold threshold operation is defined to generate sparse data sets. For data where it can be assumed that objects exhibit higher intensities than the background, we modify the framework to take into account only the smallest eigenvalue. This results in a further reduction of the representative subsets by selecting just the data at the interior side of object boundaries.
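A minimal sketch of such a selection follows, assuming, hypothetically, that the two user-chosen thresholds are applied to the largest and smallest eigenvalue volumes and that a strongly negative smallest eigenvalue marks the interior side of a bright object's boundary; the exact comparison rule of the original framework may differ:

```python
import numpy as np

def twofold_threshold(lam1, lam3, t_high, t_low):
    """Two-fold threshold: keep voxels whose extreme Hessian eigenvalues
    pass either user-specified threshold (hypothetical rule)."""
    return (lam1 >= t_high) | (lam3 <= t_low)

def bright_object_subset(lam3, tau):
    """Modified rule for bright objects on a dark background: only the
    smallest eigenvalue is used; lam3 <= tau (tau < 0) selects voxels on
    the interior side of object boundaries."""
    return lam3 <= tau

# Example on random eigenvalue volumes (stand-ins for real data)
lam1 = np.random.randn(64, 64, 64)
lam3 = lam1 - np.abs(np.random.randn(64, 64, 64))   # ensures lam3 <= lam1
sparse_mask = twofold_threshold(lam1, lam3, t_high=1.5, t_low=-1.5)
interior_mask = bright_object_subset(lam3, tau=-1.5)
```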

For the sake of simplicity, we have omitted the index $k$ for the individual eigenvalue $\lambda_i$, which is a function of $k$. Next, we assume that the negative eigenvalues will not lead to a physically meaningful solution. They are either caused by errors in $H_k$ or are due to the fact that the iteration has not reached the neighborhood of $\theta^*$ where the loss function is locally quadratic. Therefore, we replace them, together with the smallest positive eigenvalue, with a descending series of positive eigenvalues:
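As a hedged illustration of this repair step only, the sketch below substitutes a hypothetical geometric series $\lambda_q \delta, \lambda_q \delta^2, \dots$, with an assumed decay factor $0 < \delta < 1$, for the replaced eigenvalues; the thesis defines its own series, and this sketch is not that definition:

```python
import numpy as np

def map_eigenvalues(H, delta=0.5, eps=1e-12):
    """Replace the negative eigenvalues, together with the smallest
    positive one, by a descending series of positive values (a
    hypothetical geometric series), then rebuild the mapped Hessian
    via Eq. (2.6)."""
    lam, P = np.linalg.eigh(H)
    order = np.argsort(lam)[::-1]      # lambda_1 >= ... >= lambda_n
    lam, P = lam[order], P[:, order]
    q = int(np.sum(lam > eps))         # lambda_q > 0 >= lambda_{q+1}
    anchor = lam[q - 1] if q > 0 else 1.0   # smallest positive (or fallback)
    for j in range(max(q - 1, 0), len(lam)):
        lam[j] = anchor * delta ** (j - q + 2)
    return P @ np.diag(lam) @ P.T      # P orthogonal, so P^{-1} = P^T

# Example: an indefinite estimate becomes positive definite
H_k = np.array([[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -0.4]])
assert np.all(np.linalg.eigvalsh(map_eigenvalues(H_k)) > 0)
```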
