
CLC-Conference-Proceeding-2018


Nonnegative matrix factorizations: Ideas and applications

Marta Lourdes Baguer Díaz-Romañach

This article is not intended to conduct an exhaustive study of the state of the art on Nonnegative Matrix Factorizations (NMF), nor to present all the possible extensions of its general model or methods of solution. It is intended only to motivate the reader to pay attention to a versatile tool for scientific computing that can be adapted to solve a wide range of problems and that is still under development.

When one talks about NMF, some notes of Gene Golub take us back to the 70's.[i] These factorizations should be understood, for many reasons, as a "philosophy" for addressing the solution of different problems with nonnegative data, taking into account the relationship between the parts and the whole as seen by Lee and Seung,[ii] more than as a factorization in the mathematical sense like, for example, the LU, the QR or the SVD. This article will show some aspects that are important for beginning the study of NMF.
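As a first taste of the "parts and the whole" philosophy attributed above to Lee and Seung, their classical multiplicative update rules can be sketched in a few lines of Python. This is only an illustrative sketch, not code from the article; the function name, parameters, and test matrix are all invented for the example.

```python
import numpy as np

def nmf_multiplicative(A, k, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||A - WH||_F,
    with A >= 0, W >= 0, H >= 0. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))          # nonnegative random initialization
    H = rng.random((k, n))
    for _ in range(n_iter):
        # Multiplicative updates keep W and H nonnegative by construction;
        # eps guards against division by zero.
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A small nonnegative matrix whose nonnegative rank is 2.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 3.0]])
W, H = nmf_multiplicative(A, k=2)
print(np.linalg.norm(A - W @ H))  # small residual for this easy example
```

Note that, unlike LU, QR, or SVD below, this is an iterative approximation: W and H are only a local minimizer of the residual, which is part of why NMF is better seen as a modeling philosophy than as a factorization in the strict sense.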

When Numerical Linear Algebra is taught, it usually begins with the LU factorization. This factorization looks for two nonsingular matrices, L, lower triangular, and U, upper triangular, so that the system of linear equations Ax = b becomes easier to solve. In this factorization the columns of the matrix A can be expressed as linear combinations of the columns of L. That is, if we consider A = [a1, a2, …, an], L = [l1, l2, …, ln] and U = [u1, u2, …, un], then each column of the matrix A can be written as ai = u1i l1 + u2i l2 + ⋯ + uni ln. In this way we have obtained a basis L of the subspace generated by the columns of the matrix A, with no requirement other than the linear independence of the columns.

For ill-conditioned, rank-deficient or simply rectangular matrices, it is important to demand the orthogonality of the basis. This is closely related to controlling the propagation of error. In those cases, a QR factorization or some variant, such as the RRQR (rank-revealing QR), can be computed.[iii]
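Both steps above can be made concrete with NumPy/SciPy; this is a minimal sketch, and the matrices are illustrative examples, not taken from the article.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, qr

# Solve Ax = b via the LU factorization (with partial pivoting).
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)
assert np.allclose(A @ x, b)

# A column-pivoted QR ("rank revealing") exposes rank deficiency:
# column 3 = column 1 + column 2, so B has rank 2.
B = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 2.0]])
Q, R, p = qr(B, pivoting=True)
print(np.abs(np.diag(R)))  # the last diagonal entry of R is (near) zero
```

With pivoting, the diagonal of R is nonincreasing in magnitude, so a sharp drop signals the numerical rank, which is what makes the RRQR useful for the rank-deficient cases mentioned above.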

The Singular Value Decomposition (SVD) is not simply a matrix factorization. The SVD helps us to understand the main properties of a matrix and finds application in innumerable areas. The mere fact of being able to handle, in floating-point arithmetic, the concept of the numerical rank of a matrix is an important achievement. When a decomposition of the matrix into singular values is obtained, orthonormal bases are obtained both for the subspace generated by the columns and for the subspace generated by the rows of the matrix A ∈ R^(m×n).[iv]

A classic formulation expresses A = UΣV^T, where the orthogonal matrices U ∈ R^(m×m) and V ∈ R^(n×n) contain the corresponding bases, and Σ ∈ R^(m×n) is a diagonal matrix containing the singular values σi, with σ1 ≥ σ2 ≥ ⋯ ≥ σr > 0, where r is the rank of the matrix. An important point is that it is not necessary to store the complete U and V matrices, but only the first r columns of U and V.
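The remark about storing only a few columns corresponds to the "thin" (economy) SVD, and the singular values also give the numerical rank mentioned above. A minimal NumPy sketch, with an illustrative matrix chosen for the example:

```python
import numpy as np

A = np.array([[2.0, 4.0],
              [1.0, 3.0],
              [0.0, 0.0],
              [0.0, 0.0]])
# full_matrices=False gives the "thin" SVD: only min(m, n) columns are kept.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(U.shape, s.shape, Vt.shape)   # (4, 2) (2,) (2, 2)

# Numerical rank: count singular values above a floating-point tolerance.
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
print(rank)
```

Counting singular values above a tolerance is exactly the floating-point handling of rank praised above: tiny σi that would be zero in exact arithmetic are discarded rather than trusted.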

The matrix A can then be decomposed as A = Ur Σr Vr^T = Ar = σ1 u1 v1^T + σ2 u2 v2^T + ⋯ + σr ur vr^T, a weighted sum of rank-1 matrices ui vi^T. It can be shown that one of the main properties of the SVD is that, by truncating this expansion after k terms, we obtain the best rank-k approximation of A in the 2-norm and in the Frobenius norm.
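This truncation property (the Eckart-Young theorem) can be checked numerically; a small sketch assuming a random test matrix, which is not part of the article:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# Truncated expansion: A_k = sum of the first k rank-1 terms s[i] * u_i v_i^T.
Ak = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Eckart-Young: the 2-norm error of the best rank-k approximation
# equals the first discarded singular value, sigma_{k+1} = s[k].
print(np.linalg.norm(A - Ak, 2), s[k])
```

The fact that the approximation error is known in closed form (σ_{k+1}) is what makes truncated SVD the benchmark against which low-rank methods such as NMF are usually compared.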
