Theory of Deep Learning, 2022


The proof is also robust if we don't have access to the exact objective: in settings where only a subset of the coordinates of zz⊤ is observed¹⁰, one can still prove that the objective function is locally optimizable, and hence find z by nonconvex optimization.
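As an illustration (not from the text), the rank-one version of this setting can be sketched in a few lines of NumPy: observe a random subset of the entries of M = zz⊤ and run plain gradient descent on the resulting nonconvex least-squares objective. The dimension, observation probability, step size, and iteration count below are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
z_star = rng.standard_normal(d)
z_star /= np.linalg.norm(z_star)      # normalize so the step size below is sane
M = np.outer(z_star, z_star)          # the unknown rank-1 matrix M = zz^T

p = 0.5                               # observe each entry independently w.p. p
mask = rng.random((d, d)) < p

def grad(z):
    """Gradient of f(z) = sum over observed (i,j) of ((zz^T)_{ij} - M_{ij})^2."""
    R = mask * (np.outer(z, z) - M)   # residual on observed entries only
    return 2.0 * (R + R.T) @ z        # accounts for an asymmetric mask

z = rng.standard_normal(d)
z /= np.linalg.norm(z)                # random initialization on the sphere
lr = 0.05
for _ in range(5000):
    z -= lr * grad(z)

# z is recoverable only up to sign, since (-z)(-z)^T = zz^T
err = min(np.linalg.norm(z - z_star), np.linalg.norm(z + z_star))
```

Despite nonconvexity, gradient descent from a random start recovers z (up to sign) here, consistent with the local-optimizability claim above.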

Lemma 7.4.4 and Lemma 7.4.3 both use the directions x and z. It is also possible to use the direction x − z when ⟨x, z⟩ ≥ 0 (and x + z when ⟨x, z⟩ < 0). Both ideas can be generalized to handle the case when M = ZZ⊤ where Z ∈ R^{d×r}, so M is a rank-r matrix.

10. This setting is known as matrix completion and has been widely applied to recommendation systems.
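A minimal sketch of the rank-r generalization (illustrative code, not from the text): run gradient descent on f(Z) = ‖ZZ⊤ − M‖²_F with Z ∈ R^{d×r}. Since f is invariant under Z ↦ ZQ for any orthogonal Q, the factor Z is identifiable only up to rotation, so one checks recovery of ZZ⊤ rather than of Z itself. Dimensions, step size, and iteration count are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 30, 3
Z_star = rng.standard_normal((d, r)) / np.sqrt(d)
M = Z_star @ Z_star.T                 # the rank-r target M = ZZ^T

def grad(Z):
    """Gradient of f(Z) = ||ZZ^T - M||_F^2, which equals 4 (ZZ^T - M) Z."""
    return 4.0 * (Z @ Z.T - M) @ Z

Z = rng.standard_normal((d, r)) / np.sqrt(d)   # random initialization
lr = 0.05
for _ in range(5000):
    Z -= lr * grad(Z)

# f(ZQ) = f(Z) for orthogonal Q, so compare ZZ^T with M directly
rel_err = np.linalg.norm(Z @ Z.T - M) / np.linalg.norm(M)
```

The same observed-entries mask as in the rank-one sketch can be dropped into `grad` to obtain the rank-r matrix-completion objective.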
