slides - SNAP - Stanford University

Can gradient descent recover true parameters?

How nice (smooth, without local minima) is the optimization space?


Generate a graph from random parameters

Start at random point and use gradient descent

We recover the true parameters 98% of the time
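The recipe above can be sketched in miniature. The slides fit the parameters of a full graph model, but the same generate-then-recover experiment works for a toy model; this is a hypothetical illustration that recovers the edge probability p of an Erdős–Rényi graph by gradient ascent on its log-likelihood, starting from a random point:

```python
import random

random.seed(0)

# Toy stand-in for the slides' experiment: the graph model here is
# Erdos-Renyi with a single parameter p (an assumption for illustration).
n = 200
true_p = 0.3                      # "random" true parameter
pairs = n * (n - 1) // 2

# Step 1: generate a graph from the true parameter.
edges = sum(1 for _ in range(pairs) if random.random() < true_p)

# Step 2: start at a random point and run gradient ascent on
#   log L(p) = edges*log(p) + (pairs - edges)*log(1 - p)
p = random.uniform(0.05, 0.95)
lr = 1e-6
for _ in range(2000):
    grad = edges / p - (pairs - edges) / (1 - p)
    p = min(max(p + lr * grad, 1e-6), 1 - 1e-6)

# Step 3: check that the true parameter was recovered.
print(f"true p = {true_p:.3f}, recovered p = {p:.3f}")
```

The maximum-likelihood estimate is edges/pairs, so gradient ascent should land within sampling noise of true_p; the slides report the analogous recovery succeeding 98% of the time for their model.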

How does the algorithm converge to the true parameters over gradient descent iterations?

[Plots: log-likelihood, average absolute parameter error, and first eigenvalue vs. gradient descent iteration]


11/12/2009 Jure Leskovec, Stanford CS322: Network Analysis

