slides - SNAP - Stanford University

Can gradient descent recover true parameters?

How nice (smooth, without local minima) is the optimization space?

Generate a graph from random parameters

Start at random point and use gradient descent

We recover the true parameters 98% of the time
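The recovery experiment above can be sketched with a toy stand-in (not the actual Kronecker-graph likelihood or KronFit procedure): generate a graph from a known parameter, then recover it by gradient ascent on the log-likelihood from a random starting point. The model (Erdős–Rényi with edge probability p), graph size, learning rate, and iteration count are all illustrative assumptions.

```python
import random

# Toy sketch (assumed setup, not KronFit): recover the edge probability
# p_true of an Erdos-Renyi graph by gradient ascent on the Bernoulli
# log-likelihood, starting from a random parameter value.
random.seed(0)

n = 200                      # number of nodes (illustrative choice)
p_true = 0.3                 # "true parameter" used to generate the graph
pairs = n * (n - 1) // 2
edges = sum(1 for _ in range(pairs) if random.random() < p_true)

# Log-likelihood: L(p) = edges*log(p) + (pairs - edges)*log(1 - p)
# Gradient:       dL/dp = edges/p - (pairs - edges)/(1 - p)
p = random.uniform(0.05, 0.95)   # random starting point
lr = 1e-7                        # small step size keeps ascent stable
for _ in range(2000):
    grad = edges / p - (pairs - edges) / (1 - p)
    p = min(max(p + lr * grad, 1e-6), 1 - 1e-6)  # clamp to (0, 1)

# p is now close to p_true, mirroring the slide's recovery result
```

For this one-parameter likelihood the surface is smooth and unimodal, so gradient ascent reliably reaches the maximum; the slide's point is that the (much higher-dimensional) Kronecker likelihood behaves similarly well in practice.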

How does the algorithm converge to the true parameters with gradient descent iterations?

[Plots vs. gradient descent iteration: log-likelihood, average absolute error, 1st eigenvalue, diameter]

11/12/2009 Jure Leskovec, Stanford CS322: Network Analysis

