
Deep Learning with PyTorch Step-by-Step: A Beginner's Guide, by Daniel Voigt Godoy (Leanpub)

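The listing below picks up at Step 1 of the training loop and refers to x_train_tensor, y_train_tensor, b, w, lr, and n_epochs, all defined earlier in the chapter. A minimal setup sketch so the loop runs standalone, assuming the chapter's synthetic linear data (true b = 1, w = 2) and skipping the train/validation split:

import numpy as np
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Synthetic data: y = 1 + 2x + noise (assumed to match the chapter's
# data generation step)
np.random.seed(42)
x = np.random.rand(100, 1)
y = 1 + 2 * x + .1 * np.random.randn(100, 1)

# Transforms the Numpy arrays into PyTorch tensors on the chosen device
x_train_tensor = torch.as_tensor(x).float().to(device)
y_train_tensor = torch.as_tensor(y).float().to(device)

# Initializes parameters "b" and "w" randomly, with REQUIRES_GRAD on,
# creating them directly on the chosen device
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, dtype=torch.float, device=device)

lr = 0.1        # learning rate
n_epochs = 1000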

for epoch in range(n_epochs):
    # Step 1 - Computes model's predicted output - forward pass
    yhat = b + w * x_train_tensor

    # Step 2 - Computes the loss
    # We are using ALL data points, so this is BATCH gradient
    # descent. How wrong is our model? That's the error!
    error = (yhat - y_train_tensor)
    # It is a regression, so it computes mean squared error (MSE)
    loss = (error ** 2).mean()

    # Step 3 - Computes gradients for both "b" and "w"
    # parameters. No more manual computation of gradients!
    # b_grad = 2 * error.mean()
    # w_grad = 2 * (x_tensor * error).mean()
    # We just tell PyTorch to work its way BACKWARDS
    # from the specified loss!
    loss.backward()

    # Step 4 - Updates parameters using gradients and
    # the learning rate. But not so fast...
    # FIRST ATTEMPT - just using the same code as before
    # AttributeError: 'NoneType' object has no attribute 'zero_'
    # b = b - lr * b.grad
    # w = w - lr * w.grad
    # print(b)

    # SECOND ATTEMPT - using in-place Python assignment
    # RuntimeError: a leaf Variable that requires grad
    # has been used in an in-place operation.
    # b -= lr * b.grad
    # w -= lr * w.grad

    # THIRD ATTEMPT - NO_GRAD for the win!
    # We need to use NO_GRAD to keep the update out of
    # the gradient computation. Why is that? It boils
    # down to the DYNAMIC GRAPH that PyTorch uses...
    with torch.no_grad():
        b -= lr * b.grad
        w -= lr * w.grad

    # PyTorch is "clingy" to its computed gradients; we
    # need to tell it to let it go...
    b.grad.zero_()
    w.grad.zero_()
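Why the zeroing at the end? backward() accumulates gradients into .grad instead of replacing them, so without zero_() each epoch would pile its gradients on top of the previous ones. A standalone sketch (a toy tensor, not the chapter's data) makes this visible:

import torch

x = torch.tensor(2.0, requires_grad=True)

(x ** 2).backward()
print(x.grad)  # tensor(4.) - d(x^2)/dx = 2x at x = 2

(x ** 2).backward()
print(x.grad)  # tensor(8.) - the new gradient was ADDED to the old one

x.grad.zero_()
print(x.grad)  # tensor(0.) - clean slate for the next pass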

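As for the two failed attempts: plain assignment (the first attempt) replaces b with a brand-new tensor that is the result of an operation, so it is no longer a leaf; backward() only populates .grad for leaf tensors, which is why b.grad ends up None and b.grad.zero_() raises the AttributeError. The in-place version (the second attempt) keeps the leaf but tries to record the update in the graph, hence the RuntimeError. A standalone sketch (a toy loss, not the chapter's model):

import torch

lr = 0.1
b = torch.randn(1, requires_grad=True)
(b ** 2).mean().backward()

# FIRST ATTEMPT: assignment creates a NEW, non-leaf tensor
b_new = b - lr * b.grad
print(b_new.is_leaf)  # False - its .grad will stay None after backward()

# SECOND ATTEMPT: in-place update on a leaf that requires grad
try:
    b -= lr * b.grad
except RuntimeError as e:
    print(e)  # a leaf Variable that requires grad is being used
              # in an in-place operation

# THIRD ATTEMPT: no_grad() keeps the update out of the graph
with torch.no_grad():
    b -= lr * b.grad
print(b.is_leaf, b.requires_grad)  # True True - still the same leaf tensor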
