
The chosen activation function is the rectified linear unit (ReLU), one of the most commonly used activation functions.
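As a quick refresher, ReLU simply zeroes out negative inputs, ReLU(z) = max(0, z). The snippet below is a minimal sketch (not from the book) showing both its functional and module forms in PyTorch:

import torch
import torch.nn as nn

z = torch.tensor([-2.0, -0.5, 0.0, 1.0, 3.0])

# Functional form: element-wise max(0, z)
print(torch.relu(z))   # tensor([0., 0., 0., 1., 3.])

# Module form, convenient inside nn.Sequential models
activation = nn.ReLU()
print(activation(z))   # same output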

We kept the bias out of the picture for the sake of comparing this model to the previous one, which is identical except for the activation functions introduced after each hidden layer.

In real problems, as a general rule, you should keep bias=True.
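For reference, such a model could be built like the sketch below; the layer sizes here are hypothetical and not necessarily the ones used for model_relu in the book:

import torch.nn as nn

# Hypothetical sizes: 25 input features, two hidden layers, one output unit
model = nn.Sequential(
    nn.Linear(25, 5, bias=False),
    nn.ReLU(),                    # activation after the first hidden layer
    nn.Linear(5, 3, bias=False),
    nn.ReLU(),                    # activation after the second hidden layer
    nn.Linear(3, 1, bias=False),
    nn.Sigmoid(),                 # probability output for binary classification
)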

Model Training

Let’s train our new, deep, and activated model for 50 epochs using the StepByStep class and visualize the losses.
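The StepByStep instance below takes the model together with a loss function and an optimizer. Since their definitions are not shown in this excerpt, the sketch below is one plausible configuration; the actual loss and optimizer choices in the book may differ:

import torch.nn as nn
import torch.optim as optim

lr = 0.1  # hypothetical learning rate

binary_loss_fn = nn.BCELoss()  # binary cross-entropy on predicted probabilities
optimizer_relu = optim.SGD(model_relu.parameters(), lr=lr)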

Model Training

n_epochs = 50

sbs_relu = StepByStep(model_relu, binary_loss_fn, optimizer_relu)
sbs_relu.set_loaders(train_loader, val_loader)
sbs_relu.train(n_epochs)

fig = sbs_relu.plot_losses()

Figure 4.17 - Losses

This is more like it! But to really grasp the difference made by the activation functions, let’s plot the losses of all models on the same chart.
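One way to overlay the curves is sketched below, assuming each trained StepByStep instance keeps its per-epoch training losses in a losses attribute, and using hypothetical names (sbs_logistic, sbs_nn) for the previously trained instances:

import matplotlib.pyplot as plt

# Hypothetical names for the previously trained instances
all_models = {'logistic': sbs_logistic,
              'deep (no activation)': sbs_nn,
              'deep + ReLU': sbs_relu}

fig, ax = plt.subplots(figsize=(10, 4))
for name, sbs in all_models.items():
    ax.plot(sbs.losses, label=name)  # training loss per epoch
ax.set_xlabel('Epochs')
ax.set_ylabel('Training Loss')
ax.legend()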

