
Evaluation

How can we evaluate the model? We can compute the validation loss; that is, how wrong the model’s predictions are for unseen data.

First, we need to use the model to compute predictions and then use the loss function to compute the loss, given our predictions and the true labels. Sound familiar? These are pretty much the first two steps of the training step function we built as Helper Function #1.

So, we can use that code as a starting point, getting rid of steps 3 and 4 (computing gradients and updating parameters), and, most importantly, calling the model’s eval() method. The only thing it does is set the model to evaluation mode (just like its train() counterpart sets it to training mode), so the model can adjust its behavior accordingly when it has to perform some operations, like dropout.
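
Putting this together, a validation step helper could look like the sketch below. It mirrors the structure of Helper Function #1, keeping only steps 1 and 2; the names make_val_step_fn and perform_val_step_fn are illustrative, not necessarily the ones used later on.

def make_val_step_fn(model, loss_fn):
    # Builds a function that performs a step in the validation loop
    def perform_val_step_fn(x, y):
        # Sets the model to EVAL mode
        model.eval()

        # Step 1: computes the model's predictions (forward pass)
        yhat = model(x)
        # Step 2: computes the loss
        loss = loss_fn(yhat, y)
        # There is no step 3 or 4: no gradients, no parameter update
        return loss.item()

    return perform_val_step_fn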

"Why is setting the mode so important?"

As mentioned above, dropout (a regularization technique commonly used for reducing overfitting) is the main reason for it, since it requires the model to behave differently during training and evaluation. In a nutshell, dropout randomly zeroes the outputs of some units during training.
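
A tiny, self-contained example (using a made-up model, just for illustration) shows the difference the mode makes:

import torch
import torch.nn as nn

# Toy model: a linear layer followed by dropout, just for illustration
dropout_model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

dropout_model.train()    # training mode: dropout is active
print(dropout_model(x))  # some outputs are zeroed; results change on every call

dropout_model.eval()     # evaluation mode: dropout does nothing
print(dropout_model(x))  # deterministic output, same on every call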

We’ll get back to dropout in the second volume of the series.

What would happen if this behavior persisted outside of training time? You would end up with possibly different predictions for the same input, since different units would be dropped every time you made a prediction. It would ruin evaluation and, if deployed, would also ruin the confidence of the user.

We don’t want that, so we use model.eval() to prevent it!
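
As a rough sketch, the validation loss for an epoch could then be computed by looping over the validation mini-batches, much like the training loop does. This assumes model, loss_fn, device, and a val_loader (a DataLoader built on the validation set) have already been defined, and uses the make_val_step_fn sketched above.

import numpy as np
import torch

val_step_fn = make_val_step_fn(model, loss_fn)

with torch.no_grad():  # no gradients are needed for evaluation
    mini_batch_losses = []
    for x_val, y_val in val_loader:
        x_val = x_val.to(device)
        y_val = y_val.to(device)
        mini_batch_losses.append(val_step_fn(x_val, y_val))

# The validation loss is the average over the mini-batches
val_loss = np.mean(mini_batch_losses)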
