
Output

tensor(1.7188)

And what if we want to simply ignore data points with label (y=2)?

loss_fn = nn.NLLLoss(ignore_index=2)

loss_fn(dummy_log_probs, dummy_labels)

Output

tensor(1.5599)
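If you'd like to check what ignore_index is doing under the hood, here is a minimal, self-contained sketch (the tensor values below are made up for illustration, so the resulting loss will differ from the output above): ignoring a label is equivalent to averaging the negative log probabilities over the remaining data points only.

import torch
import torch.nn as nn

torch.manual_seed(11)
# made-up log probabilities for five points and three classes
dummy_log_probs = nn.LogSoftmax(dim=-1)(torch.randn(5, 3))
dummy_labels = torch.tensor([0, 0, 1, 2, 1])

# built-in: points labeled y=2 are left out of the average
loss_fn = nn.NLLLoss(ignore_index=2)
auto_loss = loss_fn(dummy_log_probs, dummy_labels)

# manual: pick each point's log probability for its true class...
picked = dummy_log_probs.gather(1, dummy_labels.view(-1, 1)).squeeze()
# ...and average the negative values over the points NOT labeled 2
manual_loss = -picked[dummy_labels != 2].mean()

print(auto_loss, manual_loss)  # the two values match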

And, once again, there is yet another loss function available for multiclass classification. And, once again, it is very important to know when to use one or the other, so you don't end up with an inconsistent combination of model and loss function.

Cross-Entropy Loss

The former loss function took log probabilities as an argument (together with the labels, obviously). Guess what this function takes? Logits, of course! This is the multiclass version of nn.BCEWithLogitsLoss().

"What does it mean, in practical terms?"

It means you should NOT add a logsoftmax as the last layer of your model when using this loss function. This loss function combines both the logsoftmax layer and the negative log-likelihood loss into one.
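To make the equivalence concrete, here is a minimal sketch (again with made-up logits and labels): feeding logits straight into nn.CrossEntropyLoss() yields exactly the same value as applying nn.LogSoftmax() first and then nn.NLLLoss().

import torch
import torch.nn as nn

torch.manual_seed(11)
# made-up logits for five points and three classes
dummy_logits = torch.randn(5, 3)
dummy_labels = torch.tensor([0, 0, 1, 2, 1])

# logits go straight into the cross-entropy loss
ce_loss = nn.CrossEntropyLoss()(dummy_logits, dummy_labels)

# ...which is the same as log softmax followed by NLL loss
nll_loss = nn.NLLLoss()(nn.LogSoftmax(dim=-1)(dummy_logits), dummy_labels)

print(ce_loss, nll_loss)  # the two values match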
