The first summation adds up the errors corresponding to the points in the positive class. The second summation adds up the errors corresponding to the points in the negative class. I believe the formula above is quite straightforward and easy to understand. Unfortunately, it is usually skipped over, and only its equivalent is presented:

$$\text{BCE} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log\left(\hat{y}_i\right) + \left(1 - y_i\right) \log\left(1 - \hat{y}_i\right)\right]$$

Equation 3.15 - Binary Cross-Entropy formula, the clever way

The formula above is a clever way of computing the loss in a single expression, sure, but the split between positive and negative points is less obvious. If you pause for a minute, you’ll realize that points in the positive class (y=1) have their second term equal to zero, while points in the negative class (y=0) have their first term equal to zero.

Let’s see how it looks in code:

# Sums y*log(yhat) + (1-y)*log(1-yhat) over all points,
# then averages and negates
summation = torch.sum(
    dummy_labels * torch.log(dummy_predictions) +
    (1 - dummy_labels) * torch.log(1 - dummy_predictions)
)
loss = -summation / n_total
loss

Output

tensor(0.1643)

Of course, we got the same loss (0.1643) as before.
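To make the split between the two classes explicit, here is a minimal sketch of the two-summation version (the variable names positive_pred, negative_pred, first_summation, and second_summation are ours, chosen for illustration; dummy_labels, dummy_predictions, and n_total are the same tensors used above):

# Selects predictions for points in the positive class (y=1)
positive_pred = dummy_predictions[dummy_labels == 1]
# Selects predictions for points in the negative class (y=0)
negative_pred = dummy_predictions[dummy_labels == 0]
# First summation: errors of the positive points, log(yhat)
first_summation = torch.sum(torch.log(positive_pred))
# Second summation: errors of the negative points, log(1 - yhat)
second_summation = torch.sum(torch.log(1 - negative_pred))
# Negates the total and averages over all points
loss = -(first_summation + second_summation) / n_total
loss

This should produce the same value as the single-expression version, since each point contributes to exactly one of the two summations.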

For a very detailed explanation of the rationale behind this loss function, make sure to check my post: "Understanding binary cross-entropy / log loss: a visual explanation." [70]

BCELoss

Sure enough, PyTorch implements the binary cross-entropy loss, nn.BCELoss(). Just like its regression counterpart, nn.MSELoss(), introduced in Chapter 1, it is a higher-order function that returns the actual loss function.
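As a minimal sketch of what that looks like, assuming dummy_predictions and dummy_labels are the float tensors from the example above:

import torch.nn as nn

# Calling nn.BCELoss() returns the actual loss function
# (its default reduction is the mean over all points)
loss_fn = nn.BCELoss()
# The returned function takes predicted probabilities first, then labels
loss = loss_fn(dummy_predictions, dummy_labels)
loss

With the default mean reduction, this should return the same loss value as the manual computation above.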
