
Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub


It is not that hard, to be honest. Remember the reduction argument? If we set it to 'sum', our loss function will return only the numerator of the equation above, and then we can divide it by the weighted counts ourselves:

loss_fn_imb_sum = nn.BCEWithLogitsLoss(
    reduction='sum',
    pos_weight=pos_weight
)

loss = loss_fn_imb_sum(dummy_imb_logits, dummy_imb_labels)
loss = loss / (pos_weight * n_pos + n_neg)
loss

Output

tensor([0.1643])

There we go!
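The snippet above relies on tensors (dummy_imb_logits, dummy_imb_labels, pos_weight, n_pos, n_neg) defined earlier in the book. As a self-contained sketch, here is the same idea with made-up dummy values (the data below is an assumption, not the book's): a sum-reduced, weighted loss divided by the weighted counts matches the per-element losses summed and divided the same way.

```python
import torch
import torch.nn as nn

torch.manual_seed(42)
# Hypothetical imbalanced dummy data: 2 positive, 8 negative examples
dummy_imb_labels = torch.tensor(
    [1., 1., 0., 0., 0., 0., 0., 0., 0., 0.]
).view(-1, 1)
dummy_imb_logits = torch.randn(10, 1)

n_pos = (dummy_imb_labels == 1.).sum().float()  # 2.0
n_neg = (dummy_imb_labels == 0.).sum().float()  # 8.0
pos_weight = (n_neg / n_pos).view(1)            # tensor([4.])

# 'sum' reduction returns only the numerator: the weighted sum of losses
loss_fn_imb_sum = nn.BCEWithLogitsLoss(reduction='sum', pos_weight=pos_weight)
loss = loss_fn_imb_sum(dummy_imb_logits, dummy_imb_labels)
# ... so we divide by the weighted counts ourselves
loss = loss / (pos_weight * n_pos + n_neg)

# Cross-check with 'none' reduction: per-element weighted losses,
# summed and divided by the same weighted counts
loss_fn_imb_none = nn.BCEWithLogitsLoss(reduction='none', pos_weight=pos_weight)
weighted_mean = (
    loss_fn_imb_none(dummy_imb_logits, dummy_imb_labels).sum()
    / (pos_weight * n_pos + n_neg)
)
```

Note that this is different from the default 'mean' reduction, which would divide the weighted sum by the plain number of elements (ten here) instead of by the weighted counts (sixteen here).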

Model Configuration

In Chapter 2.1, we ended up with a lean "Model Configuration" section: we only need to define a model, an appropriate loss function, and an optimizer. Let's define a model that produces logits and use nn.BCEWithLogitsLoss() as the loss function.

Since we have two features, and we are producing logits instead of probabilities, our model will have one layer and one layer alone: Linear(2, 1). We will keep using the SGD optimizer with a learning rate of 0.1 for now.
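Put together, that configuration might look like the sketch below (the variable names are assumptions; the book may organize its configuration code differently):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(42)
# One layer, and one layer alone: two features in, one logit out.
# No sigmoid at the end, since BCEWithLogitsLoss applies it internally.
model = nn.Sequential(nn.Linear(2, 1))

# Loss function that takes logits (not probabilities) as input
loss_fn = nn.BCEWithLogitsLoss()

# Plain SGD with a learning rate of 0.1
optimizer = optim.SGD(model.parameters(), lr=0.1)
```

Keeping the sigmoid out of the model and inside the loss function is numerically more stable, which is precisely why we produce logits in the first place.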

