Deep-Learning-with-PyTorch

Graphing the positives and negatives

[Figure: high bark threshold, high precision — Preston mostly sleeps]

Figure 12.9 Preston’s choice of threshold prioritizes minimizing false positives. Cats get left alone; only burglars are barked at!

12.3.3 Implementing precision and recall in logMetrics

Both precision and recall are valuable metrics to track during training, since they provide important insight into how the model is behaving. If either of them drops to zero (as we saw in chapter 11!), it’s likely that our model has started to behave in a degenerate manner. We can use the exact details of the behavior to guide where to investigate and experiment with getting training back on track. We’d like to update the logMetrics function to add precision and recall to the output we see for each epoch, to complement the loss and correctness metrics we already have.

We’ve been defining precision and recall in terms of “true positives” and the like thus far, so we will continue to do so in the code. It turns out that we are already computing some of the values we need, though we had named them differently.

Listing 12.1 training.py:315, LunaTrainingApp.logMetrics

neg_count = int(negLabel_mask.sum())
pos_count = int(posLabel_mask.sum())

# Correctly predicted negatives are exactly the true negatives,
# and correctly predicted positives are exactly the true positives.
trueNeg_count = neg_correct = int((negLabel_mask & negPred_mask).sum())
truePos_count = pos_correct = int((posLabel_mask & posPred_mask).sum())

# Every actual negative we didn't get right was predicted positive,
# and every actual positive we missed was predicted negative.
falsePos_count = neg_count - neg_correct
falseNeg_count = pos_count - pos_correct
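With those four counts in hand, precision and recall follow directly from their definitions. Here is a minimal standalone sketch, using made-up counts in place of the tensor-derived values in logMetrics (the variable values are illustrative, not from the book):

```python
# Hypothetical counts, standing in for the values computed in logMetrics.
truePos_count, falseNeg_count = 90, 10   # 100 actual positives
trueNeg_count, falsePos_count = 880, 20  # 900 actual negatives

# Precision: of the samples we predicted positive, how many really were?
precision = truePos_count / (truePos_count + falsePos_count)

# Recall: of the actual positives, how many did we catch?
recall = truePos_count / (truePos_count + falseNeg_count)

print(f"precision={precision:.3f} recall={recall:.3f}")
```

Note that a degenerate model that predicts everything negative drives truePos_count to zero, which sends both precision and recall to zero — exactly the failure mode these metrics are meant to surface.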
