Deep-Learning-with-PyTorch

330 CHAPTER 12 Improving training with metrics and augmentation

Figure 12.11 Computing the final score with avg(p, r). Lighter values are closer to 1.0. (Two heat-map panels, avg(p, r) and f1(p, r), with precision on the x-axis and recall on the y-axis.)

Contrast that with the F1 score: when recall is high but precision is low, trading off a lot of recall for even a little precision will move the score closer to that balanced sweet spot. There's a nice, deep elbow that is easy to slide into. That encouragement to have balanced precision and recall is what we want from our grading metric.
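A quick sketch makes the difference concrete (the helper names here are illustrative, not from the book's codebase): with a badly skewed precision/recall pair, the simple average still hands out a respectable score, while F1 collapses toward the weaker of the two.

```python
# Minimal sketch comparing avg(p, r) with f1(p, r) on a skewed pair.
# Helper names are illustrative, not from the book's code.

def avg_score(p, r):
    return (p + r) / 2

def f1_score(p, r):
    # Harmonic mean of precision and recall; 0 if both are 0.
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Highly skewed: recall near 1.0, precision near 0.
p, r = 0.01, 0.99
print(avg_score(p, r))  # 0.5 -- averaging rewards the skew
print(f1_score(p, r))   # ~0.02 -- F1 punishes it
```

Trading a little recall for a little precision barely moves the average, but it moves F1 sharply toward the balanced sweet spot.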

Let’s say we still want a simpler metric, but one that doesn’t reward skew at all. In

order to correct for the weakness of addition, we might take the minimum of precision

and recall (figure 12.12).

Figure 12.12 Computing the final score with min(p, r). (Two heat-map panels, min(p, r) and f1(p, r), with precision on the x-axis and recall on the y-axis.)
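The min-of-the-two idea can be sketched the same way (again, helper names are mine, not the book's): min(p, r) ignores the stronger value entirely, so skew earns nothing at all.

```python
# Minimal sketch: min(p, r) scores only as well as the weaker metric.
# Helper names are illustrative, not from the book's code.

def min_score(p, r):
    return min(p, r)

def f1_score(p, r):
    return 2 * p * r / (p + r) if (p + r) else 0.0

p, r = 0.01, 0.99
print(min_score(p, r))  # 0.01 -- the stronger metric contributes nothing
print(f1_score(p, r))   # ~0.02 -- F1 is similarly harsh here, but smoother overall
```

Both functions refuse to reward skew; the difference, visible in figure 12.12, is that F1 varies smoothly across the whole precision/recall plane, while min(p, r) is flat along one axis until the weaker metric changes.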
