
Figure 12.2 The metaphors we'll use to modify the metrics measuring our model to make it magnificent: (1) guard dogs, (2) birds and burglars, (3) ratios: recall and precision, (4) new metric: F1 score, (5) balancing (POS/NEG), (6) augmentation, (7) workin' great!

We'll then look at how the resulting values change epoch by epoch during training. Finally, we'll make some much-needed changes to our LunaDataset implementation aimed at improving our training results: (5) Balancing and (6) Augmentation. Then we will see if those experimental changes have the expected impact on our performance metrics.

By the time we're through with this chapter, our trained model will be performing much better: (7) Workin' Great! While it won't be ready to drop into clinical use just yet, it will be capable of producing results that are clearly better than random. This will mean we have a workable implementation of step 4, nodule candidate classification; and once we're finished, we can begin to think about how to incorporate steps 2 (segmentation) and 3 (grouping) into the project.

12.2 Good dogs vs. bad guys: False positives and false negatives

Instead of models and tumors, we're going to consider the two guard dogs in figure 12.3, both fresh out of obedience school. They both want to alert us to burglars—a rare but serious situation that requires prompt attention.

Unfortunately, while both dogs are good dogs, neither is a good guard dog. Our terrier (Roxie) barks at just about everything, while our old hound dog (Preston) barks almost exclusively at burglars—but only if he happens to be awake when they arrive.
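To make the metaphor concrete before we formalize it, here is a minimal sketch in plain Python. It is not the book's code: the helper function and the event counts (ten burglars, a thousand harmless visitors) are made up purely for illustration. It tallies each dog's true positives, false positives, and false negatives and computes the precision, recall, and F1 score that steps (3) and (4) of our plan refer to.

def precision_recall_f1(tp, fp, fn):
    # Precision: of all the barks, how many were actually burglars?
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # Recall: of all the burglars, how many drew a bark?
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 score: the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

burglars, harmless_events = 10, 1000  # made-up counts; burglars are rare

# Roxie barks at everything: she catches every burglar, but also every bird.
print(precision_recall_f1(tp=burglars, fp=harmless_events, fn=0))

# Preston barks only at burglars, but sleeps through half of them.
print(precision_recall_f1(tp=5, fp=0, fn=5))

Roxie's recall is perfect while her precision is tiny; Preston is the mirror image, with perfect precision but poor recall. A single lopsided number can look deceptively good, which is why combining the two ratios into one score will matter later in the chapter.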
