

# Histogram of predicted probabilities for the malignant (positive-label) samples
writer.add_histogram(
    'label_pos',
    metrics_t[METRICS_PRED_P_NDX, posLabel_mask],
    self.totalTrainingSamples_count,
    bins=bins,
)
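For context, here is a minimal self-contained sketch of the same logging pattern, assuming stand-in tensors for the predicted probabilities and labels (the book keeps these in metrics_t, indexed by METRICS_PRED_P_NDX and split by negLabel_mask/posLabel_mask). The tags, the writer's log_dir, and the fixed 0-to-1 bin edges are illustrative choices; fixed bins keep the per-epoch histograms comparable in TensorBoard.

import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter

# Stand-ins for the values collected during an epoch: predicted malignancy
# probabilities and the matching ground-truth labels (True == malignant).
pred_p = torch.rand(1000)
label_g = torch.rand(1000) > 0.8

negLabel_mask = ~label_g
posLabel_mask = label_g

bins = np.linspace(0.0, 1.0, 51)  # fixed bin edges, so epochs stay comparable

writer = SummaryWriter(log_dir='runs/malignancy-example')
totalTrainingSamples_count = 0    # the book uses this counter as the global step

# One histogram per class: how confident is the model on benign vs. malignant?
writer.add_histogram('label_neg', pred_p[negLabel_mask],
                     totalTrainingSamples_count, bins=bins)
writer.add_histogram('label_pos', pred_p[posLabel_mask],
                     totalTrainingSamples_count, bins=bins)
writer.close()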

Now we can take a look at our prediction distribution for benign samples and how it evolves over each epoch. We want to examine two main features of the histograms in figure 14.13. As we would expect if our network is learning anything, in the top row of benign samples and non-nodules, there is a mountain on the left where the network is very confident that what it sees is not malignant. Similarly, there is a mountain on the right for the malignant samples.

But looking closer, we see the capacity problem of fine-tuning only one layer. Focusing on the top-left series of histograms, we see that the mass to the left is somewhat spread out and does not seem to reduce much. There is even a small peak around 1.0, and quite a bit of probability mass is spread out across the entire range. This reflects the loss that didn't want to decrease below 0.3.

Figure 14.13 TensorBoard histogram display for fine-tuning the head only
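As a reminder of what "fine-tuning the head only" means here, the sketch below freezes every parameter except the final classification head before building the optimizer. The helper name and the head_linear attribute are assumptions for illustration (the book's model and fine-tuning code use their own names); the point is simply that only the head's parameters keep requires_grad=True.

import torch.nn as nn

def freeze_all_but_head(model: nn.Module, finetune_blocks=('head_linear',)):
    # Freeze everything, then re-enable gradients only for the named blocks,
    # so training updates just the classification head.
    for name, param in model.named_parameters():
        param.requires_grad_(any(name.startswith(block) for block in finetune_blocks))
    return [p for p in model.parameters() if p.requires_grad]

# Usage: hand only the still-trainable parameters to the optimizer, e.g.
# optimizer = torch.optim.SGD(freeze_all_but_head(model), lr=0.003, momentum=0.99)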
