
392 CHAPTER 13 Using segmentation to find suspected nodules

COLLECTING METRICS

Since we’re going to purposefully skew our numbers for better recall, let’s see just how tilted things will be. In our classification computeBatchLoss, we computed various per-sample values that we used for metrics and the like. We also compute similar values for the overall segmentation results. These true positive and other metrics were previously computed in logMetrics, but due to the size of the result data (recall that each single CT slice from the validation set is a quarter-million pixels!), we need to compute these summary stats live in the computeBatchLoss function.

Listing 13.26 training.py:297, .computeBatchLoss

start_ndx = batch_ndx * batch_size
end_ndx = start_ndx + input_t.size(0)

with torch.no_grad():
    # We threshold the prediction to get "hard" Dice but
    # convert to float for the later multiplication.
    predictionBool_g = (prediction_g[:, 0:1]
                        > classificationThreshold).to(torch.float32)

    # Computing true positives, false positives, and false
    # negatives is similar to what we did when computing
    # the Dice loss.
    tp = (     predictionBool_g *   label_g).sum(dim=[1,2,3])
    fn = ((1 - predictionBool_g) *  label_g).sum(dim=[1,2,3])
    fp = (     predictionBool_g * (~label_g)).sum(dim=[1,2,3])

    # We store our metrics to a large tensor for future
    # reference. This is per batch item rather than averaged
    # over the batch.
    metrics_g[METRICS_LOSS_NDX, start_ndx:end_ndx] = diceLoss_g
    metrics_g[METRICS_TP_NDX, start_ndx:end_ndx] = tp
    metrics_g[METRICS_FN_NDX, start_ndx:end_ndx] = fn
    metrics_g[METRICS_FP_NDX, start_ndx:end_ndx] = fp

As we discussed at the beginning of this section, we can compute our true positives and so on by multiplying our prediction (or its negation) and our label (or its negation) together. Since we’re not as worried about the exact values of our predictions here (it doesn’t really matter if we flag a pixel as 0.6 or 0.9—as long as it’s over the threshold, we’ll call it part of a nodule candidate), we are going to create predictionBool_g by comparing it to our threshold of 0.5.
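To make the multiply-and-sum trick concrete, here is a small self-contained sketch on toy tensors. The shapes (a batch of two 1×2×2 "slices") and values are illustrative stand-ins, not the real CT data, but the thresholding and the per-sample tp/fn/fp sums follow the same pattern as the listing.

```python
import torch

# Toy stand-ins, shaped (batch, channel, rows, cols).
prediction_g = torch.tensor([[[[0.9, 0.2],
                               [0.6, 0.1]]],
                             [[[0.4, 0.8],
                               [0.7, 0.3]]]])
label_g = torch.tensor([[[[True, False],
                          [False, False]]],
                        [[[False, True],
                          [True, True]]]])

classificationThreshold = 0.5

with torch.no_grad():
    # Hard-threshold the predictions, then convert to float
    # so the elementwise products below work.
    predictionBool_g = (prediction_g
                        > classificationThreshold).to(torch.float32)

    # Summing over everything but the batch dimension gives
    # one count per sample.
    tp = (     predictionBool_g *   label_g).sum(dim=[1, 2, 3])
    fn = ((1 - predictionBool_g) *  label_g).sum(dim=[1, 2, 3])
    fp = (     predictionBool_g * (~label_g)).sum(dim=[1, 2, 3])

print(tp)  # tensor([1., 2.])
print(fn)  # tensor([0., 1.])
print(fp)  # tensor([1., 0.])
```

Note that the three counts partition the interesting pixels: every labeled pixel is either a true positive or a false negative, and every flagged-but-unlabeled pixel is a false positive.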

13.6.4 Getting images into TensorBoard

One of the nice things about working on segmentation tasks is that the output is easily represented visually. Being able to eyeball our results can be a huge help for determining whether a model is progressing well (but perhaps needs more training), or if it has gone off the rails (so we need to stop wasting our time with further training). There are many ways we could package up our results as images, and many ways we could display them. TensorBoard has great support for this kind of data, and we already have TensorBoard SummaryWriter instances integrated with our training runs, so we’re going to use TensorBoard. Let’s see what it takes to get everything hooked up.
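As a sketch of what the hookup can look like, the snippet below logs a synthetic slice to TensorBoard with its predicted mask tinted red. The tag name, log directory, and random stand-in data are illustrative assumptions, not the book's actual logImages code, but the SummaryWriter.add_image call with an HWC-format array is the mechanism we'll rely on.

```python
import numpy as np
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='runs/seg_demo')  # illustrative log dir

ct_slice = np.random.rand(512, 512).astype(np.float32)  # stand-in CT slice
prediction = np.random.rand(512, 512) > 0.5             # stand-in mask

# Build an HWC RGB image: the grayscale slice in all three
# channels, with predicted pixels pushed to full red.
image = np.stack([ct_slice] * 3, axis=-1)
image[prediction, 0] = 1.0

# dataformats tells TensorBoard the array is height-width-channel.
writer.add_image('val/prediction', image, global_step=0,
                 dataformats='HWC')
writer.flush()
writer.close()
```

Running this and pointing tensorboard --logdir runs at the output directory shows the image under the IMAGES tab.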

We’ll add a logImages function to our main application class and call it with both our training and validation data loaders. While we are at it, we will make another
