
Training and validating the model


By recording the label, prediction, and loss for each and every training (and later, validation) sample, we have a wealth of detailed information we can use to investigate the behavior of our model. For now, we're going to focus on compiling per-class statistics, but we could easily use this information to find the most badly misclassified samples and start to investigate why. Again, for some projects this kind of information will be less interesting, but it's good to remember that you have these options available.
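As a concrete illustration, here is a minimal sketch of what working with such a per-sample metrics tensor could look like. The layout (one row each for label, prediction, and loss; one column per sample) and all names here (METRICS_LABEL_NDX, perClassStats, worstPositives, and so on) are illustrative assumptions, not the chapter's actual code.

import torch

# Illustrative index constants for the rows of a per-sample metrics tensor.
METRICS_LABEL_NDX = 0
METRICS_PRED_NDX = 1
METRICS_LOSS_NDX = 2
METRICS_SIZE = 3

def perClassStats(metrics_t):
    # metrics_t has shape (METRICS_SIZE, num_samples): one column per sample.
    neg_mask = metrics_t[METRICS_LABEL_NDX] <= 0.5
    pos_mask = ~neg_mask
    return {
        'loss/all': metrics_t[METRICS_LOSS_NDX].mean().item(),
        'loss/neg': metrics_t[METRICS_LOSS_NDX, neg_mask].mean().item(),
        'loss/pos': metrics_t[METRICS_LOSS_NDX, pos_mask].mean().item(),
    }

def worstPositives(metrics_t, k=10):
    # The most badly misclassified positive samples are the ones with the
    # largest recorded per-sample loss among the positively labeled samples.
    pos_mask = metrics_t[METRICS_LABEL_NDX] > 0.5
    pos_ndx = pos_mask.nonzero(as_tuple=True)[0]
    worst = metrics_t[METRICS_LOSS_NDX, pos_mask].argsort(descending=True)[:k]
    return pos_ndx[worst]  # indices into the original sample ordering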

11.5.2 The validation loop is similar

The validation loop in figure 11.8 looks very similar to training but is somewhat simplified. The key difference is that validation is read-only. Specifically, the loss value returned is not used, and the weights are not updated.

[Figure: flowchart of the script. Init model (initialized with random weights, ending fully trained) and init data loaders feed a loop over epochs, which contains a training loop (load batch tuple, classify batch, calculate loss, record metrics, update weights) and a validation loop (load batch tuple, classify batch, calculate loss, record metrics), followed by logging metrics to the console and TensorBoard.]

Figure 11.8 The training and validation script we will implement in this chapter, with a focus on the per-epoch validation loop

Nothing about the model should have changed between the start and end of the function call. In addition, it's quite a bit faster due to the with torch.no_grad() context manager explicitly informing PyTorch that no gradients need to be computed.

Listing 11.13 training.py:137, LunaTrainingApp.main

def main(self):
    for epoch_ndx in range(1, self.cli_args.epochs + 1):
        # ... line 157
        valMetrics_t = self.doValidation(epoch_ndx, val_dl)
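For reference, here is a sketch of what a doValidation method consistent with this loop might look like. The helper computeBatchLoss and the constant METRICS_SIZE are assumed to be defined elsewhere in the script, and the exact signatures shown here are assumptions rather than the book's verbatim code.

def doValidation(self, epoch_ndx, val_dl):
    with torch.no_grad():  # read-only pass: no autograd graph is built
        self.model.eval()  # puts dropout/batchnorm into evaluation mode
        valMetrics_g = torch.zeros(  # one column of metrics per sample
            METRICS_SIZE,
            len(val_dl.dataset),
            device=self.device,
        )
        for batch_ndx, batch_tup in enumerate(val_dl):
            # The returned loss value is ignored; no backward pass or
            # optimizer step happens here, so the weights stay fixed.
            self.computeBatchLoss(
                batch_ndx, batch_tup, val_dl.batch_size, valMetrics_g)
    return valMetrics_g.to('cpu')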
