
352 CHAPTER 12 Improving training with metrics and augmentation

remember that sometimes the 'flip' augmentation will result in no flip. Returning always-flipped images is just as limiting as not flipping in the first place. Now let's see if any of this makes a difference.

12.6.2 Seeing the improvement from data augmentation

We are going to train additional models, one per augmentation type discussed in the last section, plus an additional training run that combines all of the augmentation types. Once they're finished, we'll take a look at our numbers in TensorBoard.

In order to be able to turn our new augmentation types on and off, we need to expose the construction of augmentation_dict to our command-line interface. Arguments to our program will be added by parser.add_argument calls (not shown, but similar to the ones our program already has), which will then be fed into code that actually constructs augmentation_dict.
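The parser.add_argument calls themselves aren't shown in the book; a minimal sketch of what they could look like, with the flag names inferred from the cli_args attributes used in the listing (the help strings and defaults here are assumptions):

```python
import argparse

# Hypothetical sketch: the real training.py defines similar flags.
# Flag names are inferred from the cli_args attribute names.
parser = argparse.ArgumentParser()
parser.add_argument('--augmented', action='store_true', default=False,
                    help="Augment the training data.")
for aug in ('flip', 'offset', 'scale', 'rotate', 'noise'):
    parser.add_argument('--augment-' + aug, action='store_true', default=False,
                        help="Augment the training data with the %s transform." % aug)

# argparse converts dashes to underscores: --augment-flip -> cli_args.augment_flip
cli_args = parser.parse_args(['--augment-flip'])
print(cli_args.augment_flip, cli_args.augment_noise)
```

Note the dash-to-underscore conversion: the command-line flag --augment-offset becomes the attribute self.cli_args.augment_offset inside the program.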

Listing 12.18 training.py:105, LunaTrainingApp.__init__

self.augmentation_dict = {}
if self.cli_args.augmented or self.cli_args.augment_flip:
    self.augmentation_dict['flip'] = True
if self.cli_args.augmented or self.cli_args.augment_offset:
    self.augmentation_dict['offset'] = 0.1
if self.cli_args.augmented or self.cli_args.augment_scale:
    self.augmentation_dict['scale'] = 0.2
if self.cli_args.augmented or self.cli_args.augment_rotate:
    self.augmentation_dict['rotate'] = True
if self.cli_args.augmented or self.cli_args.augment_noise:
    self.augmentation_dict['noise'] = 25.0

These values were empirically chosen to have a reasonable impact, but better values probably exist.
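Pulling that construction into a free function makes it easy to check which augmentations a given flag combination turns on (a sketch for illustration; the keys and values match the listing, but the standalone function and Namespace-based flags are ours):

```python
from argparse import Namespace

def build_augmentation_dict(cli_args):
    # Same construction as in LunaTrainingApp.__init__:
    # --augmented enables every augmentation; each --augment-* flag enables one.
    augmentation_dict = {}
    if cli_args.augmented or cli_args.augment_flip:
        augmentation_dict['flip'] = True
    if cli_args.augmented or cli_args.augment_offset:
        augmentation_dict['offset'] = 0.1
    if cli_args.augmented or cli_args.augment_scale:
        augmentation_dict['scale'] = 0.2
    if cli_args.augmented or cli_args.augment_rotate:
        augmentation_dict['rotate'] = True
    if cli_args.augmented or cli_args.augment_noise:
        augmentation_dict['noise'] = 25.0
    return augmentation_dict

# All flags off by default; flip one at a time to see the effect.
off = dict(augmented=False, augment_flip=False, augment_offset=False,
           augment_scale=False, augment_rotate=False, augment_noise=False)

print(build_augmentation_dict(Namespace(**dict(off, augment_flip=True))))
print(sorted(build_augmentation_dict(Namespace(**dict(off, augmented=True)))))
```

A single flag produces a one-entry dict, while --augmented populates all five entries at once, which is exactly what the combined training run below relies on.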

Now that we have those command-line arguments ready, you can either run the following commands or revisit p2_run_everything.ipynb and run cells 8 through 16. Either way you run it, expect these to take a significant time to finish:

$ .venv/bin/python -m p2ch12.prepcache

You only need to prep the cache once per chapter.

$ .venv/bin/python -m p2ch12.training --epochs 20 \
    --balanced sanity-bal

You might have this run from earlier in the chapter; in that case there's no need to rerun it!

$ .venv/bin/python -m p2ch12.training --epochs 10 \
    --balanced --augment-flip sanity-bal-flip

$ .venv/bin/python -m p2ch12.training --epochs 10 \
    --balanced --augment-offset sanity-bal-offset

$ .venv/bin/python -m p2ch12.training --epochs 10 \
    --balanced --augment-scale sanity-bal-scale

$ .venv/bin/python -m p2ch12.training --epochs 10 \
    --balanced --augment-rotate sanity-bal-rotate

$ .venv/bin/python -m p2ch12.training --epochs 10 \
    --balanced --augment-noise sanity-bal-noise

$ .venv/bin/python -m p2ch12.training --epochs 10 \
    --balanced --augmented sanity-bal-aug
