
Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub


Run - Model Training V2

%run -i model_training/v2.py

"Wow! What happened here?!"

It seems like a lot changed. Let’s take a closer look, step by step:

• We added an inner loop to handle the mini-batches produced by the DataLoader (line 12).

• We sent only one mini-batch to the device, as opposed to sending the whole training set (lines 16 and 17).

For larger datasets, loading data on demand (into a CPU tensor) inside Dataset's __getitem__() method and then sending all data points that belong to the same mini-batch at once to your GPU (device) is the way to go to make the best use of your graphics card's RAM.

Moreover, if you have many GPUs to train your model on, it is best to keep your dataset "device agnostic" and assign the batches to different GPUs during training.

• We performed a train_step_fn() on a mini-batch (line 21) and appended the corresponding loss to a list (line 22).

• After going through all mini-batches, that is, at the end of an epoch, we calculated the total loss for the epoch, which is the average loss over all mini-batches, appending the result to a list (lines 26 and 28).
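The steps above can be sketched as a single function; this is an illustrative reconstruction, not the book's v2.py listing (so its line numbers will not match), and it assumes a `train_step_fn`, a `train_loader`, and a `device` defined as in the earlier versions:

```python
import numpy as np
import torch

def mini_batch_training(n_epochs, train_loader, train_step_fn, device):
    """Sketch of the Model Training V2 loop (names are illustrative)."""
    losses = []
    for epoch in range(n_epochs):
        mini_batch_losses = []
        # inner loop: handle the mini-batches produced by the DataLoader
        for x_batch, y_batch in train_loader:
            # send only the current mini-batch to the device,
            # not the whole training set
            x_batch = x_batch.to(device)
            y_batch = y_batch.to(device)
            # perform a train step and record the corresponding loss
            mini_batch_loss = train_step_fn(x_batch, y_batch)
            mini_batch_losses.append(mini_batch_loss)
        # at the end of the epoch, the epoch loss is the average
        # loss over all mini-batches
        losses.append(np.mean(mini_batch_losses))
    return losses
```

Note that averaging the mini-batch losses only at the end of the epoch keeps one loss value per epoch, which makes the loss curve directly comparable to the previous full-batch versions.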

After another two updates, our current state of development is:

• Data Preparation V1

• Model Configuration V1

• Model Training V2
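The note above about loading data on demand can be sketched as a custom Dataset; the class name and attributes here are hypothetical, and __getitem__() deliberately returns CPU tensors so the dataset stays device agnostic:

```python
import torch
from torch.utils.data import Dataset

class CustomDataset(Dataset):
    """Illustrative on-demand dataset: each item is built (or fetched)
    only when __getitem__() is called, and stays on the CPU."""
    def __init__(self, x_tensor, y_tensor):
        self.x = x_tensor
        self.y = y_tensor

    def __getitem__(self, index):
        # return a single (CPU) data point; the training loop is
        # responsible for sending whole mini-batches to the GPU
        return (self.x[index], self.y[index])

    def __len__(self):
        return len(self.x)
```

Because the tensors never touch the GPU here, the same dataset can feed batches to any number of devices during training.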

