For more information on the differences between CPUs and GPUs, please refer to this link [44].

If you have a graphics card from NVIDIA, you can use the power of its GPU to

speed up model training. PyTorch supports the use of these GPUs for model

training using CUDA (Compute Unified Device Architecture), which needs to be

previously installed and configured (please refer to the "Setup Guide" for more

information on this).

If you do have a GPU (and you managed to install CUDA), we’re getting to the part

where you get to use it with PyTorch. But, even if you do not have a GPU, you

should stick around in this section anyway. Why? First, you can use a free GPU

from Google Colab, and, second, you should always make your code GPU-ready;

that is, it should automatically run in a GPU, if one is available.

"How do I know if a GPU is available?"

PyTorch has your back once more—you can use cuda.is_available() to find out if

you have a GPU at your disposal and set your device accordingly. So, it is good

practice to figure this out at the top of your code:

Defining Your Device

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

So, if you don’t have a GPU, your device is called cpu. If you do have a GPU, your

device is called cuda or cuda:0. Why isn’t it called gpu, then? Don’t ask me… The

important thing is, your code will always be able to use the appropriate device.

"Why cuda:0? Are there others, like cuda:1, cuda:2 and so on?"

There may be if you are lucky enough to have multiple GPUs in your computer. Since

this is usually not the case, I am assuming you have either one GPU or none. So,

when we tell PyTorch to send a tensor to cuda without any numbering, it will send it

to the current CUDA device, which is device #0 by default.
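
As a quick illustration, here is a minimal sketch of sending a tensor to whatever device was defined above (the tensor itself is just a made-up example):

x = torch.tensor([1.0, 2.0, 3.0])
x = x.to(device)   # moves the tensor to the GPU if one is available
print(x.device)    # prints cuda:0 on a GPU machine, cpu otherwise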

If you are using someone else’s computer and you don’t know how many GPUs it

has, or which model they are, you can figure it out using cuda.device_count() and

cuda.get_device_name():
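
The listing below is a minimal sketch of how those two calls can be combined (the variable name n_cudas is just illustrative):

n_cudas = torch.cuda.device_count()   # how many GPUs PyTorch can see
for i in range(n_cudas):
    # prints the model of each GPU, e.g. "GeForce GTX 1080 Ti"
    print(torch.cuda.get_device_name(i))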
