
Notebook Cell 1.3 - Loading data: turning Numpy arrays into PyTorch tensors

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Our data was in Numpy arrays, but we need to transform them
# into PyTorch tensors and then send them to the
# chosen device
x_train_tensor = torch.as_tensor(x_train).float().to(device)
y_train_tensor = torch.as_tensor(y_train).float().to(device)
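
One aside worth knowing about torch.as_tensor: when the source Numpy array already has a compatible dtype, it shares memory with that array instead of copying it (the .float() cast and the .to(device) transfer, on the other hand, may produce copies). A minimal sketch, using a hypothetical dummy_array that is not part of the cell above:

import numpy as np
import torch

dummy_array = np.array([1.0, 2.0, 3.0])      # float64 array
dummy_tensor = torch.as_tensor(dummy_array)  # shares memory, no copy
dummy_array[0] = 99.0                        # modify the Numpy array...
print(dummy_tensor)  # ...and the tensor changes too:
                     # tensor([99.,  2.,  3.], dtype=torch.float64)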

So, in the cell above, we defined a device, converted both Numpy arrays into PyTorch tensors, cast them to floats, and sent them to the device. Let’s take a look at the types:

# Here we can see the difference - notice that .type() is more
# useful since it also tells us WHERE the tensor is (device)
print(type(x_train), type(x_train_tensor), x_train_tensor.type())

Output - GPU

<class 'numpy.ndarray'> <class 'torch.Tensor'> torch.cuda.FloatTensor

Output - CPU

<class 'numpy.ndarray'> <class 'torch.Tensor'> torch.FloatTensor

If you compare the types of both variables, you’ll get what you’d expect: numpy.ndarray for the first one and torch.Tensor for the second one.

But where does x_train_tensor "live"? Is it a CPU or a GPU tensor? Python’s built-in type() can’t tell you, but PyTorch’s type() method also reveals the location: torch.cuda.FloatTensor, a GPU tensor in this case (assuming you are running on a GPU, of course).
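
A quick sketch contrasting the three ways of inspecting a tensor (assuming the x_train_tensor created above and a GPU runtime; on a CPU you would see torch.FloatTensor and cpu instead):

print(type(x_train_tensor))   # <class 'torch.Tensor'> - no device info
print(x_train_tensor.type())  # torch.cuda.FloatTensor - dtype AND device
print(x_train_tensor.device)  # cuda:0 - the device attribute alone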

There is one more thing to be aware of when using GPU tensors. Remember numpy()? What if we want to turn a GPU tensor back into a Numpy array? We’ll get an error:
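
A sketch of the error and the usual fix (the exact wording of the message may vary across PyTorch versions, and x_train_np is a hypothetical name used here for illustration):

x_train_tensor.numpy()
# TypeError: can't convert cuda:0 device type tensor to numpy.
# Use Tensor.cpu() to copy the tensor to host memory first.

# The fix: copy the tensor back to the CPU before converting
x_train_np = x_train_tensor.cpu().numpy()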
