
# In[46]:
points_t_cont.storage()

# Out[46]:
 4.0
 5.0
 2.0
 1.0
 3.0
 1.0
[torch.FloatStorage of size 6]

Notice that the storage has been reshuffled so that the elements are laid out row by row in the new storage. The stride has been changed to reflect the new layout.
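To see the reshuffling end to end, here is a minimal, self-contained sketch; the points tensor is assumed to match the chapter's earlier listings, and the printed values reproduce the storage shown above:

import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points_t = points.t()                   # transpose: a view, no data copied
print(points_t.is_contiguous())         # False: elements not laid out row by row
print(points_t.stride())                # (1, 2): storage still follows the original order

points_t_cont = points_t.contiguous()   # copies data into row-major order
print(points_t_cont.is_contiguous())    # True
print(points_t_cont.stride())           # (3, 1): plain row-major strides for a 2x3 tensor
print(points_t_cont.storage())          # 4.0, 5.0, 2.0, 1.0, 3.0, 1.0 -- reshuffled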

As a refresher, figure 3.7 shows our diagram again. Hopefully it will all make sense now that we've taken a good look at how tensors are built.

[Figure 3.7 diagram: a 3 × 3 tensor with offset = 1, shape = (3, 3), and stride = (3, 1), viewed over the storage 6 5 7 4 1 3 2 7 3 8. The first index selects rows, the second selects columns; moving +1 in storage steps to the next column (stride[1] = 1), and moving +3 steps to the next row (stride[0] = 3).]

Figure 3.7 Relationship between a tensor's offset, size, and stride. Here the tensor is a view of a larger storage, like one that might have been allocated when creating a larger tensor.
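The figure's layout can also be reproduced in code. The following sketch uses as_strided, which is not part of the chapter's listings, to build exactly such a view over a larger storage; the values mirror the figure:

import torch

# A flat ten-element storage like the one in figure 3.7
base = torch.tensor([6.0, 5.0, 7.0, 4.0, 1.0, 3.0, 2.0, 7.0, 3.0, 8.0])

# as_strided builds a view from an explicit size, stride, and storage offset
view = base.as_strided(size=(3, 3), stride=(3, 1), storage_offset=1)
print(view)                   # rows: [5., 7., 4.], [1., 3., 2.], [7., 3., 8.]
print(view.storage_offset())  # 1
print(view.stride())          # (3, 1)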

3.9 Moving tensors to the GPU

So far in this chapter, when we've talked about storage, we've meant memory on the CPU. PyTorch tensors can also be stored on a different kind of processor: a graphics processing unit (GPU). Every PyTorch tensor can be transferred to (one of) the GPU(s) in order to perform massively parallel, fast computations. All operations performed on the tensor will be carried out using GPU-specific routines that come with PyTorch.
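As a quick sketch of what this looks like in practice (assuming a CUDA-capable GPU is available; otherwise the guarded block is simply skipped):

import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])

if torch.cuda.is_available():              # only run if a CUDA GPU is present
    points_gpu = points.to(device='cuda')  # copy the tensor into GPU memory
    result = 2 * points_gpu                # this multiplication runs on the GPU
    points_cpu = result.to(device='cpu')   # copy the result back to CPU memory
    print(points_cpu)

Note that once a tensor lives on the GPU, operations on it stay on the GPU; data only moves back to the CPU when you explicitly request it, as in the final to(device='cpu') call.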
