
The bottom line is that the subtensor has one fewer dimension, as we would expect, while still indexing the same storage as the original points tensor. This also means that changing the subtensor will have a side effect on the original tensor:

# In[28]:
import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
second_point = points[1]
second_point[0] = 10.0
points

# Out[28]:
tensor([[ 4.,  1.],
        [10.,  3.],
        [ 2.,  1.]])
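As a quick sanity check (a small sketch, not one of the book's numbered listings), we can compare the data pointers of the two storages: both tensors report the same underlying buffer, and the subtensor simply starts at a nonzero storage offset:

# Sketch: verify that points and second_point share one storage
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
second_point = points[1]
print(points.storage().data_ptr() == second_point.storage().data_ptr())  # True
print(second_point.storage_offset())  # 2: second_point starts two elements in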

This might not always be desirable, so when needed we can clone the subtensor into a new tensor:

# In[29]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
second_point = points[1].clone()
second_point[0] = 10.0
points

# Out[29]:
tensor([[4., 1.],
        [5., 3.],
        [2., 1.]])
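The cloned tensor owns its own storage, so the pointer check from before now comes out the other way (again a sketch, not part of the book's listings):

# Sketch: after clone(), second_point has independent storage
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
second_point = points[1].clone()
print(points.storage().data_ptr() == second_point.storage().data_ptr())  # False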

3.8.2 Transposing without copying

Let’s try transposing now. We’ll take our points tensor, which has individual points in the rows and X and Y coordinates in the columns, and turn it around so that individual points are in the columns. We take this opportunity to introduce the t function, a shorthand alternative to transpose for two-dimensional tensors:

# In[30]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points

# Out[30]:
tensor([[4., 1.],
        [5., 3.],
        [2., 1.]])

# In[31]:
points_t = points.t()
points_t

# Out[31]:
tensor([[4., 5., 2.],
        [1., 3., 1.]])
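The section title promises transposing without copying, and we can verify that no data moved: the transposed tensor shares the original storage, and only the stride changes. A quick sketch along the lines of the storage checks above, continuing from the listings just shown:

# Sketch: t() returns a view; storage is shared, only strides differ
print(points.storage().data_ptr() == points_t.storage().data_ptr())  # True
print(points.stride())    # (2, 1)
print(points_t.stride())  # (1, 2)
print(torch.equal(points.t(), points.transpose(0, 1)))  # True: t() is shorthand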
