CHAPTER 15 Deploying to production

We are not here for theory, so let's try tracing and scripting with a very simple function that adds inefficiently over the first dimension:

# In[2]:
import torch   # needed if not already imported earlier in the notebook

def myfn(x):
    y = x[0]
    for i in range(1, x.size(0)):
        y = y + x[i]
    return y

We can trace it:

# In[3]:
inp = torch.randn(5, 5)
traced_fn = torch.jit.trace(myfn, inp)
print(traced_fn.code)

# Out[3]:
def myfn(x: Tensor) -> Tensor:
    y = torch.select(x, 0, 0)    # the indexing in the first line of our function
    # our loop, but completely unrolled and fixed to 1…4 regardless of the size of x:
    y0 = torch.add(y, torch.select(x, 0, 1), alpha=1)
    y1 = torch.add(y0, torch.select(x, 0, 2), alpha=1)
    y2 = torch.add(y1, torch.select(x, 0, 3), alpha=1)
    _0 = torch.add(y2, torch.select(x, 0, 4), alpha=1)
    return _0

TracerWarning: Converting a tensor to a Python index might cause the trace
to be incorrect. We can't record the data flow of Python values, so this
value will be treated as a constant in the future. This means the
trace might not generalize to other inputs!

Scary, but so true!

We see the big warning, and indeed, the code has fixed indexing and additions for five rows; it would not deal as intended with four or six rows.
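To make the failure concrete, here is a quick check that is not one of the book's listings: feeding the traced function a six-row input silently drops the last row, because the trace only ever adds rows 0 through 4 (and a four-row input would fail outright on the hard-coded index 4).

# Not from the book: a sketch of how the fixed-size trace misbehaves
inp6 = torch.randn(6, 5)                            # one more row than the traced input
print(torch.allclose(traced_fn(inp6), myfn(inp6)))  # False: the trace never adds row 5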

This is where scripting helps:

# In[4]:
scripted_fn = torch.jit.script(myfn)
print(scripted_fn.code)

# Out[4]:
def myfn(x: Tensor) -> Tensor:
    y = torch.select(x, 0, 0)
    _0 = torch.__range_length(1, torch.size(x, 0), 1)      # PyTorch constructs the range length from the tensor size
    y0 = y
    for _1 in range(_0):                                    # our for loop, even if we have to take the
        i = torch.__derive_index(_1, 1, 1)                  # funny-looking next line to get our index i
        y0 = torch.add(y0, torch.select(x, 0, i), alpha=1)  # our loop body, which is just a tad more verbose
    return y0
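As a quick sanity check (my addition, not one of the book's listings), the scripted version reads the loop bound from the tensor's size at run time, so it matches the eager function for any number of rows:

# Not from the book: the scripted loop follows x.size(0), so any row count works
for rows in (3, 5, 8):
    t = torch.randn(rows, 5)
    assert torch.allclose(scripted_fn(t), myfn(t))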
