
Interacting with the PyTorch JIT

Our for loop returns the value (y) it calculates.

We can also print the scripted graph, which is closer to the internal representation of TorchScript:

# In[5]:
print(scripted_fn.graph)

# Out[5]:

graph(%x.1 : Tensor):
  %10 : bool = prim::Constant[value=1]()
  %2 : int = prim::Constant[value=0]()
  %5 : int = prim::Constant[value=1]()
  %y.1 : Tensor = aten::select(%x.1, %2, %2)    # the first assignment of y
  %7 : int = aten::size(%x.1, %2)
  %9 : int = aten::__range_length(%5, %7, %5)   # constructing the range is recognizable after we see the code
  %y : Tensor = prim::Loop(%9, %10, %y.1)
    block0(%11 : int, %y.6 : Tensor):           # body of the for loop: selects a slice and adds to y
      %i.1 : int = aten::__derive_index(%11, %5, %5)
      %18 : Tensor = aten::select(%x.1, %2, %i.1)
      %y.3 : Tensor = aten::add(%y.6, %18, %5)
      -> (%10, %y.3)
  return (%y)

This seems a lot more verbose than we need. In practice, you would most often use torch.jit.script in the form of a decorator:

@torch.jit.script
def myfn(x):
    ...

You could also do this with a custom trace decorator taking care of the inputs, but this has not caught on.
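To make the decorator form concrete, here is a small sketch (the function name sum_rows is ours, not from the book) of a scripted loop function similar to the one whose graph we just printed: it sums the rows of a 2D tensor with an explicit Python for loop, which the JIT compiles into the prim::Loop construct seen above.

import torch

# Scripting compiles the Python loop below into TorchScript IR
# (aten::select for the indexing, prim::Loop for the for loop).
@torch.jit.script
def sum_rows(x):
    y = x[0]                        # first assignment of y
    for i in range(1, x.size(0)):   # constructing the range
        y = y + x[i]                # select a slice and add to y
    return y

x = torch.ones(4, 3)
print(sum_rows(x))  # a length-3 tensor of 4.0s

You can inspect sum_rows.graph just as we did with scripted_fn above.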

Although TorchScript (the language) looks like a subset of Python, there are fundamental differences. If we look very closely, we see that PyTorch has added type specifications to the code. This hints at an important difference: TorchScript is statically typed, meaning every value (variable) in the program has one and only one type. Also, the types are limited to those for which the TorchScript IR has a representation. Within the program, the JIT will usually infer the type automatically, but we need to annotate any non-tensor arguments of scripted functions with their types. This is in stark contrast to Python, where we can assign anything to any variable.
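As a minimal sketch of such an annotation (the function repeat_add is a made-up example): without the int annotation on n, the JIT would assume n is a Tensor, and range(n) would then fail to compile.

import torch

# The `n: int` annotation tells the JIT that n is a plain integer;
# unannotated arguments of scripted functions default to Tensor.
@torch.jit.script
def repeat_add(x: torch.Tensor, n: int):
    y = x
    for _ in range(n):
        y = y + x
    return y

print(repeat_add(torch.ones(2), 3))  # tensor([4., 4.])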

So far, we’ve traced functions to get scripted functions. But we graduated from just using functions in chapter 5 to using modules a long time ago. Sure enough, we can also trace or script models. These will then behave roughly like the modules we know and love. For both tracing and scripting, we pass an instance of Module to torch.jit.trace (with sample inputs) or torch.jit.script (without sample inputs), respectively. This gives us the forward method we are used to. If we want to expose other methods to be called from the outside (this only works in scripting), we decorate them with @torch.jit.export in the class definition.
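A minimal sketch of scripting a module (the Scaler class and its methods are hypothetical, not from the book): forward is exposed by default, while the extra method needs @torch.jit.export to be callable on the scripted module.

import torch
from torch import nn

class Scaler(nn.Module):
    def __init__(self, factor: float):
        super().__init__()
        self.factor = factor

    def forward(self, x):
        return x * self.factor

    @torch.jit.export          # exposes this method on the scripted module
    def unscale(self, x):
        return x / self.factor

# Scripting a Module needs no sample inputs, unlike torch.jit.trace.
scripted = torch.jit.script(Scaler(2.0))
t = torch.ones(3)
print(scripted(t))           # forward, as usual
print(scripted.unscale(t))   # the exported extra method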
