
470 CHAPTER 15 Deploying to production

construct submodules in Python in the __init__ constructor. Unlike Python, C++ does not have the introspection and hooking capabilities that let __setattr__ be redirected to combine assignment to a member with registration of the submodule.

Because C++ lacks keyword arguments, specifying parameters with default values would be awkward, so modules (like the tensor factory functions) typically take an options argument instead. Optional keyword arguments in Python correspond to chainable setter methods on the options object. For example, the Python module nn.Conv2d(in_channels, out_channels, kernel_size, stride=2, padding=1) that we need to convert translates to torch::nn::Conv2d(torch::nn::Conv2dOptions(in_channels, out_channels, kernel_size).stride(2).padding(1)). This is a bit more tedious, but you're reading this because you love C++ and aren't deterred by the hoops it makes you jump through.

We should always take care that registration and assignment to members are in sync, or things will not work as expected: for example, loading and updating parameters during training happens to the registered module, while the module actually being called is the member. This synchronization was done behind the scenes by the Python nn.Module class, but it is not automatic in C++. Failing to keep the two in sync will cause us many headaches.

In contrast to what we did (and should do!) in Python, we need to call m->forward(…) for our modules. Some modules can also be called directly, but for Sequential, this is not currently the case.

A final comment on calling conventions is in order: depending on whether you modify tensors provided to functions,¹⁴ tensor arguments should always be passed as const Tensor& if they are left unchanged, or as Tensor if they are changed. Tensors should be returned as Tensor. Wrong argument types like non-const references (Tensor&) will lead to unparsable compiler errors.

In the main generator class, we'll follow a typical pattern in the C++ API more closely by naming our class ResNetGeneratorImpl and promoting it to a torch module ResNetGenerator using the TORCH_MODULE macro. The background is that we want to mostly handle modules as references or shared pointers. The wrapper class accomplishes this.

Listing 15.15 ResNetGenerator in cyclegan_cpp_api.cpp

struct ResNetGeneratorImpl : torch::nn::Module {
  torch::nn::Sequential model;
  ResNetGeneratorImpl(int64_t input_nc = 3, int64_t output_nc = 3,
                      int64_t ngf = 64, int64_t n_blocks = 9) {
    TORCH_CHECK(n_blocks >= 0);
    // Adds modules to the Sequential container in the constructor.
    // This allows us to add a variable number of modules in a for loop.
    model->push_back(torch::nn::ReflectionPad2d(3));

¹⁴ This is a bit blurry because you can create a new tensor sharing memory with an input and modify it in place, but it's best to avoid that if possible.
