
Deploying to production

This chapter covers

• Options for deploying PyTorch models

• Working with the PyTorch JIT

• Deploying a model server and exporting models

• Running exported and natively implemented models from C++

• Running models on mobile

In part 1 of this book, we learned a lot about models, and part 2 left us with a detailed path for creating good models for a particular problem. Now that we have these great models, we need to take them where they can be useful. Maintaining infrastructure for executing inference of deep learning models at scale can be impactful from both an architectural and a cost standpoint. While PyTorch started off as a framework focused on research, beginning with the 1.0 release a set of production-oriented features were added that today make PyTorch an ideal end-to-end platform from research to large-scale production.
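
As a preview of the kind of export this chapter discusses, here is a minimal sketch (not the book's own listing; the toy model and file name are placeholders) that uses the PyTorch JIT to trace a small module with an example input and save it as a TorchScript archive, which can then be loaded without the original Python class, for example in a server process or from C++ via LibTorch.

import torch
import torch.nn as nn

# A toy model standing in for a trained network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Trace the model: run it once on a dummy input and record the operations.
example_input = torch.randn(1, 16)
traced = torch.jit.trace(model, example_input)

# Save a self-contained TorchScript archive and load it back.
traced.save("model_traced.pt")
loaded = torch.jit.load("model_traced.pt")
print(loaded(example_input).shape)

Tracing is only one of the export paths; scripting and the mobile runtimes covered later in the chapter build on the same TorchScript representation.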

