424 CHAPTER 14 End-to-end nodule analysis, and where to go next

[Figure: LunaModel architecture — tail, backbone, and head. Successive blocks shrink the image while widening the filters: the input image is 32 x 48 x 48 at 1 channel, then 16 x 24 x 24 at 8 channels, 8 x 12 x 12 at 16 channels, 4 x 6 x 6 at 32 channels, and finally 2 x 3 x 3 at 64 channels at the output. The spans of the network covered by --finetune-depth=1 and --finetune-depth=2 are marked at the head end.]

Figure 14.8 The model architecture from chapter 11, with the depth-1 and depth-2 weights highlighted

not be ideal for our problem, but we expect them to be a reasonable starting point.

This is easy: we add some loading code into our model setup.

Listing 14.9 training.py:124, .initModel

d = torch.load(self.cli_args.finetune, map_location='cpu')

# Filters out top-level modules that have parameters
# (as opposed to the final activation)
model_blocks = [
    n for n, subm in model.named_children()
    if len(list(subm.parameters())) > 0
]

# Takes the last finetune_depth blocks. The default (if fine-tuning) is 1.
finetune_blocks = model_blocks[-self.cli_args.finetune_depth:]

# Filters out the last block (the final linear part) and does not load it.
# Starting from a fully initialized model would have us begin with (almost)
# all nodules labeled as malignant, because that output means "nodule" in
# the classifier we start from.
model.load_state_dict(
    {
        k: v for k,v in d['model_state'].items()
        if k.split('.')[0] not in model_blocks[-1]
    },
    strict=False,
)
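Loading all but the final block is only half of fine-tuning: the blocks we are not fine-tuning should also stop receiving gradient updates. The sketch below shows the same named_children filtering and the freezing step on a small stand-in model; the module names (block1, block2, head_linear) are invented for illustration and only echo the shape of the real LunaModel.

```python
import torch
from torch import nn

# Toy stand-in for the model: named top-level blocks ending in a linear head.
model = nn.Sequential()
model.add_module('block1', nn.Conv3d(1, 8, 3, padding=1))
model.add_module('block2', nn.Conv3d(8, 16, 3, padding=1))
model.add_module('head_linear', nn.Linear(16, 2))

# Same filtering as the listing: keep only children that own parameters.
model_blocks = [
    n for n, subm in model.named_children()
    if len(list(subm.parameters())) > 0
]
finetune_blocks = model_blocks[-1:]  # corresponds to --finetune-depth=1

# Freeze every parameter outside the blocks being fine-tuned, so the
# optimizer only updates the selected head blocks.
for n, p in model.named_parameters():
    if n.split('.')[0] not in finetune_blocks:
        p.requires_grad_(False)

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the head_linear weight and bias remain trainable
```

With depth 1, only the final linear block trains; passing a larger depth would leave more of the tail end of model_blocks trainable.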
