

Now that we know what we need to do with transform_t to get our data out, let's take a look at the _build2dTransformMatrix function that actually creates the transformation matrix we use.

Listing 13.21 model.py:90, ._build2dTransformMatrix

def _build2dTransformMatrix(self):
    transform_t = torch.eye(3)    # Creates a 3 × 3 matrix, but we will drop the last row later

    for i in range(2):            # Again, we're augmenting 2D data here
        if self.flip:
            if random.random() > 0.5:
                transform_t[i,i] *= -1
    # ... line 108
    if self.rotate:
        angle_rad = random.random() * math.pi * 2    # Takes a random angle in radians, so in the range 0 .. 2π
        s = math.sin(angle_rad)
        c = math.cos(angle_rad)

        # Rotation matrix for the 2D rotation by the random angle in the first two dimensions
        rotation_t = torch.tensor([
            [c, -s, 0],
            [s, c, 0],
            [0, 0, 1]])

        # Applies the rotation to the transformation matrix using the Python matrix multiplication operator
        transform_t @= rotation_t

    return transform_t
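The callout about dropping the last row deserves a concrete illustration. Here is a minimal sketch, not the chapter's verbatim code, of how a transform matrix like this is typically consumed: torch.nn.functional.affine_grid expects a batch of 2 × 3 matrices for 2D data, so only the first two rows of the homogeneous 3 × 3 matrix are kept. The input_g name and the 64 × 64 crop size are assumptions for illustration only.

import torch
import torch.nn.functional as F

input_g = torch.rand(1, 1, 64, 64)    # hypothetical batch of one single-channel 2D crop
transform_t = torch.eye(3)            # stand-in for the output of _build2dTransformMatrix

# affine_grid wants shape (N, 2, 3) for 2D data, so we drop the homogeneous bottom row
affine_t = F.affine_grid(
    transform_t[:2].unsqueeze(0).to(torch.float32),
    input_g.shape,
    align_corners=False)

# grid_sample resamples the input through the transformed sampling grid
augmented_g = F.grid_sample(
    input_g, affine_t,
    padding_mode='border',
    align_corners=False)

Since both tensors can live on the GPU, these same two calls perform the augmentation on whichever device the data happens to be on.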

Other than the slight differences needed to deal with 2D data, our GPU augmentation code looks very similar to our CPU augmentation code. That's great, because it means we're able to write code that doesn't have to care much about where it runs. The primary difference isn't in the core implementation: it's in how we wrapped that implementation in an nn.Module subclass. While we've been thinking about models as exclusively a deep learning tool, this shows us that with PyTorch, tensors can be used quite a bit more generally. Keep this in mind when you start your next project: the range of things you can accomplish with a GPU-accelerated tensor is pretty large!
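To make that wrapping concrete, here is a minimal sketch of such a module. The class name AugmentationModule and the identity stub for _build2dTransformMatrix are placeholders for illustration, not the book's actual implementation (which also transforms the label mask alongside the input).

import torch
from torch import nn
import torch.nn.functional as F

class AugmentationModule(nn.Module):    # hypothetical name, for illustration
    def __init__(self, flip=None, rotate=None):
        super().__init__()
        self.flip = flip
        self.rotate = rotate

    def _build2dTransformMatrix(self):
        # The method from listing 13.21; an identity stub keeps this sketch short
        return torch.eye(3)

    def forward(self, input_g):
        # Build the transform on the fly, then move it to wherever the input lives
        transform_t = self._build2dTransformMatrix().to(input_g.device, torch.float32)
        # Broadcast the single 2 x 3 matrix across the whole batch
        affine_t = F.affine_grid(
            transform_t[:2].unsqueeze(0).expand(input_g.size(0), -1, -1),
            input_g.shape, align_corners=False)
        return F.grid_sample(input_g, affine_t,
                             padding_mode='border', align_corners=False)

Because the forward pass is ordinary tensor code, calling AugmentationModule(flip=True).to('cuda') moves the whole augmentation pipeline onto the GPU just as it would for any model.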

13.6 Updating the training script for segmentation

We have a model. We have data. We need to use them, and you won't be surprised when step 2C of figure 13.14 suggests we should train our new model with the new data.

To be more precise about the process of training our model, we will update three things in the training code we got from chapter 12, each affecting the outcome:

• We need to instantiate the new model (unsurprisingly).
• We will introduce a new loss: the Dice loss (a minimal sketch follows this list).
• We will also look at an optimizer other than the venerable SGD we've used so far. We'll stick with a popular one and use Adam.
