Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)

torch.manual_seed(23)
dummy_points = torch.randn((100, 1))
dummy_dataset = TensorDataset(dummy_points, dummy_points)
dummy_loader = DataLoader(
    dummy_dataset, batch_size=16, shuffle=True
)

If we were using a simple linear model, that would be a no-brainer, right? The model would just keep the input as it is (multiplying it by one, the weight, and adding zero to it, the bias). But what happens if we introduce a nonlinearity? Let's configure the model and train it to see what happens:

class Dummy(nn.Module):
    def __init__(self):
        super(Dummy, self).__init__()
        self.linear = nn.Linear(1, 1)
        self.activation = nn.ReLU()

    def forward(self, x):
        out = self.linear(x)
        out = self.activation(out)
        return out

torch.manual_seed(555)
dummy_model = Dummy()
dummy_loss_fn = nn.MSELoss()
dummy_optimizer = optim.SGD(dummy_model.parameters(), lr=0.1)

dummy_sbs = StepByStep(dummy_model, dummy_loss_fn, dummy_optimizer)
dummy_sbs.set_loaders(dummy_loader)
dummy_sbs.train(200)

If we compare the actual labels with the model's predictions, we'll see that it failed to learn the identity function:
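The culprit is the ReLU: since max(0, x) is never negative, no combination of weight and bias can make the model reproduce negative inputs. A minimal sketch in plain Python (no PyTorch, just the ReLU definition itself, with hypothetical helper names) illustrates the problem:

```python
def relu(x):
    # ReLU clips every negative value to zero
    return max(0.0, x)

def dummy_forward(x, weight=1.0, bias=0.0):
    # same structure as the Dummy model above: linear layer, then ReLU
    return relu(weight * x + bias)

inputs = [-2.0, -0.5, 0.3, 1.7]
outputs = [dummy_forward(x) for x in inputs]

# positive inputs pass through untouched,
# but negative ones are irrecoverably mapped to zero
print(list(zip(inputs, outputs)))
# → [(-2.0, 0.0), (-0.5, 0.0), (0.3, 0.3), (1.7, 1.7)]
```

Whatever values the linear layer learns, its output is squashed to zero for half of the real line, so the identity function is simply out of reach for this architecture.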

Residual Connections | 547
