
Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub


In the next chapter, we'll be using the full output, that is, the full sequence of hidden states, for encoder-decoder models.
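To make the distinction concrete, a plain nn.RNN returns two things: the full sequence of hidden states (one per step) and the final hidden state alone. A minimal sketch, using made-up input matching our problem's shapes (two features, a hidden dimension of two, sequences of four corners):

```python
import torch
import torch.nn as nn

torch.manual_seed(21)
rnn = nn.RNN(input_size=2, hidden_size=2, batch_first=True)

x = torch.randn(1, 4, 2)  # one sequence, four corners, two coordinates each

output, hidden = rnn(x)
print(output.shape)  # torch.Size([1, 4, 2]) - hidden state at EVERY step
print(hidden.shape)  # torch.Size([1, 1, 2]) - final hidden state only
```

For a single-layer, unidirectional RNN, the last slice of the full output equals the final hidden state; encoder-decoder models keep the whole sequence instead of that last slice only.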

Next, we create an instance of the model, the corresponding loss function for a binary classification problem, and an optimizer:

Model Configuration

torch.manual_seed(21)
model = SquareModel(n_features=2, hidden_dim=2, n_outputs=1)
loss = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)
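Since the model outputs a raw logit rather than a probability, nn.BCEWithLogitsLoss is the appropriate choice: it applies the sigmoid internally, which is numerically more stable than calling BCELoss on sigmoid outputs. A small standalone check, with tensors made up for illustration:

```python
import torch
import torch.nn as nn

logits = torch.tensor([0.0, 2.0, -1.5])  # raw model outputs, NO sigmoid applied
labels = torch.tensor([0.0, 1.0, 0.0])

# The recommended way: feed logits directly
loss_with_logits = nn.BCEWithLogitsLoss()(logits, labels)

# Mathematically equivalent, but less numerically stable:
loss_manual = nn.BCELoss()(torch.sigmoid(logits), labels)
```

Both produce the same value here, but BCEWithLogitsLoss uses the log-sum-exp trick internally, so it remains stable for large-magnitude logits.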


Then, we train our SquareModel over 100 epochs, as usual, visualize the losses, and evaluate its accuracy on the test data:

Model Training

sbs_rnn = StepByStep(model, loss, optimizer)
sbs_rnn.set_loaders(train_loader, test_loader)
sbs_rnn.train(100)
fig = sbs_rnn.plot_losses()

Figure 8.12 - Losses - SquareModel
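The accuracy evaluation boils down to thresholding sigmoid probabilities at 0.5 and comparing against the labels. A minimal sketch of that logic, using a hypothetical stand-in model and made-up test tensors, since SquareModel and the data loaders are defined elsewhere:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained model: any module that
# outputs one logit per sequence works for this sketch.
torch.manual_seed(21)
model = nn.Sequential(nn.Flatten(), nn.Linear(8, 1))

x_test = torch.randn(16, 4, 2)               # 16 sequences of 4 corners
y_test = (torch.rand(16, 1) > 0.5).float()   # made-up binary labels

model.eval()
with torch.no_grad():
    logits = model(x_test)
    # Logits >= 0 correspond to sigmoid probabilities >= 0.5
    predictions = (torch.sigmoid(logits) >= 0.5).float()
    accuracy = (predictions == y_test).float().mean().item()
```

Note the eval() call and the no_grad() context: neither changes the arithmetic here, but they are the idiomatic way to run inference without tracking gradients.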

618 | Chapter 8: Sequences
