
The SquareModelOne class puts everything together: it can take any recurrent layer (RNN, GRU, or LSTM), stacked or bidirectional, and handle both fixed- and variable-length sequences.

Model Configuration

torch.manual_seed(21)
model = SquareModelOne(n_features=2, hidden_dim=2, n_outputs=1,
                       rnn_layer=nn.LSTM, num_layers=1,
                       bidirectional=True)
loss = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)
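
The SquareModelOne class itself is built earlier in the chapter and isn't reproduced in this excerpt. As a rough sketch of what a model with this constructor signature could look like (the attribute names and the exact forward logic here are assumptions, not the author's code):

import torch
import torch.nn as nn

class SquareModelOne(nn.Module):
    def __init__(self, n_features, hidden_dim, n_outputs,
                 rnn_layer=nn.LSTM, num_layers=1, bidirectional=False):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.n_directions = 2 if bidirectional else 1
        # any recurrent layer with the standard constructor signature
        # (nn.RNN, nn.GRU, or nn.LSTM) can be plugged in here
        self.basic_rnn = rnn_layer(n_features, hidden_dim,
                                   num_layers=num_layers,
                                   bidirectional=bidirectional,
                                   batch_first=True)
        # the classifier sees the final hidden state of every direction
        self.classifier = nn.Linear(self.n_directions * hidden_dim,
                                    n_outputs)

    def forward(self, X):
        # X may be a regular (padded) tensor or a packed sequence;
        # PyTorch's recurrent layers accept both
        rnn_out, hidden = self.basic_rnn(X)
        # LSTMs return a (hidden state, cell state) tuple
        if isinstance(hidden, tuple):
            hidden = hidden[0]
        # hidden has shape (num_layers * n_directions, batch, hidden_dim);
        # keep only the last layer's hidden state(s)
        last_layer = hidden.view(self.num_layers, self.n_directions,
                                 -1, self.hidden_dim)[-1]
        # concatenate directions -> (batch, n_directions * hidden_dim)
        seq_out = torch.cat(list(last_layer), dim=-1)
        return self.classifier(seq_out)  # one logit per sequence

Because the classifier only ever sees the final hidden state(s), the same model works regardless of sequence length; with bidirectional=True the hidden states of both directions are concatenated, which is why the linear layer's input size doubles.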

Model Training

sbs_one = StepByStep(model, loss, optimizer)
#sbs_one.set_loaders(train_loader)
sbs_one.set_loaders(train_var_loader)
sbs_one.train(100)
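
The train_var_loader is the data loader for variable-length sequences built earlier in the chapter. It isn't reproduced here, but a minimal sketch of such a loader, using a collate function that packs each mini-batch (the name pack_collate and the dataset train_var_data are assumptions for illustration), could look like:

import torch
from torch.nn.utils import rnn as rnn_utils
from torch.utils.data import DataLoader

def pack_collate(batch):
    # each item is an (x, y) pair; x has shape (seq_len, n_features)
    # and seq_len may differ from item to item
    X = [item[0] for item in batch]
    y = [item[1] for item in batch]
    # enforce_sorted=False lets pack_sequence sort by length internally
    X_pack = rnn_utils.pack_sequence(X, enforce_sorted=False)
    return X_pack, torch.as_tensor(y).float().view(-1, 1)

# train_var_loader = DataLoader(train_var_data, batch_size=16,
#                               shuffle=True, collate_fn=pack_collate)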

#StepByStep.loader_apply(train_loader, sbs_one.correct)
StepByStep.loader_apply(train_var_loader, sbs_one.correct)
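
StepByStep.loader_apply is a helper from the book's StepByStep class: it applies a function (here, correct, which counts correct predictions per class) to every mini-batch of a loader and aggregates the results. A sketch of the idea, with a sum reduction assumed as the default:

import torch

def loader_apply(loader, func, reduce='sum'):
    # apply func to every mini-batch and stack the per-batch results
    results = [func(x, y) for x, y in loader]
    results = torch.stack(results, dim=0)
    # aggregate across mini-batches ('sum' as default is an assumption)
    if reduce == 'sum':
        return results.sum(dim=0)
    return results.float().mean(dim=0)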

Output

tensor([[66, 66],
        [62, 62]])

Each row corresponds to one of the two classes and shows the number of correct predictions next to the total number of data points: the model gets every sequence right, 66 out of 66 and 62 out of 62.

Recap

In this chapter, we’ve learned about sequential data and how to use recurrent neural networks to perform a classification task. We followed the journey of a hidden state through all the transformations happening inside of different recurrent layers: RNN, GRU, and LSTM. We learned the difference between padding and packing variable-length sequences, and how to build a data loader for packed sequences. We also brought back convolutions, using the one-dimensional version to process sequential data as well. This is what we’ve covered:

• understanding the importance of order in sequential data
• generating a synthetic two-dimensional dataset so we can visualize what’s happening to the hidden states
