
Data Preparation

import torch
from torch.utils.data import DataLoader
import torch.nn.utils.rnn as rnn_utils

def pack_collate(batch):
    # Split the (sequence, label) tuples produced by the dataset
    X = [item[0] for item in batch]
    y = [item[1] for item in batch]
    # Pack the variable-length sequences; they do NOT need to be
    # sorted by length first
    X_pack = rnn_utils.pack_sequence(X, enforce_sorted=False)

    return X_pack, torch.as_tensor(y).view(-1, 1)

train_var_loader = DataLoader(train_var_data,
                              batch_size=16,
                              shuffle=True,
                              collate_fn=pack_collate)
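To see what this collate function produces, it helps to fetch a single batch. The snippet below reuses pack_collate and the imports from the listing above; the tiny dataset is hypothetical (three variable-length sequences of two features each), just to illustrate the output types:

# Hypothetical toy dataset: (sequence, label) tuples of varying lengths
dummy_data = [(torch.randn(n, 2), 1.0) for n in (3, 5, 4)]
dummy_loader = DataLoader(dummy_data, batch_size=3, collate_fn=pack_collate)

X_pack, y = next(iter(dummy_loader))
print(type(X_pack))  # <class 'torch.nn.utils.rnn.PackedSequence'>
print(y.shape)       # torch.Size([3, 1])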

There Can Be Only ONE … Model

We’ve developed many models throughout this chapter, depending both on the type of recurrent layer that was used (RNN, GRU, or LSTM) and on the type of sequence (packed or not). The model below, though, is able to handle different configurations:

• Its rnn_layer argument allows you to use whichever recurrent layer you prefer.

• The **kwargs argument allows you to further configure the recurrent layer (using num_layers and bidirectional arguments, for example).

• The output dimension of the recurrent layer is automatically computed to build a matching linear layer.

• If the input is a packed sequence, it handles the unpacking and fancy indexing to retrieve the actual last hidden state (see the forward-pass sketch after the listing below).

Model Configuration

import torch.nn as nn

class SquareModelOne(nn.Module):
    def __init__(self, n_features, hidden_dim, n_outputs,
                 rnn_layer=nn.LSTM, **kwargs):
        super(SquareModelOne, self).__init__()
        self.hidden_dim = hidden_dim
        self.n_features = n_features
        self.n_outputs = n_outputs
        self.hidden = None
        # Configurable recurrent layer (nn.RNN, nn.GRU, or nn.LSTM)
        self.basic_rnn = rnn_layer(self.n_features, self.hidden_dim,
                                   batch_first=True, **kwargs)
        # A bidirectional layer doubles the output dimension
        output_dim = (self.basic_rnn.bidirectional + 1) * self.hidden_dim
        self.classifier = nn.Linear(output_dim, self.n_outputs)
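The constructor above covers the first three bullet points; the fourth lives in the forward pass. Below is a minimal sketch of how that method might be implemented inside the class, assuming the constructor and imports above (the unpacking relies on rnn_utils.pad_packed_sequence):

    def forward(self, X):
        # A packed sequence must be unpacked before we can index it
        is_packed = isinstance(X, rnn_utils.PackedSequence)
        rnn_out, self.hidden = self.basic_rnn(X)
        if is_packed:
            # Unpack into a padded, batch-first tensor plus true lengths
            batch_first_output, seq_sizes = \
                rnn_utils.pad_packed_sequence(rnn_out, batch_first=True)
            # Fancy indexing: each sequence's ACTUAL last output sits at
            # position (length - 1), which varies from sequence to sequence
            seq_slice = torch.arange(seq_sizes.size(0))
            last_output = batch_first_output[seq_slice, seq_sizes - 1]
        else:
            # Fixed-length sequences: just take the last position
            last_output = rnn_out[:, -1]
        # The classifier maps the last hidden state to (N, n_outputs)
        out = self.classifier(last_output)
        return out.view(-1, self.n_outputs)

A single class then covers every combination of recurrent layer and sequence type; for instance (hypothetical hyperparameter values):

model = SquareModelOne(n_features=2, hidden_dim=16, n_outputs=1,
                       rnn_layer=nn.GRU, num_layers=2, bidirectional=True)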
