Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)
Output

OrderedDict([('weight_ih_l0', tensor([[ 0.6627, -0.4245],
                                      [ 0.5373,  0.2294]])),
             ('weight_hh_l0', tensor([[-0.4015, -0.5385],
                                      [-0.1956, -0.6835]])),
             ('bias_ih_l0', tensor([0.4954, 0.6533])),
             ('bias_hh_l0', tensor([-0.3565, -0.2904])),
             ('weight_ih_l1', tensor([[-0.6701, -0.5811],
                                      [-0.0170, -0.5856]])),
             ('weight_hh_l1', tensor([[ 0.1159, -0.6978],
                                      [ 0.3241, -0.0983]])),
             ('bias_ih_l1', tensor([-0.3163, -0.2153])),
             ('bias_hh_l1', tensor([ 0.0722, -0.3242]))])

From the RNN’s state dictionary, we can see it has two groups of weights and biases, one for each layer, with each layer indicated by its corresponding suffix (_l0 and _l1).
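The state dictionary above can be reproduced in shape (though not in values, since the seed below is a placeholder) by creating a two-layer stacked RNN and inspecting its keys, along the lines of:

```python
import torch
import torch.nn as nn

torch.manual_seed(19)  # placeholder seed; the book's weights differ
# A stacked RNN with two layers, matching the state dict shown above
rnn_stacked = nn.RNN(input_size=2, hidden_size=2,
                     num_layers=2, batch_first=True)
state = rnn_stacked.state_dict()
print(list(state.keys()))
# Keys ending in _l0 belong to the first layer, _l1 to the second:
# ['weight_ih_l0', 'weight_hh_l0', 'bias_ih_l0', 'bias_hh_l0',
#  'weight_ih_l1', 'weight_hh_l1', 'bias_ih_l1', 'bias_hh_l1']
```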

Now, let’s create two simple RNNs and use the weights and biases above to set their weights accordingly. Each RNN will behave as one of the layers of the stacked one:

rnn_layer0 = nn.RNN(input_size=2, hidden_size=2, batch_first=True)
rnn_layer1 = nn.RNN(input_size=2, hidden_size=2, batch_first=True)

# The first four entries belong to layer 0 and can be loaded as-is
rnn_layer0.load_state_dict(dict(list(state.items())[:4]))
# Layer 1's keys end in '_l1'; rename them to '_l0' before loading
rnn_layer1.load_state_dict(dict([(k[:-1] + '0', v)
                                 for k, v in
                                 list(state.items())[4:]]))

Output

<All keys matched successfully>
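As a sanity check, here is a self-contained sketch (the seed and the random input are placeholders, so the numbers will not match the output above) showing that feeding a sequence through the two single-layer RNNs in turn reproduces the stacked RNN's output:

```python
import torch
import torch.nn as nn

torch.manual_seed(19)  # placeholder seed; the book's weights differ
rnn_stacked = nn.RNN(input_size=2, hidden_size=2,
                     num_layers=2, batch_first=True)
state = rnn_stacked.state_dict()

rnn_layer0 = nn.RNN(input_size=2, hidden_size=2, batch_first=True)
rnn_layer1 = nn.RNN(input_size=2, hidden_size=2, batch_first=True)
rnn_layer0.load_state_dict(dict(list(state.items())[:4]))
rnn_layer1.load_state_dict(dict([(k[:-1] + '0', v)
                                 for k, v in list(state.items())[4:]]))

x = torch.randn(1, 4, 2)           # placeholder batch: N=1, L=4, F=2
out0, h0 = rnn_layer0(x)           # first layer consumes the input
out1, h1 = rnn_layer1(out0)        # second layer consumes layer 0's outputs
out_stacked, h_stacked = rnn_stacked(x)
print(torch.allclose(out1, out_stacked))  # True
```

The stacked model's final hidden state is simply the two layers' hidden states concatenated along the first dimension, which is why its shape is (num_layers=2, N=1, H=2).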

Now, let’s make a batch containing one sequence from our synthetic dataset (thus having shape (N=1, L=4, F=2)):
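Since the synthetic dataset itself isn't reproduced here, a stand-in batch with the right shape can be sketched as follows (the values are random placeholders, not actual dataset points):

```python
import torch

torch.manual_seed(21)  # placeholder seed
# Stand-in for one sequence from the synthetic dataset:
# one sequence (N=1) of four points (L=4) with two features each (F=2)
batch = torch.randn(1, 4, 2)
print(batch.shape)  # torch.Size([1, 4, 2])
```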

Recurrent Neural Networks (RNNs) | 609
