Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)

Figure 11.20 - Losses—Transformer + GloVe embeddings

Looks like our model started overfitting really quickly, since the validation loss barely improves, if at all, after the third epoch. Let's check its accuracy on the validation (test) set:

StepByStep.loader_apply(test_loader, sbs_transf.correct)

Output

tensor([[410, 440],
        [300, 331]])

That's 92.09% accuracy. Well, that's good, but not as much better than the simple bag-of-embeddings model as you might expect from a mighty Transformer, right?
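If you'd like to verify that figure yourself: each row of the output tensor holds [correct, total] counts for one class, so overall accuracy is just the sum of the first column over the sum of the second. A quick sketch (using the counts printed above):

```python
import torch

# Each row is [correct predictions, total samples] for one class,
# as returned by StepByStep.loader_apply(test_loader, sbs_transf.correct)
correct = torch.tensor([[410, 440],
                        [300, 331]])

# Overall accuracy: total correct / total samples
accuracy = (correct[:, 0].sum() / correct[:, 1].sum()).item()
print(f'{accuracy:.2%}')  # 92.09%
```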

Let’s see what our model is actually paying attention to.

Visualizing Attention

Instead of using sentences from the validation (test) set, let's come up with brand new, totally made-up sentences of our own:

sentences = ['The white rabbit and Alice ran away',
             'The lion met Dorothy on the road']
inputs = glove_tokenizer(sentences, add_special_tokens=False,
                         return_tensors='pt')['input_ids']
inputs
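Before pulling the attention scores out of the trained model, it may help to recall what they are: for a sequence of token representations, self-attention computes alphas = softmax(QK^T / sqrt(d)), a square matrix where row i tells us how much token i attends to every token in the sequence. The sketch below illustrates just that mechanic in isolation; the random embeddings and untrained linear projections are stand-ins for the GloVe vectors and the trained model's query/key projections, not the values sbs_transf actually produces:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(42)

tokens = ['the', 'white', 'rabbit', 'and', 'alice', 'ran', 'away']
d_model = 8  # toy dimension; stand-in for the actual embedding size

# Random stand-ins for the token embeddings (NOT the real GloVe vectors)
embeddings = torch.randn(len(tokens), d_model)

# Untrained query/key projections, for illustration only
W_q = torch.nn.Linear(d_model, d_model, bias=False)
W_k = torch.nn.Linear(d_model, d_model, bias=False)

q, k = W_q(embeddings), W_k(embeddings)
scores = q @ k.T / d_model ** 0.5   # shape: (seq_len, seq_len)
alphas = F.softmax(scores, dim=-1)  # each row sums to one

# Row i shows how much token i attends to each token in the sequence
for tok, row in zip(tokens, alphas):
    print(f'{tok:>8}', row.detach().round(decimals=2))
```

With the trained model, the interesting part is exactly these alphas, which is what we'll be visualizing next.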
