Figure 9.34 - Losses—encoder + decoder + self-attention

The losses are worse now; the model using cross-attention only was performing better than that. What about the predictions?

Visualizing Predictions

Let’s plot the predicted coordinates and connect them using dashed lines, while using solid lines to connect the actual coordinates, just like before:

fig = sequence_pred(sbs_seq_selfattn, full_test, test_directions)
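The sequence_pred helper belongs to the book's plotting utilities, so only its call is shown above. As a rough illustration of the kind of figure it produces, here is a minimal Matplotlib sketch; the function name plot_sequences and the assumption that actual and predicted corners arrive as arrays of shape (n_sequences, n_points, 2) are illustrative choices, not the book's actual implementation.

import numpy as np
import matplotlib.pyplot as plt

def plot_sequences(actual, predicted):
    # Hypothetical helper: actual and predicted are arrays of shape
    # (n_sequences, n_points, 2), holding x/y coordinates of each corner
    n = len(actual)
    fig, axs = plt.subplots(1, n, figsize=(3 * n, 3))
    for ax, act, pred in zip(np.atleast_1d(axs), actual, predicted):
        # solid lines connect the actual coordinates
        ax.plot(act[:, 0], act[:, 1], 'o-', label='Actual')
        # dashed lines connect the predicted coordinates
        ax.plot(pred[:, 0], pred[:, 1], 'x--', label='Predicted')
        ax.set_xlim(-1.6, 1.6)
        ax.set_ylim(-1.6, 1.6)
        ax.legend()
    fig.tight_layout()
    return fig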

Figure 9.35 - Predictions—encoder + decoder + self-attention

Well, that’s a bit disappointing; the triangles made a comeback!
