
122 CHAPTER 5 The mechanics of learning

5.4.5 Visualizing (again)

Let’s revisit something we did right at the start: plotting our data. Seriously, this is the first thing anyone doing data science should do. Always plot the heck out of the data:

# In[22]:
%matplotlib inline
from matplotlib import pyplot as plt

t_p = model(t_un, *params)    # Remember that we're training on the normalized
                              # unknown units. We also use argument unpacking.
fig = plt.figure(dpi=600)
plt.xlabel("Temperature (°Fahrenheit)")
plt.ylabel("Temperature (°Celsius)")
plt.plot(t_u.numpy(), t_p.detach().numpy())    # But we're plotting the raw
plt.plot(t_u.numpy(), t_c.numpy(), 'o')        # unknown values.
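The split between the two scales can be sketched in isolation. This is a minimal, self-contained example, assuming (as earlier in the chapter) that t_un is the raw input scaled by 0.1, and using placeholder values for t_u and the parameters:

```python
import torch

t_u = torch.tensor([35.7, 55.9, 58.2])  # raw "unknown" readings
t_un = 0.1 * t_u                        # normalized units the model was trained on

# Placeholder parameters standing in for the trained values
w, b = torch.tensor(5.36), torch.tensor(-17.3)

# The model consumes the normalized values ...
t_p = w * t_un + b

# ... but the plot uses raw t_u on the x-axis, so the axis
# reads in the original (Fahrenheit) units.
```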

We are using a Python trick called argument unpacking here: *params means to pass the elements of params as individual arguments. In Python, this is usually done with lists or tuples, but we can also use argument unpacking with PyTorch tensors, which are split along the leading dimension. So here, model(t_un, *params) is equivalent to model(t_un, params[0], params[1]).
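The equivalence can be checked directly. A minimal sketch, using the chapter's linear model w * t_u + b and placeholder values for params and t_un:

```python
import torch

def model(t_u, w, b):
    # The chapter's linear model
    return w * t_u + b

params = torch.tensor([1.0, 0.0])    # placeholder [w, b]
t_un = torch.tensor([3.57, 5.59])    # placeholder normalized inputs

# Unpacking splits the tensor along its leading dimension,
# yielding params[0] and params[1] as scalar tensors:
out_unpacked = model(t_un, *params)
out_indexed = model(t_un, params[0], params[1])

assert torch.equal(out_unpacked, out_indexed)
```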

This code produces figure 5.9. Our linear model is a good model for the data, it seems. It also seems our measurements are somewhat erratic. We should either call our optometrist for a new pair of glasses or think about returning our fancy thermometer.

[Figure 5.9 The plot of our linear-fit model (solid line) versus our input data (circles); x-axis: temperature (°Fahrenheit), y-axis: temperature (°Celsius)]
