Daniel Voigt Godoy, Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)

Output

array([[-2.  , -1.94, -1.88, ...,  3.88,  3.94,  4.  ],
       [-2.  , -1.94, -1.88, ...,  3.88,  3.94,  4.  ],
       [-2.  , -1.94, -1.88, ...,  3.88,  3.94,  4.  ],
       ...,
       [-2.  , -1.94, -1.88, ...,  3.88,  3.94,  4.  ],
       [-2.  , -1.94, -1.88, ...,  3.88,  3.94,  4.  ],
       [-2.  , -1.94, -1.88, ...,  3.88,  3.94,  4.  ]])

Sure, we're somewhat cheating here, since we know the true values of b and w, so we can choose the perfect ranges for the parameters. But it is for educational purposes only :-)
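For reference, a grid like the one printed above can be built with np.meshgrid; this is a sketch that assumes both parameters range over 101 evenly spaced values from -2 to 4, consistent with the output shown:

```python
import numpy as np

# Candidate values for b and w: 101 points each from -2 to 4 (step 0.06)
b_range = np.linspace(-2, 4, 101)
w_range = np.linspace(-2, 4, 101)

# meshgrid pairs every b with every w, yielding two (101, 101) matrices:
# bs repeats b_range along each row, ws repeats w_range along each column
bs, ws = np.meshgrid(b_range, w_range)
```

Every cell (i, j) of the pair (bs, ws) is one candidate combination of the two parameters, so evaluating the model over these matrices evaluates it over the whole grid at once.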

Next, we could use those values to compute the corresponding predictions, errors, and losses. Let's start by taking a single data point from the training set and computing the predictions for every combination in our grid:

dummy_x = x_train[0]
dummy_yhat = bs + ws * dummy_x
dummy_yhat.shape

Output

(101, 101)

Thanks to its broadcasting capabilities, Numpy is able to understand that we want to multiply the same x value by every entry in the ws matrix. This operation resulted in a grid of predictions for that single data point. Now we need to do this for every one of our 80 data points in the training set.

We can use Numpy's apply_along_axis() to accomplish this:
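A minimal sketch of that call, assuming x_train is the (80, 1) array of training points and bs, ws are the (101, 101) parameter grids built earlier (the random x_train here is only a stand-in for illustration):

```python
import numpy as np

# Stand-in for the training set: 80 points, one feature each
x_train = np.random.rand(80, 1)
# The (101, 101) parameter grids from the earlier step
bs, ws = np.meshgrid(np.linspace(-2, 4, 101), np.linspace(-2, 4, 101))

# apply_along_axis calls the function once per row of x_train;
# each call broadcasts one x value over the whole parameter grid,
# producing a (101, 101) grid of predictions for that data point
all_predictions = np.apply_along_axis(
    func1d=lambda x: bs + ws * x,
    axis=1,
    arr=x_train,
)
all_predictions.shape  # one prediction grid per data point: (80, 101, 101)
```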

Look ma, no loops!
