Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)

"How is this different from the datasets we worked with in previous chapters?"

It isn’t! Before, our data points were tensors with one or two elements in them; that is, one or two features. Now, our data points are tensors with 25 elements in them, each corresponding to a pixel/channel in the original image, as if they were 25 "features."

And, since it is no different, apart from the number of features, we can start from what we already know about defining a model to handle a binary classification task.

Shallow Model

Guess what? It is a logistic regression!

Equation 4.1 - Logistic regression

$$P(y=1) = \sigma(z) = \sigma(b + w_0 x_0 + w_1 x_1 + \dots + w_{24} x_{24})$$

Given 25 features, x0 through x24, each corresponding to the value of a pixel in a given channel, the model will fit a linear regression such that its outputs are logits (z), which are converted into probabilities using a sigmoid function.
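The model above can be sketched in PyTorch with a single linear layer followed by a sigmoid. This is a minimal illustration, not the book's exact code; the batch size and seed are arbitrary choices here.

```python
import torch
import torch.nn as nn

torch.manual_seed(42)

# Logistic regression over 25 features (one per pixel/channel):
# a single linear layer produces the logits (z), and the sigmoid
# converts them into probabilities.
model = nn.Sequential(
    nn.Linear(25, 1),  # z = b + w0*x0 + ... + w24*x24
    nn.Sigmoid()       # P(y=1) = sigmoid(z)
)

# A dummy batch of four flattened 5x5 single-channel "images"
x = torch.randn(4, 25)
probs = model(x)
print(probs.shape)  # torch.Size([4, 1])
```

Each output is a probability between zero and one; thresholding it at 0.5 yields the predicted class.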

"Oh no, not this again … where are the deep models?"

Don’t worry, this section was named "Shallow Model" for a reason: in the next one, we’ll build a deeper model with a hidden layer in it—finally!

298 | Chapter 4: Classifying Images
