Equation 3.1 - A linear regression model with two features
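The model in question is the familiar linear regression with two features. Written out, and assuming the conventional notation of a bias b, weights w1 and w2, features x1 and x2, and an error term epsilon, it reads:

```latex
y = b + w_1 x_1 + w_2 x_2 + \epsilon
```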

There is one obvious problem with the model above: our labels (y) are discrete, that is, they are either zero or one; no other value is allowed. We need to change the model slightly to adapt it to our purposes.

"What if we assign the positive outputs to one and the negative

outputs to zero?"

Makes sense, right? We’re already calling them positive and negative classes anyway; why not put their names to good use? Our model would look like this:

Equation 3.2 - Mapping a linear regression model to discrete labels
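One way to write this mapping, keeping the notation of Equation 3.1 and assigning an output of exactly zero to the positive class, is:

```latex
y =
\begin{cases}
1 & \text{if } b + w_1 x_1 + w_2 x_2 + \epsilon \ge 0 \\
0 & \text{if } b + w_1 x_1 + w_2 x_2 + \epsilon < 0
\end{cases}
```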

Logits

To make our lives easier, let’s give the right-hand side of the equation above a name: logit (z).

Equation 3.3 - Computing logits
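Written out with the same notation as before, the logit is just the linear combination of the features and the bias:

```latex
z = b + w_1 x_1 + w_2 x_2
```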

The equation above is strikingly similar to the original linear regression model, but we’re calling the resulting value z, or logit, instead of y, or label.

"Does it mean a logit is the same as linear regression?"

Not quite; there is one fundamental difference between them: there is no error term (epsilon) in Equation 3.3.
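Here is a minimal PyTorch sketch of these two ideas (the feature values, seed, and variable names below are made up for illustration, not taken from the book's code): a linear layer with two inputs produces the logit z deterministically, with no epsilon term, and a sign-based rule in the spirit of Equation 3.2 turns it into a discrete label.

```python
import torch
import torch.nn as nn

torch.manual_seed(42)

# z = b + w1*x1 + w2*x2 -- two features in, one logit out, no error term
linear = nn.Linear(in_features=2, out_features=1)

x = torch.tensor([[0.5, -1.2]])  # a single made-up data point with two features
z = linear(x)                    # the logit

# Sign-based mapping in the spirit of Equation 3.2: non-negative -> 1, negative -> 0
y_hat = (z >= 0).float()

print(z, y_hat)
```

Notice that this sign-based rule throws away how far the logit is from zero, which is exactly the information the probability will put to use.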

"If there is no error term, where does the uncertainty come from?"

I am glad you asked :-) That’s the role of the probability: instead of assigning a data point to a discrete label (zero or one), we’ll compute the probability of the data point belonging to the positive class.
