
import torch
import torch.nn as nn

fs_model = nn.Sequential()
fs_model.add_module('hidden', nn.Linear(2, 2))   # affine transformation of the two features
fs_model.add_module('activation', nn.Sigmoid())  # activation applied to the hidden features
fs_model.add_module('output', nn.Linear(2, 1))   # logistic-regression-like output layer
fs_model.add_module('sigmoid', nn.Sigmoid())     # maps the output logit to a probability
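
As a quick sanity check, here is a minimal sketch of a forward pass through the model (the input values below are made up for illustration):

dummy_points = torch.tensor([[-1.0, 0.5],
                             [ 0.3, -0.2]])   # hypothetical batch of two 2D points
probs = fs_model(dummy_points)                # full forward pass
print(probs.shape)                            # torch.Size([2, 1]): one probability per point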

Let’s take a quick look at "Hidden Layer #0," which performs an affine transformation:

• First, it uses the weights to perform a linear transformation of the feature space (features x₀ and x₁), such that the resulting feature space is a rotated, scaled, maybe flipped, and likely sheared version of the original.

• Then, it uses the bias to translate the whole feature space to a different origin, resulting in a transformed feature space (z₀ and z₁).

The equation below shows the whole operation, from inputs (x) to logits (z):

Equation B.1 - From inputs to logits using an affine transformation
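
In matrix form, this is simply z = Wx + b. A reconstruction of the operation for the two features and two hidden units (the subscript convention, with w_{ij} connecting input j to hidden unit i, is an assumption):

\begin{bmatrix} z_0 \\ z_1 \end{bmatrix} = \begin{bmatrix} w_{00} & w_{01} \\ w_{10} & w_{11} \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} + \begin{bmatrix} b_0 \\ b_1 \end{bmatrix}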

It is on top of this transformed feature space that the activation function will work its magic, twisting and turning the feature space beyond recognition.
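
To make the two steps concrete, here is a minimal sketch that reproduces the hidden layer's affine transformation and the activation by hand (the input point is hypothetical; fs_model is the model defined earlier):

hidden = fs_model.hidden            # the nn.Linear(2, 2) module
W, b = hidden.weight, hidden.bias   # weight matrix (2x2) and bias vector (2,)

point = torch.tensor([[-1.0, 0.5]])    # hypothetical input point (x₀, x₁)
z = point @ W.t() + b                  # affine transformation: linear transform + translation
activated = torch.sigmoid(z)           # the activation twists the transformed space

# This should match the first two modules of the model applied in sequence
assert torch.allclose(activated, fs_model.activation(fs_model.hidden(point)))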

Next, the resulting activated feature space will feed the output layer. But, if we look at the output layer alone, it is like a logistic regression, right? This means that the output layer will use its inputs (z₀ and z₁) to draw a decision boundary.
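
Since the output layer is just a linear layer followed by a sigmoid, we can spell out the logistic-regression-like computation it performs on the activated features (a sketch continuing the snippet above, where activated and point were defined):

out = fs_model.output                            # the nn.Linear(2, 1) module
logit = activated @ out.weight.t() + out.bias    # linear combination of activated features
prob = torch.sigmoid(logit)                      # probability, exactly as in logistic regression

# The decision boundary lives where logit == 0; the full forward pass gives the same result
assert torch.allclose(prob, fs_model(point))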

We can annotate the model diagram to, hopefully, make it clearer.
