
Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub


Let’s visualize it.

Figure 4.6 - Deep-ish model

By the way, in the figure above, the subscripts for both w and z represent the zero-based indices for layer and unit: in the Output Layer, for instance, w₂₀ represents the weights corresponding to the first unit (#0) of the third layer (#2).

What’s happening here? Let’s work out the forward pass; that is, the path from inputs (x) to output (y):

1. An image is flattened to a tensor with 25 features, from x₀ to x₂₄ (not depicted in the figure above).

2. The 25 features are forwarded to each of the five units in Hidden Layer #0.

3. Each unit in Hidden Layer #0 uses its weights, from w₀₀ to w₀₄, and the features from the Input Layer to compute its corresponding output, from z₀₀ to z₀₄.

4. The outputs of Hidden Layer #0 are forwarded to each of the three units in Hidden Layer #1 (in a way, the outputs of Hidden Layer #0 work as if they were features to Hidden Layer #1).

5. Each unit in Hidden Layer #1 uses its weights, from w₁₀ to w₁₂, and the z₀ values from the preceding hidden layer to compute its corresponding output, from z₁₀ to z₁₂.

6. The outputs of Hidden Layer #1 are forwarded to the single unit in the Output Layer (again, the outputs of Hidden Layer #1 work as if they were features to the Output Layer).

7. The unit in the Output Layer uses its weights (w₂₀) and the z₁ values from the preceding hidden layer to compute its corresponding output.

302 | Chapter 4: Classifying Images
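The seven steps above can be sketched as a minimal PyTorch model. The 5x5 input image size (which yields the 25 features) and the absence of activation functions between layers are assumptions made for illustration; the model discussed in the chapter may differ in those details.

```python
import torch
import torch.nn as nn

torch.manual_seed(13)

# A sketch of the "deep-ish" model:
# 25 features -> Hidden Layer #0 (5 units) -> Hidden Layer #1 (3 units) -> Output Layer (1 unit)
model = nn.Sequential(
    nn.Flatten(),       # step 1: flattens a 5x5 image into 25 features (x0..x24)
    nn.Linear(25, 5),   # steps 2-3: Hidden Layer #0 produces z00..z04
    nn.Linear(5, 3),    # steps 4-5: Hidden Layer #1 produces z10..z12
    nn.Linear(3, 1),    # steps 6-7: Output Layer produces the single output
)

dummy_image = torch.randn(1, 1, 5, 5)  # a batch with one single-channel 5x5 image
output = model(dummy_image)
print(output.shape)  # torch.Size([1, 1])
```

Each `nn.Linear` holds the weights (and biases) of one layer, so its outputs play the role of features for the next layer, exactly as described in steps 4 and 6.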
