Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)


A logistic regression always separates two classes with a straight line.

Our model produced a straight line that does quite a good job of separating the red and blue points, right? Well, it was not that hard anyway, since the blue points were concentrated in the bottom-right corner while the red points sat mostly in the top-left corner. In other words, the classes were quite separable.
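Why a straight line? The model's logit is linear in the inputs, and the decision boundary is the set of points where that logit equals zero (i.e., where the predicted probability is exactly 0.5). Here is a minimal sketch of this, using made-up weights rather than the book's trained model:

```python
import torch
import torch.nn as nn

# A logistic regression for 2D inputs: a single linear layer
# (the sigmoid is implicit in the 0.5-probability decision rule).
model = nn.Sequential(nn.Linear(2, 1))

# Hypothetical weights, just for illustration -- NOT the trained values
# from the chapter.
with torch.no_grad():
    model[0].weight.copy_(torch.tensor([[1.0, -2.0]]))
    model[0].bias.copy_(torch.tensor([0.5]))

# The boundary is where w1*x1 + w2*x2 + b = 0, so solving for x2 gives
# a straight line: x2 = -(w1*x1 + b) / w2
w, b = model[0].weight.detach()[0], model[0].bias.detach()[0]
x1 = torch.linspace(-1, 1, 5)
x2 = -(w[0] * x1 + b) / w[1]
boundary = torch.stack([x1, x2], dim=1)

# Every point on this line produces a logit of zero, that is,
# a predicted probability of exactly 0.5.
print(model(boundary))
```

No matter how the weights change during training, the boundary stays a straight line; only its slope and intercept move.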

The more separable the classes are, the lower the loss will be.
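We can illustrate this point with a small sketch (illustrative logits only, not data from the chapter): the logit is proportional to a point's signed distance from the boundary, so well-separated points sit far from the line and yield confident, low-loss predictions, while overlapping classes leave points near (or on the wrong side of) it:

```python
import torch
import torch.nn as nn

loss_fn = nn.BCEWithLogitsLoss()
labels = torch.tensor([[0.], [0.], [1.], [1.]])

# Well-separated classes: every point is far from the boundary,
# on the correct side (negative logits for class 0, positive for class 1).
logits_separable = torch.tensor([[-3.], [-4.], [3.], [4.]])

# Overlapping classes: points close to the boundary, some even on the
# wrong side of it.
logits_overlapping = torch.tensor([[-0.5], [1.0], [0.5], [-1.0]])

print(loss_fn(logits_separable, labels))    # small loss
print(loss_fn(logits_overlapping, labels))  # much larger loss
```

The binary cross-entropy loss heavily penalizes confident mistakes and points near the boundary, so the harder the classes are to separate, the higher the loss floor.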

Now we can make sense of the validation loss being lower than the training loss: in the validation set, the classes are more separable than in the training set, so the decision boundary obtained from the training set does an even better job of separating the red and blue points. Let's check it out by plotting the validation set against the same contour plots as above:

Figure 3.8 - Decision boundary (validation dataset)

See? Apart from three points, two red and one blue, which are really close to the decision boundary, the data points are correctly classified. More separable, indeed.

238 | Chapter 3: A Simple Classification Problem
