
Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub


Output

{Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1)): 'conv1',
 ReLU(): 'relu1',
 MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1,
           ceil_mode=False): 'maxp1',
 Flatten(): 'flatten',
 Linear(in_features=16, out_features=10, bias=True): 'fc1',
 ReLU(): 'relu2',
 Linear(in_features=10, out_features=3, bias=True): 'fc2'}

A dictionary is perfect for that: the hook function will take the layer instance as an argument and look its name up in the dictionary!
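As a sketch of the idiom (using a toy two-layer model as an assumption, not the book's full model), such a dictionary can be built from named_modules(), since module instances are hashable and can serve as dictionary keys:

```python
import torch.nn as nn

# a toy model, just to illustrate the idiom (layers are assumptions)
model = nn.Sequential()
model.add_module('conv1', nn.Conv2d(1, 1, kernel_size=3))
model.add_module('relu1', nn.ReLU())

# named_modules() yields ('', model) itself first, so we skip empty names;
# each layer *instance* becomes a key mapping to its name
layer_names = {layer: name
               for name, layer in model.named_modules()
               if name != ''}
```

This inverts the usual name-to-layer mapping, which is exactly what the hook function needs: hooks receive the layer instance, not its name.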

OK, it is time to create a real hook function:

visualization = {}

def hook_fn(layer, inputs, outputs):
    name = layer_names[layer]
    visualization[name] = outputs.detach().cpu().numpy()

It is actually quite simple: it looks up the name of the layer and uses it as a key to a dictionary defined outside the hook function, which will store the outputs produced by the hooked layer. The inputs are ignored in this function.

We can make a list of the layers we’d like to get the outputs from, loop through the list of named modules, and hook our function to the desired layers, keeping the handles in another dictionary:

layers_to_hook = ['conv1', 'relu1', 'maxp1', 'flatten',
                  'fc1', 'relu2', 'fc2']

handles = {}

for name, layer in modules:
    if name in layers_to_hook:
        handles[name] = layer.register_forward_hook(hook_fn)
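To see the hooks in action, here is a self-contained sketch of the whole procedure. The layer sizes follow the output shown earlier in this section; the 10x10 input shape is an assumption chosen so that flattening yields the 16 features fc1 expects (10 becomes 8 after the 3x3 convolution, then 4 after pooling, and 1 * 4 * 4 = 16):

```python
import torch
import torch.nn as nn

# rebuild the model from the output shown earlier in the section
model = nn.Sequential()
model.add_module('conv1', nn.Conv2d(1, 1, kernel_size=3))
model.add_module('relu1', nn.ReLU())
model.add_module('maxp1', nn.MaxPool2d(kernel_size=2))
model.add_module('flatten', nn.Flatten())
model.add_module('fc1', nn.Linear(16, 10))
model.add_module('relu2', nn.ReLU())
model.add_module('fc2', nn.Linear(10, 3))

# map each layer instance to its name
layer_names = {layer: name
               for name, layer in model.named_modules()
               if name != ''}

visualization = {}

def hook_fn(layer, inputs, outputs):
    name = layer_names[layer]
    visualization[name] = outputs.detach().cpu().numpy()

layers_to_hook = ['conv1', 'relu1', 'maxp1', 'flatten',
                  'fc1', 'relu2', 'fc2']

handles = {}
for name, layer in model.named_modules():
    if name in layers_to_hook:
        handles[name] = layer.register_forward_hook(hook_fn)

# the hooks fire during the forward pass, filling the dictionary
dummy_x = torch.randn(1, 1, 10, 10)
model(dummy_x)

# each hooked layer's output is now stored under its name
print(sorted(visualization.keys()))

# remove the hooks when done, so they don't keep firing (and storing
# outputs) on every future forward pass
for handle in handles.values():
    handle.remove()
```

The handles are what make cleanup possible: register_forward_hook() returns a removable handle, and calling remove() on each one detaches the hook from its layer.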

Visualizing Filters and More! | 399
