Deep-Learning-with-PyTorch

INDEX

machine learning: gradient descent (continued)
    computing derivatives 115
    data visualization 122
    decreasing loss 113–114
    defining gradient function 116
    iterating to fit model 116–119
    normalizing inputs 119–121
  loss function 109–112
  modeling 104–106
  parameter estimation 106–109
    choosing linear model 108–109
    data gathering 107–108
    data visualization 108
    example problem 107
  switching to PyTorch 110–112

main method 283, 471

malignancy classification 407

malignancy model 407

--malignancy-path argument 432

MalignancyLunaDataset class 418

malignant classification 413

map_location keyword argument 217

Mask R-CNN 246

Mask R-CNN models 465

masked arrays 302

masks
  caching chunks of mask in addition to CT 376
  calling mask creation during CT initialization 375
  constructing 302–304

math ops 53

Matplotlib 172, 247, 431

max function 26

max pooling 203

mean square loss 111

memory bandwidth 384

Mercator projection map 267

metadata, tensor 55–62
  contiguous tensors 60–62
  transposing in higher dimensions 60
  transposing without copying 58–59
  views of another tensor’s storage 56–58

MetaIO format 263

metrics
  graphing positives and negatives 322–333
    F1 score 328–332
    performance 332–333
    precision 326–328, 332
    recall 324, 327–328, 332
  ideal dataset 334–344
    class balancing 339–341
    contrasting training with balanced LUNA Dataset to previous runs 341–343
    making data look less like the actual and more like the ideal 336–341
    samplers 338–339
    symptoms of overfitting 343–344

metrics_dict 303, 314

METRICS_PRED_NDX values 302

metrics_t parameter 298, 301

metrics_t tensor 428

millimeter-based coordinate system 265

minibatches 129, 184–185

mirroring 348–349

MIT license 367

mixed-precision training 475

mixup 435

MLflow 476

MNIST dataset 165–166

mobile deployment 472–476

mode_str argument 301

model design 217–229
  comparing designs 228–229
  depth of network 223–228
    building very deep models 226–228
    initialization 228
    skip connections 223–226
  outdated 229
  regularization 219–223
    batch normalization 222–223
    dropout 220–222
    weight penalties 219–220
  width of network 218–219

model function 131, 142

Model Runner function 450–451

model_runner function 453–454

model.backward() method 159

Model.load method 474

model.parameters() method 159

model.state_dict() function 397

model.train() method 223

ModelRunner class 452

models module 22

modules 151

MS COCO dataset 35

MSE (Mean Square Error) 157, 180, 182

MSELoss 175

multichannel images 197

multidimensional arrays, tensors as 42

multitask learning 436

mutating ops 53

N

N dimension 89

named tensors 46, 48–49

named_parameters method 159

names argument 48

NDET (Nodule detection) 251

ndx integer 272

needs_processing event 452, 454

needs_processing.ModelRunner 452

neg_list 418

neg_ndx 340

negLabel_mask 303

negPred_mask 302

netG model 30

neural networks
  __call__ method 152–153
  activation functions 145–149
    capping output range 146
    choosing 148–149
    compressing output range 146–147
  composing multilayer networks 144
  error function 144–145
  first-pass, for cancer detector 289–295
    converting from convolution to linear 294–295
    core convolutions 290–292
    full model 293–295
    initialization 295
  inspecting parameters 159–161
