Deep Learning with PyTorch

INDEX

bool tensors 51

Boolean indexing 302

Bottleneck modules 23

bounding boxes 372–375

boundingBox_a 373

boxed numeric values 43

boxing 50

broadcasting 47, 111, 155

buffer protocol 64

_build2dTransformMatrix function 386

byte pair encoding method 97

C

C++

  C++ API 468–472

  LibTorch 465–472

  running JITed models from C++ 465–468

__call__ method 152–153

cancer detector project

  classification model training

    disconnect 315–316

    evaluating the model 308–309

    first-pass neural network design 289–295

    foundational model and training loop 280–282

    graphing training metrics 309–314

    main entry point for application 282–284

    outputting performance metrics 300–304

    pretraining setup and initialization 284–289

    running training script 304–307

    training and validating the model 295–300

  CT scans 238–241

  data augmentation 346–354

    improvement from 352–354

    techniques 347–352

  data loading

    loading individual CT scans 262–265

    locating nodules 265–271

    parsing LUNA's annotation data 256–262

    raw CT data files 256

    straightforward dataset implementation 271–277

  deployment

    enterprise serving of PyTorch models 476

    exporting models 455–458

    interacting with PyTorch JIT 458–465

    LibTorch 465–472

    mobile 472–476

    serving PyTorch models 446–454

  difficulty of 245–247

  end-to-end analysis 405–407

    bridging CT segmentation and nodule candidate classification 408–416

    diagnosis script 432–434

    independence of validation set 407–408

    predicting malignancy 417–431

    quantitative validation 416–417

    training, validation, and test sets 433–434

  false positives and false negatives 320–322

  high-level plan for improvement 319–320

  LUNA Grand Challenge data source

    downloading 251–252

    overview 251

  metrics

    graphing positives and negatives 322–333

    ideal dataset 334–344

  nodules 249–250

  overview 236–237

  preparing for large-scale projects 237–238

  second model 358–360

  segmentation

    semantic segmentation 361–366

    types of 360–361

    updating dataset for 369–386

    updating model for 366–369

    updating training script for 386–399

  structure of 241–252

candidate_count 412

candidateInfo_list 381, 414

candidateInfo_tup 373

CandidateInfoTuple data structure 377

candidateLabel_a array 411

categorical values 80

center_crop 464

center_index - index_radius 373

center_index + index_radius 373

chain rule 123

ChainDataset 174

channel dimension 76

channels 197

CIFAR-10 dataset 165–173

  data transforms 168–170

  Dataset class 166–167

  downloading 166

  normalizing data 170–172

CIFAR-100 166

cifar2 object 174

CImg library 465–468

class balancing 339–341

class_index 180

classification

  classifying by diameter 419–422

  to reduce false positives 412–416

classification model training

  disconnect 315–316

  evaluating the model 308–309

  first-pass neural network design 289–295

    converting from convolution to linear 294–295

    core convolutions 290–292

    full model 293–295

    initialization 295

  foundational model and training loop 280–282

  graphing training metrics 309–314

    adding TensorBoard support to the metrics logging function 313–314

    running TensorBoard 309–313

    writing scalars to TensorBoard 314

  main entry point for application 282–284
