Deep-Learning-with-PyTorch

INDEX

data loading (continued)
  constructing dataset in LunaDataset.__init__ 275
  rendering data 277
  segregation between training and validation sets 275–276
data representation using tensors
  images 71–75
    3D images 75–76
    adding color channels 72
    changing layout 73–74
    loading image files 72–73
    normalizing data 74–75
  tabular data 77–87
    categorization 83–84
    loading a data tensor 78–80
    one-hot encoding 81–83
    real-world dataset 77–78
    representing scores 81
    thresholds 84–87
  text 93–101
    converting text to numbers 94
    one-hot encoding characters 94–95
    one-hot encoding whole words 96–98
    text embeddings 98–100
    text embeddings as blueprint 100–101
  time series 87–93
    adding time dimensions 88–89
    shaping data by time period 89–90
    training 90–93

Data Science Bowl 2017 438
data tensor 85
data.CIFAR10 dataset 167
DataLoader class 11, 280, 284, 288, 381, 414
DataParallel 286, 387
dataset argument 418
Dataset class 11, 166–167
Dataset subclass 173, 256, 271–273, 275, 279, 284, 339, 378, 414
dataset.CIFAR10 169
datasets module 166

deep learning
  exercises 15
  hardware and software requirements 13–15
  paradigm shift from 4–6
  PyTorch for 6–9
    how supports deep learning projects 10–13
    reasons for using 7–9
def __len__ method 272
dense tensors 65
DenseNet 226

deployment
  enterprise serving of PyTorch models 476
  exporting models 455–458
    ONNX 455–456
    tracing 456–458
  interacting with PyTorch JIT 458–465
    dual nature of PyTorch as interface and backend 460
    expectations 458–460
    scripting gaps of traceability 464–465
    TorchScript 461–464
  LibTorch 465–472
    C++ API 468–472
    running JITed models from C++ 465–468
  mobile 472–476
  serving PyTorch models 446–454
    Flask server 446–448
    goals of deployment 448–449
    request batching 449–454
depth of network 223–228
  building very deep models 226–228
  initialization 228
  skip connections 223–226

device argument 64
device attribute 63–64
device variable 215
diameter_mm 258
Dice loss 389–392
  collecting metrics 392
  loss weighting 391
diceLoss_g 391
DICOM (Digital Imaging and Communications in Medicine) 76, 256, 267
DICOM UID 264
Dirac distribution 187
discrete convolution 195
discrete cross-correlations 195
discrimination 337
discriminator network 28
diskcache library 274, 384
dispatching mechanism 65
DistilBERT 475
distillation 475
DistributedDataParallel 286
doTraining function 296, 301, 314
doValidation function 301, 393, 397
downsampling 203–204
dropout 25, 220–222
Dropout module 221
DSB (Data Science Bowl) 438
dsets.py:32 260

dtype argument 64
  managing 51–52
  precision levels 51
  specifying numeric types with 50–51
dtype torch.float 65
dull batches 367

E

edge detection kernel 201
einsum function 48
embedding text
  as blueprint 100–101
  data representation with tensors 98–100
embeddings 96, 99

end-to-end analysis 405–407
  bridging CT segmentation and nodule candidate classification 408–416
    classification to reduce false positives 412–416
    grouping voxels into nodule candidates 411–412
    segmentation 410–411
  diagnosis script 432–434
  independence of validation set 407–408
  predicting malignancy 417–431
    classifying by diameter 419–422
    getting malignancy information 417–418
    reusing preexisting weights 422–427
    TensorBoard 428–431
