Machine Learning - DISCo

SUBJECT INDEX

Normal distribution, 133, 139-140, 143, 151, 165
  for noise, 167
  in paired tests, 149
Occam's razor, 4, 65-66, 171
Offline learning systems, 385
One-sided bounds, 141, 144
Online learning systems, 385
Optimal brain damage approach, 122
Optimal code, 172
Optimal mistake bounds, 222-223
Optimal policy for selecting actions, 371-372
Optimization problems:
  explanation-based learning in, 325
  genetic algorithms in, 256, 269
  reinforcement learning in, 256
Output encoding in face recognition, 114-115
Output units, BACKPROPAGATION weight update rule for, 102-103
Overfitting, 123
  in BACKPROPAGATION algorithm, 108, 110-111
  in decision tree learning, 66-69, 76-77, 111
  definition of, 67
  Minimum Description Length principle and, 174
  in neural network learning, 123
PAC learning, 203-207, 225, 226
  of boolean conjunctions, 211-212
  definition of, 206-207
  training error in, 205
  true error in, 204-205
Paired tests, 147-150, 152
Parallelization in genetic algorithms, 268
Partially learned concepts, 38-39
Partially observable states in reinforcement learning, 369-370
Perceptron training rule, 88-89, 94, 95
Perceptrons, 86, 95, 96, 123
  representation of boolean functions, 87-88
  VC dimension of, 219
  weight update rule, 88-89, 94, 95
Perfect domain theory, 312-313
Performance measure, 6
Performance system, 11-12, 13
Philosophy, influence on machine learning, 4
Planning problems:
  PRODIGY in, 327
  case-based reasoning in, 240-241
Policy for selecting actions, 370-372
Population evolution, in genetic algorithms, 260-262
Positive literal, 284, 285
Post-pruning:
  in decision tree learning, 68-69, 77, 281
  in FOIL algorithm, 291
  in LEARN-ONE-RULE, 281
Posterior probability, 155-156, 162
Power law of practice, 4
Power set, 40-42
Predicates, 284, 285
Preference bias, 64, 76, 77
Prior knowledge, 155-156, 336. See also Domain theory
  to augment search operators, 357-361
  in Bayesian learning, 155
  derivatives of target function, 346-356, 362
  in explanation-based learning, 308-309
  explicit, use in learning, 329
  in human learning, 330
  initialize-the-hypothesis approach, 339-346, 362
  in PROLOG-EBG, 313
  search alteration in inductive-analytical learning, 339-340, 362
  weighting in inductive-analytical learning, 338, 362
Prioritized sweeping, 380
Probabilistic reasoning, 163
Probabilities:
  estimation of, 179-180
  formulas, 159
  maximum likelihood (ML) hypothesis for prediction of, 167-170
Probability density, 165
