Machine Learning - DISCo


LEARN-ONE-RULE algorithm:<br />

FOIL algorithm, comparison with, 287<br />

ID3 algorithm, search comparison with, 277<br />

rule performance in, 282<br />

rule post-pruning in, 281<br />

variations of, 279-280, 286<br />

<strong>Learning</strong>:<br />

human. See Human learning<br />

machine. See <strong>Machine</strong> learning<br />

<strong>Learning</strong> algorithms<br />

consistent learners, 162-163<br />

design of, 9-11, 17<br />

domain-independent, 336<br />

error differences between two, 145-151<br />

search of hypothesis space, 24<br />

<strong>Learning</strong> problems, 2-5, 17<br />

computational theory of, 201-202<br />

in inductive-analytical learning, 337-338<br />

<strong>Learning</strong> rate, 88, 91<br />

<strong>Learning</strong> systems:<br />

design of, 5-14, 17<br />

program modules, 11-13<br />

Least mean squares algorithm. See LMS algorithm<br />

Least-squared error hypothesis:<br />

classifiers for, 198<br />

gradient descent in, 167<br />

maximum likelihood (ML) hypothesis and, 164-167<br />

Leave-one-out cross-validation, 235<br />

Legal case reasoning, case-based reasoning in, 240<br />

LEMMA-ENUMERATOR algorithm, 324<br />

Lifelong learning, 370<br />

Line search, 119<br />

Linear programming, as weight update algorithm, 95<br />

Linearly separable sets, 86, 89, 95<br />

LIST-THEN-ELIMINATE algorithm, 30<br />

Literal, 284, 285<br />

LMS algorithm, 11, 15<br />

inductive bias of, 64<br />

LMS weight update rule. See Delta rule<br />

Local method, 234<br />

Locally weighted regression, 231, 236-238, 246<br />

limitations of, 238<br />

weight update rules in, 237-238<br />

Logical constants, 284, 285<br />

Logical terms, 284, 285<br />

Logistic function, 96, 104<br />

Lookup table:<br />

function approximation algorithms as substitute, 384<br />

neural network as substitute, 384<br />

Lower bound on sample complexity, 217-218<br />

m-estimate of probability, 179-180, 198, 282<br />

<strong>Machine</strong> learning, 15. See also entries beginning with <strong>Learning</strong><br />

applications, 3, 17<br />

definition of, 2<br />

influence of other disciplines on, 4, 17<br />

search of hypothesis space, 14-15, 18<br />

Manufacturing process control, 17<br />

MAP hypothesis. See Maximum a posteriori hypothesis<br />

MAP LEARNING algorithm, BRUTE-FORCE. See BRUTE-FORCE MAP LEARNING algorithm<br />

Markov decision processes (MDP), 370, 387<br />

applications of, 386<br />

MARKUS, 302<br />

MARVIN, 302<br />

Maximally general hypotheses:<br />

computation by CANDIDATE-ELIMINATION algorithm, 31, 46<br />

Maximally specific hypotheses:<br />

computation by CANDIDATE-ELIMINATION algorithm, 31, 46<br />

computation by FIND-S algorithm, 26-28, 62-63<br />

Maximum a posteriori (MAP) hypothesis, 157, 197. See also BRUTE-FORCE MAP LEARNING algorithm<br />

naive Bayes classifier and, 178<br />

output of consistent learners, 162-163<br />

Maximum likelihood (ML) hypothesis, 157<br />

EM algorithm search for, 194-195<br />

least-squared error hypothesis and, 164-167<br />
