CLASSIFICATION AND PREDICTION - Universität Wien
Peter Brezany, Institut für Softwarewissenschaft, WS 2002<br />
Introduction (2)<br />
People are good at generalizing from experience.<br />
Computers usually excel at following explicit instructions over and over.<br />
Slide 3<br />
NNs bridge this gap by modeling, on a computer, the neural connections of the human brain.<br />
Their ability to generalize and learn from data mimics our own ability to learn from<br />
experience.<br />
This ability makes them useful for data mining.<br />
Drawback: the result of training a NN is a set of internal weights distributed throughout the<br />
network. These weights provide no more insight into why the solution is valid than asking<br />
human experts why a particular decision is the right one. They just know that it<br />
is.<br />
A Bit of History<br />
1943 (neurologist Warren McCulloch and logician Walter Pitts) – the original work on how<br />
neurons work (no digital computers were available at that time)<br />
Slide 4<br />
1950s - computer scientists implemented models called perceptrons based on the work of<br />
McCulloch and Pitts - some limited successes with perceptrons in the laboratory, but the<br />
results were disappointing for general problem-solving. One of the reasons: no<br />
powerful computers were available.<br />
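The perceptron mentioned above is simple enough to sketch. Below is a minimal, illustrative implementation of the classic perceptron learning rule; the training set (logical AND) and all parameter values are illustrative choices, not from the lecture.<br />

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b for a binary threshold unit."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Threshold activation: fire (1) if the weighted sum exceeds 0.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y
            # Perceptron update rule: nudge weights toward correcting the error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy example: logical AND is linearly separable, so the rule converges.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
```

Note the limitation that dampened early enthusiasm: a single perceptron can only learn linearly separable functions (AND works; XOR does not).<br />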
1970s - the study of NN implementations on computers slowed down drastically.<br />
1982 - John Hopfield's recurrent network model (the Hopfield network) revived interest in NNs;<br />
soon after, backpropagation (popularized by Rumelhart, Hinton, and Williams in 1986) provided a<br />
practical way of training NNs – a renaissance in NN research.<br />
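To make the idea of training by backpropagation concrete, here is a minimal sketch of one gradient-descent step for a single-hidden-layer network with sigmoid units and squared error. The network size, weights, and data are all illustrative assumptions; after the step, the output moves closer to the target, and note that the "learned knowledge" is nothing but these numeric weights, which is exactly the interpretability drawback mentioned earlier.<br />

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    # Hidden activations, then a single sigmoid output unit.
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    return h, y

def backprop_step(x, target, w_hidden, w_out, lr=0.5):
    """One gradient-descent step on the squared error 0.5 * (y - target)^2."""
    h, y = forward(x, w_hidden, w_out)
    # Output delta: dE/dnet for sigmoid activation + squared error.
    delta_out = (y - target) * y * (1 - y)
    # Hidden deltas: error propagated backwards through the output weights.
    delta_h = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    # Gradient-descent weight updates.
    new_w_out = [w_out[j] - lr * delta_out * h[j] for j in range(len(h))]
    new_w_hidden = [[w_hidden[j][i] - lr * delta_h[j] * x[i]
                     for i in range(len(x))] for j in range(len(h))]
    return new_w_hidden, new_w_out

# Toy setup: 2 inputs, 2 hidden units, 1 output (all values illustrative).
x, target = (1.0, 0.5), 1.0
w_hidden = [[0.1, -0.2], [0.3, 0.4]]
w_out = [0.2, -0.1]
_, y_before = forward(x, w_hidden, w_out)
w_hidden, w_out = backprop_step(x, target, w_hidden, w_out)
_, y_after = forward(x, w_hidden, w_out)
```

Repeating such steps over many examples is what "training" means here; the insight backpropagation added was an efficient way to compute the hidden-layer deltas via the chain rule.<br />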
1980s - research moved from the labs into the commercial world.<br />
NNs have since been applied in virtually every industry.