Rating Models and Validation - Oesterreichische Nationalbank
adapts the network according to any deviations it finds. Probably the most commonly<br />
used method of making such changes in networks is the adjustment of<br />
weights between neurons. These weights indicate how important a piece of<br />
information is considered to be for the network's output. In extreme cases,<br />
the link between two neurons will be deleted by setting the corresponding<br />
weight to zero.<br />
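As a minimal illustration of this idea (the inputs, weights and sigmoid activation below are arbitrary choices for demonstration, not taken from the source):

```python
import math

def neuron_output(inputs, weights):
    """A single artificial neuron: a weighted sum of its inputs
    passed through a sigmoid activation function."""
    z = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-z))

inputs = [0.8, 0.3, 0.5]

# The weights express how important each input is to the neuron.
weights = [0.9, 0.1, -0.4]
print(neuron_output(inputs, weights))

# Setting a weight to zero effectively deletes the link: the second
# input no longer has any influence on the neuron's output.
weights_pruned = [0.9, 0.0, -0.4]
print(neuron_output(inputs, weights_pruned))
```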
A classic learning algorithm which defines the procedure for adjusting<br />
weights is the back-propagation algorithm. This term refers to "a gradient<br />
descent method which calculates changes in weights according to the errors<br />
made by the neural network." 30<br />
In the first step, output results are generated for a number of data records.<br />
The deviation of the calculated output o_d from the actual output t_d is measured<br />
using an error function. The sum-of-squares error function is frequently<br />
used in this context:<br />
e = ½ Σ_d (t_d - o_d)²<br />
The calculated error can be back-propagated and used to adjust the relevant<br />
weights. This process begins at the output layer and ends at the input layer. 31<br />
When training an artificial neural network, it is important to avoid what is<br />
referred to as overfitting. Overfitting refers to a situation in which an artificial<br />
neural network processes the same learning data records again and again until it<br />
begins to recognize and "memorize" specific data structures within the sample.<br />
This results in high discriminatory power in the learning sample used, but low<br />
discriminatory power in unknown samples. Therefore, the overall sample used<br />
in developing such networks should always be divided into a learning, a testing<br />
and a validation sample in order to review the network's learning success using<br />
"unknown" samples and to stop the training procedure in time. This need to<br />
divide up the sample also increases the quantity of data required.<br />
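The three-way split and the early stopping of training can be sketched as follows; the data, the single-layer network, and the patience rule are hypothetical assumptions chosen only to make the mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical data: 2 input ratios, binary label with noise.
X = rng.normal(size=(90, 2))
y = ((X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=90)) > 0).astype(float)[:, None]

# Divide the overall sample into learning, testing and validation parts.
X_learn, y_learn = X[:50], y[:50]
X_test,  y_test  = X[50:70], y[50:70]   # "unknown" sample used to stop training
X_val,   y_val   = X[70:], y[70:]       # held back for the final review

W = rng.normal(scale=0.1, size=(2, 1))
lr, patience = 0.1, 50
best_err, since_best = float("inf"), 0

for epoch in range(5000):
    # One gradient-descent step on the learning sample.
    o = sigmoid(X_learn @ W)
    W -= lr * X_learn.T @ ((o - y_learn) * o * (1 - o))

    # Early stopping: watch the error on the unknown testing sample.
    o_test = sigmoid(X_test @ W)
    err = 0.5 * np.sum((y_test - o_test) ** 2)
    if err < best_err:
        best_err, since_best = err, 0
    else:
        since_best += 1
    if since_best > patience:   # testing error no longer improving
        break                   # stop before the network memorizes the sample

print(best_err)
```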
Application of Artificial Neural Networks<br />
Neural networks are able to process both quantitative <strong>and</strong> qualitative data<br />
directly, which makes them especially suitable for the depiction of complex rating<br />
models which have to take various information categories into account.<br />
Although artificial neural networks regularly demonstrate high discriminatory<br />
power <strong>and</strong> do not involve special requirements regarding input data, these rating<br />
models are still not very prevalent in practice. The reasons for this lie in the<br />
complex network modeling procedures involved and the "black box" nature of<br />
these networks. As the inner workings of artificial neural networks are not<br />
transparent to the user, they are especially susceptible to acceptance problems.<br />
One example of an artificial neural network used in practice is the BBR<br />
(Baetge-Bilanz-Rating® BP-14), used for companies which prepare balance<br />
sheets. This artificial neural network uses 14 different figures from annual financial<br />
statements as input parameters and compresses them into an "N-score," on<br />
the basis of which companies are assigned to rating classes.<br />
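The general idea of compressing 14 financial figures into a single score and mapping that score to rating classes can be sketched as follows. The actual BP-14 network architecture, weights, ratio definitions, and class thresholds are proprietary; everything below is a hypothetical stand-in.

```python
import math

def n_score(ratios, weights):
    """Compress 14 input ratios into one score in (0, 1) via a
    weighted sum and a sigmoid squashing (hypothetical stand-in
    for the real, multi-layer BP-14 network)."""
    assert len(ratios) == len(weights) == 14
    z = sum(w * r for w, r in zip(weights, ratios))
    return 1.0 / (1.0 + math.exp(-z))

def rating_class(score):
    """Assign a rating class from the score; thresholds are made up."""
    for threshold, label in [(0.8, "AAA"), (0.6, "A"), (0.4, "BBB"), (0.2, "B")]:
        if score >= threshold:
            return label
    return "C"

# Placeholder weights and annual-statement figures, for illustration only.
weights = [0.3, -0.2, 0.5, 0.1, -0.4, 0.2, 0.3,
           -0.1, 0.4, 0.2, -0.3, 0.1, 0.2, -0.2]
ratios = [0.5] * 14
print(rating_class(n_score(ratios, weights)))
```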
30 See HEITMANN, C., Neuro-Fuzzy, p. 85.<br />
31 Cf. HEITMANN, C., Neuro-Fuzzy, p. 86ff.<br />
Guidelines on Credit Risk Management