

After the MFNN has been trained successfully, it is able to discriminate between normal and abnormal texture regions by forming hyperplane decision boundaries in the pattern space.

In [43], [33] and [31] the ANN is fed with feature vectors which contain wavelet coefficient co-occurrence matrix features computed from the subbands resulting from a 1-level DWT. Karkanis et al. use a very similar approach in [25], but consider only the subband exhibiting the maximum variance for feature vector creation.
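Such features are straightforward to prototype. The following is a minimal sketch, assuming Python with the pywt and scikit-image packages; the wavelet ('db1'), the number of quantization levels, the co-occurrence offsets and the four statistics are illustrative choices, not the exact configuration of the cited papers.

    import numpy as np
    import pywt
    from skimage.feature import graycomatrix, graycoprops

    def wavelet_cooccurrence_features(patch, levels=16):
        # 1-level 2-D DWT: approximation plus three detail subbands.
        _, (cH, cV, cD) = pywt.dwt2(patch.astype(float), 'db1')
        features = []
        for band in (cH, cV, cD):
            # Quantize the real-valued coefficients to a few gray
            # levels so a co-occurrence matrix can be accumulated.
            span = band.max() - band.min() + 1e-12
            q = np.floor((band - band.min()) / span * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            for prop in ('contrast', 'correlation', 'energy', 'homogeneity'):
                features.extend(graycoprops(glcm, prop).ravel())
        return np.asarray(features)  # 3 subbands x 4 statistics x 2 angles

    # Random stand-in for a colonoscopy texture patch:
    print(wavelet_cooccurrence_features(np.random.rand(64, 64)).shape)  # (24,)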

According to the results reported in these publications, the classification results are promising, since the system used has proven capable of classifying and locating regions of lesions with a success rate of 94% up to 99%.

The approach in [44] uses an artificial neural network for classification too. The color features extracted in this approach are used as input for a Backpropagation Neural Network (BPNN), which is trained using various training algorithms such as resilient propagation, the scaled conjugate gradient algorithm and the Marquardt algorithm.

Depending on the training algorithm used and the combination of features given as input to the BPNN, this approach reaches an average classification accuracy between 89% and 98%.
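As a rough illustration of varying the training algorithm while keeping the network and features fixed, the sketch below uses scikit-learn's MLPClassifier on synthetic stand-in data; scikit-learn implements neither resilient propagation nor the Marquardt algorithm, so its stock 'lbfgs' and 'adam' solvers are substituted, and the feature dimensionality is invented.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for the color feature vectors of [44];
    # the dimensionality 26 is invented.
    X, y = make_classification(n_samples=400, n_features=26, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Same network, different training algorithms ('lbfgs' and 'adam'
    # stand in for Rprop/SCG/Marquardt, which scikit-learn lacks).
    for solver in ('lbfgs', 'adam'):
        bpnn = MLPClassifier(hidden_layer_sizes=(20,), solver=solver,
                             max_iter=2000, random_state=0)
        bpnn.fit(X_tr, y_tr)
        print(f'{solver}: test accuracy = {bpnn.score(X_te, y_te):.2f}')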

The last of the presented methods using artificial neural networks is described in [13]. The 84 features extracted with this method are used as input for a multilayer backpropagation neural network. This results in a classification accuracy between 85% and 90%.

3.3.3 SVM<br />

The SVM classifier, further described in [7, 20], is another, more recent technique for data classification, which has already been used, for example, by Rajpoot in [36] to classify texture using wavelet features.

The basic idea behind support vector machines is to construct classifying hyperplanes which are optimal for separating the given data. Apart from that, the hyperplanes constructed from some training data should also be able to classify unknown data presented to the classifier as well as possible.

Figure 3.2(a) shows an example of a 2-dimensional feature space with linearly separable features of two classes A (filled circles) and B (outlined circles). The black line running through the feature space is the hyperplane separating the feature space into two half spaces. Additionally, two further lines, drawn in gray in figure 3.2, can be seen on either side of the hyperplane. These lines are boundaries which have the same distance h to the separating hyperplane at any point.

These boundaries are important, as feature vectors are allowed to lie on them, but not between them and the hyperplane. Therefore all feature vectors must satisfy the constraint that their distance to the hyperplane is equal to or greater than h. Since many classifying hyperplanes are possible, the SVM algorithm maximizes the value of h such that only one possible hyperplane remains.
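In the textbook hard-margin formulation (not spelled out here) this amounts to the quadratic program

    \min_{w,b} \tfrac{1}{2}\|w\|^2 \quad \text{subject to} \quad y_i (w \cdot x_i + b) \geq 1, \quad i = 1, \ldots, l,

for training vectors x_i with labels y_i in {-1, +1}; the margin is then h = 1/||w||, so minimizing ||w|| maximizes h. The following toy example, with made-up coordinates analogous to figure 3.2(a), shows this with scikit-learn's linear SVC, where a very large penalty C approximates the hard-margin case.

    import numpy as np
    from sklearn.svm import SVC

    # Toy 2-D feature space analogous to figure 3.2(a); the point
    # coordinates are made up.
    A = np.array([[1.0, 2.0], [2.0, 3.0], [2.0, 1.5]])   # class A (filled)
    B = np.array([[5.0, 5.0], [6.0, 4.0], [5.5, 6.0]])   # class B (outlined)
    X, y = np.vstack([A, B]), np.array([-1, -1, -1, 1, 1, 1])

    # A very large C approximates the hard-margin case, leaving the
    # unique maximum-margin hyperplane w.x + b = 0.
    svm = SVC(kernel='linear', C=1e6).fit(X, y)
    w, b = svm.coef_[0], svm.intercept_[0]
    h = 1.0 / np.linalg.norm(w)   # distance from hyperplane to each boundary
    print('w =', w, 'b =', b, 'margin h =', h)
    print('support vectors:')
    print(svm.support_vectors_)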

