Combining Pattern Classifiers


PREFACE

learning communities, albeit scattered across various literature sources and disguised under different names and notations. Yet some of the methods and algorithms in the book are less well known. My choice was guided by how intuitive, simple, and effective the methods are. I have tried to give sufficient detail so that the methods can be reproduced from the text. For some of them, simple Matlab code is given as well. The code is not foolproof, nor is it optimized for time or other efficiency criteria. Its sole purpose is to enable the reader to experiment. Matlab was seen as a suitable language for such illustrations because it often looks like executable pseudocode.
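As a flavor of that style, here is a minimal sketch of one of the simplest combiners, a plurality (majority) vote over the label outputs of several classifiers. The snippet is illustrative only and is not taken from the book; the variable names are made up for this example.

    % Plurality vote over label outputs (illustrative sketch, not from the book).
    % labels(i,j) is the class label assigned by classifier j to object i.
    labels = [1 1 2; 2 2 2; 1 2 1; 3 3 1];  % toy outputs: 4 objects, 3 classifiers
    ensemble_label = mode(labels, 2)        % most frequent label in each row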

I have refrained from making strong recommendations about the methods and algorithms. The computational examples given in the book, with real or artificial data, should not be regarded as a guide for preferring one method to another. The examples are meant to illustrate how the methods work. Making an extensive experimental comparison is beyond the scope of this book. Besides, the fairness of such a comparison rests on the conscientiousness of the designer of the experiment. J.A. Anderson says it beautifully [89]:

    There appears to be imbalance in the amount of polish allowed for the techniques. There is a world of difference between “a poor thing – but my own” and “a poor thing but his”.

The book is organized as follows. Chapter 1 gives a didactic introduction to the main concepts in pattern recognition, Bayes decision theory, and experimental comparison of classifiers. Chapter 2 contains methods and algorithms for designing the individual classifiers, called the base classifiers, to be used later as an ensemble. Chapter 3 discusses some philosophical questions in combining classifiers, including: “Why should we combine classifiers?” and “How do we train the ensemble?” Being a quickly growing area, combining classifiers is difficult to fit into a unified terminology, taxonomy, or set of notations. New methods appear that have to be accommodated within the structure, which makes it look like patchwork rather than a tidy hierarchy. Chapters 4 and 5 summarize the classifier fusion methods used when the individual classifiers give label outputs or continuous-valued outputs, respectively. Chapter 6 is a brief summary of a different approach to combining classifiers, termed classifier selection. The two most successful methods for building classifier ensembles, bagging and boosting, are explained in Chapter 7. A compilation of topics is presented in Chapter 8: feature selection for the ensemble, error-correcting output codes (ECOC), and clustering ensembles. Theoretical results found in the literature are presented in Chapter 9. Although the chapter lacks coherence, it was considered appropriate to put together a list of such results along with the details of their derivation. The need for a general theory that underpins classifier combination has been acknowledged regularly, but such a theory does not exist as yet. The collection of results in Chapter 9 can be regarded as a set of jigsaw pieces awaiting further work. Diversity in classifier combination is a controversial issue: it is a necessary component of a classifier ensemble, and yet its role in the ensemble performance is ambiguous. Little has been achieved by measuring diversity and
