
D.1. Experiments on consensus-based self-refining

resources —see appendix A.6).

Moreover, the clustering solution deemed optimal by the supraconsensus function described in section 4.1 in a majority of experiment runs is highlighted by means of a vertical green dashed line, so that its performance can be qualitatively evaluated at a glance.
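The selection step can be illustrated with a short sketch. This is a minimal illustration rather than the thesis' implementation: it assumes the supraconsensus function picks, among the candidate consensus clusterings, the one maximizing the average normalized mutual information φ(NMI) against the components of the cluster ensemble (an ANMI-style criterion), and all function names are hypothetical.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (nats) of a flat label assignment."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def mutual_info(a, b):
    """Mutual information between two label assignments of the same objects."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    joint = Counter(zip(a, b))
    return sum((c / n) * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in joint.items())

def nmi(a, b):
    """phi(NMI): mutual information normalized by the geometric mean of entropies."""
    ha, hb = entropy(a), entropy(b)
    if ha == 0.0 or hb == 0.0:
        return 0.0
    return mutual_info(a, b) / math.sqrt(ha * hb)

def supraconsensus(candidates, ensemble):
    """Hypothetical selector: return the candidate clustering whose average
    phi(NMI) against the ensemble components is highest."""
    return max(candidates,
               key=lambda lam: sum(nmi(lam, e) for e in ensemble) / len(ensemble))

# Toy usage: the candidate agreeing with the ensemble wins.
ensemble = [[0, 0, 1, 1], [0, 0, 2, 2]]
best = supraconsensus([[0, 1, 0, 1], [0, 0, 1, 1]], ensemble)
```

Note that φ(NMI) is invariant to label permutation, so the two ensemble components above are treated as identical partitions.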

D.1.1 Iris data set

Figure D.1 presents the results of the self-refining consensus procedure applied to the Iris data set. As regards the results obtained using the CSPA consensus function, we can see that self-refining introduces no variation in quality with respect to the non-refined consensus clustering solution λc in the case of the flat and RHCA consensus architectures. In contrast, slight but important φ(NMI) gains are obtained in the case of DHCA with the refined clustering solutions λc20 and λc40. Unfortunately, the supraconsensus function fails to select one of the highest quality clustering solutions in this case. A very similar situation is observed in the self-refining experiments based on the EAC, ALSAD and SLSAD consensus functions.

Examples in which self-refining and supraconsensus perform successfully are those regarding both hierarchical consensus architectures using the HGPA consensus function. In these cases, self-refined consensus clustering solutions of higher quality than that of λc are obtained and selected by the supraconsensus function. In contrast, in the experiments based on MCLA, little (if any) φ(NMI) gain is obtained via refining, and supraconsensus tends to select a clustering solution of slightly lower quality than λc.

D.1.2 Wine data set

The results corresponding to the application of the consensus-based self-refining procedure on the Wine data set are depicted in figure D.2. As far as the refining of the consensus solution output by the flat consensus architecture is concerned (leftmost column of figure D.2), we can see that quality improvements with respect to the initial consensus clustering λc are obtained in all cases except when the HGPA consensus function is employed. In some cases, supraconsensus manages to select the highest quality clustering, such as when consensus is based on MCLA and SLSAD, whereas suboptimal solutions are selected in other cases —see for instance the CSPA, EAC, HGPA and ALSAD boxplots.

We would like to highlight the especially good results obtained on the self-refining of the consensus output by the DHCA architecture (see the rightmost column of figure D.2). Regardless of the consensus function employed, the self-refining procedure gives rise to higher quality clustering solutions, and the supraconsensus function selects the top quality one in most cases.

D.1.3 Glass data set

Figure D.3 presents the results of the consensus self-refining process when applied to the Glass data collection. In this case, little φ(NMI) gain is obtained by self-refining for most consensus functions. The clearest exception is EAC, where notable quality increases are observed, especially when self-refining is applied to the consensus solutions output by the hierarchical consensus architectures (RHCA and DHCA).

