TESI DOCTORAL - La Salle

3.2. Random hierarchical consensus architectures

running time. Secondly, notice that flat consensus tends to be computationally optimal on those data sets having small cluster ensembles, even in high diversity scenarios (e.g. Iris, Balance or MFeat). Thirdly, for data collections containing a large number of objects n (such as PenDigits), only the HGPA and MCLA consensus functions are executable under our experimental conditions, as they are the only ones whose complexity scales linearly with the data set size (see appendix A.5). And last, notice the predominance of RHCA variants with s = 2 and s = 3 stages among the fastest ones, which seems to indicate that, from a computational perspective, RHCA variants with few stages are more efficient, even though their consensus processes are conducted on large mini-cluster ensembles.
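The staged structure described above can be sketched as follows. This is a minimal illustration of the data flow of an s-stage random hierarchical consensus architecture, not the thesis implementation: the `majority_consensus` stand-in is a hypothetical placeholder for a real consensus function such as HGPA or MCLA (which, unlike this naive per-object vote, solve the cluster label correspondence problem), and the `branch` parameter naming the mini-ensemble size is an assumption for illustration.

```python
import random
from collections import Counter

def majority_consensus(partitions):
    # Placeholder consensus function: per-object majority label vote.
    # Real consensus functions (HGPA, MCLA, ...) align cluster labels
    # across partitions first; this stand-in only shows the data flow.
    n = len(partitions[0])
    return [Counter(p[i] for p in partitions).most_common(1)[0][0]
            for i in range(n)]

def rhca(ensemble, stages, branch, consensus_fn, seed=0):
    # s-stage random hierarchical consensus architecture sketch:
    # at each stage, randomly group the current ensemble into
    # mini-cluster ensembles of size `branch`, then replace each
    # mini-ensemble by its consensus partition.
    rng = random.Random(seed)
    current = list(ensemble)
    for _ in range(stages):
        rng.shuffle(current)
        current = [consensus_fn(current[i:i + branch])
                   for i in range(0, len(current), branch)]
        if len(current) == 1:
            break
    # A final consensus merges any remaining intermediate partitions.
    return consensus_fn(current) if len(current) > 1 else current[0]

# Usage: a toy ensemble of 8 labelings of 5 objects, consolidated
# with a 2-stage architecture grouping 3 partitions per mini-ensemble.
ensemble = [[0, 0, 1, 1, 2]] * 8
final = rhca(ensemble, stages=2, branch=3,
             consensus_fn=majority_consensus)
```

With few stages the mini-cluster ensembles handed to each consensus call are larger, which is precisely the trade-off discussed above between the number of stages and the per-stage consensus cost.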

Most of these observations can be extrapolated to the fully parallel consensus implementation (table 3.5), where we observe an overwhelming prevalence of RHCA variants over flat consensus, a trend already reported earlier in this section and in appendix C.2.

In the remainder of this work, experiments concerning random hierarchical consensus architectures are limited, for the sake of brevity, to those RHCA variants with minimum estimated running time.

