The algorithm searches for the set of features that best separates a genuine user from an impostor, determining a feature subset for each user. Class tuned feature selection has been used previously in biometric applications, for example in [119].

The overall time to run the algorithm increases with the number of users, but the improvement in classification performance justifies the increased computational cost. To reduce the computational burden, we studied the time complexity of the classification system and created faster versions of the employed methods. This is described in the following subsections.

5.2.2 Computational Improvements on Feature Selection

The overall learning and classification process proved to be highly computationally demanding. We evaluated the time complexity of the several steps involved in the system and made some algorithmic improvements in order to decrease the time required to obtain the results. We describe the weight of each block in the overall time complexity. Figure 5.3 shows a diagram of the steps involved in the classification system, indicating the corresponding time complexity order.

Figure 5.3: Blocks of the classification system with their respective time complexity order.

The feature extraction block, when run in batch mode for all the acquired data, requires an amount of time proportional to the amount of data to process. For the feature extraction operation, the processing time increases linearly with the number of users (n_c) and with the number of features to extract (n_f), leading to a time complexity of O(n_f n_c). We verified experimentally that the extraction of all the features was always faster than the data acquisition time for the signals studied in the present work (our research team's slowest computer, operating at 2.0 GHz, performed well in this task).

The parametric model learning may require some optimization steps. In learning the parameters of each feature, we assumed conditional independence between features, resulting in complexity that is linear in the number of features and users (O(n_f n_c) time complexity).
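The per-user selection and the independence-based parametric learning can be pictured with a small sketch. The following Python fragment is not the implementation used in this work; the function names, the mean-separation scoring rule, and the per-feature Gaussian model are assumptions made only to illustrate how fitting one model per feature and per user yields the O(n_f n_c) scaling discussed above.

```python
# Minimal sketch (illustrative only, not the thesis implementation).
import numpy as np

def select_features_for_user(features, labels, n_keep):
    """Toy per-user ('class tuned') selection: rank features by the gap
    between genuine and impostor means, normalised by the pooled spread."""
    genuine, impostor = features[labels == 1], features[labels == 0]
    spread = features.std(axis=0) + 1e-9
    score = np.abs(genuine.mean(axis=0) - impostor.mean(axis=0)) / spread
    return np.argsort(score)[::-1][:n_keep]

def learn_user_model(features, labels, selected):
    """Fit one Gaussian per selected feature for the genuine-user class.
    Under the conditional independence assumption each feature is fitted
    on its own, so learning all n_c user models costs O(n_f * n_c)."""
    genuine = features[labels == 1][:, selected]
    return {"mean": genuine.mean(axis=0),
            "std": genuine.std(axis=0) + 1e-9,   # avoid zero variance
            "selected": selected}

def log_likelihood(model, sample):
    """Sum of per-feature Gaussian log-likelihoods (independence assumption)."""
    x = sample[model["selected"]]
    z = (x - model["mean"]) / model["std"]
    return float(np.sum(-0.5 * z**2 - np.log(model["std"]) - 0.5 * np.log(2 * np.pi)))

# Example with synthetic data: 200 samples, 6 features, keep 3 per user.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (rng.random(200) < 0.5).astype(int)          # 1 = genuine user, 0 = impostor
subset = select_features_for_user(X, y, n_keep=3)
model = learn_user_model(X, y, subset)
print(subset, log_likelihood(model, X[0]))
```

The outer loop over users and the inner per-feature fits are what keep the learning cost linear in both n_c and n_f; any selection criterion that scores features independently preserves that scaling.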
