A Parallel Computing Hybrid Approach for Feature Selection
Aguiar, A.
A Parallel Computing Hybrid Approach for Feature Selection, Proc. IEEE International Conference on Computer Science and Engineering (CSE), Porto, Portugal, Vol. 1, pp. 1-1, October 2015.
Abstract
The ultimate goal of feature selection is to select, from an original set of features, the smallest subset that yields the minimum generalization error. This effectively reduces the feature space and thus the complexity of classifiers. Although several algorithms have been proposed, no single one outperforms all the others in all scenarios, and the problem remains an active field of research. This paper proposes a new hybrid parallel approach to feature selection. The idea is to use a filter metric to reduce the feature space and then apply an innovative wrapper method to search extensively for the best solution. The proposed strategy is implemented in a shared-memory parallel environment to speed up the process. We evaluated its parallel performance using up to 32 cores, and our results show a 30-fold speedup. To test feature-selection performance, we used five datasets from the well-known NIPS challenge and obtained an average score of 95.90% across all solutions.