A Hybrid Approach for Feature Selection Based on Genetic Algorithm and Recursive Feature Elimination

Pooja Rani, Rajneesh Kumar, Anurag Jain, Sunil Kumar Chawla
Copyright © 2021 | Pages: 22
DOI: 10.4018/IJISMD.2021040102

Abstract

Machine learning has become an integral part of modern life, but when applied to real-world problems it often suffers from high-dimensional data. Such data can contain unnecessary and redundant features, which degrade the performance of the classification systems used for prediction, so selecting the important features is the first step in developing any decision support system. In this paper, the authors propose GARFE, a hybrid feature selection method that integrates a genetic algorithm (GA) with recursive feature elimination (RFE). The efficiency of the proposed method is analyzed using a support vector machine classifier on the scale of accuracy, sensitivity, specificity, precision, F-measure, and execution time. The proposed GARFE method is also compared with eight other feature selection methods. Results demonstrate that GARFE improves the performance of classification systems by removing irrelevant and redundant features.
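
As a rough illustration of the idea in the abstract, the sketch below combines a GA subset search with an RFE refinement step, using a linear SVM as the evaluator. Everything concrete in it (scikit-learn, the built-in breast cancer dataset, the population size, tournament selection, uniform crossover, the mutation rate, and letting RFE keep half of the GA's best subset) is an illustrative assumption, not the authors' published GARFE procedure.

    # Hypothetical GA + RFE hybrid sketch; not the published GARFE code.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = load_breast_cancer(return_X_y=True)
    n_features = X.shape[1]

    def fitness(mask):
        """Cross-validated accuracy of a linear SVM on the masked feature subset."""
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(SVC(kernel="linear"),
                               X[:, mask.astype(bool)], y, cv=3).mean()

    # Toy GA over binary feature masks: tournament selection, uniform
    # crossover, bit-flip mutation.  Population and generations are kept tiny.
    pop = rng.integers(0, 2, size=(10, n_features))
    for _ in range(5):
        scores = np.array([fitness(ind) for ind in pop])
        def pick():
            a, b = rng.choice(len(pop), 2, replace=False)  # tournament of two
            return pop[a] if scores[a] >= scores[b] else pop[b]
        children = []
        for _ in range(len(pop)):
            p1, p2 = pick(), pick()
            child = np.where(rng.random(n_features) < 0.5, p1, p2)  # crossover
            flip = rng.random(n_features) < 0.02                    # mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)

    best = pop[np.argmax([fitness(ind) for ind in pop])].astype(bool)

    # RFE refinement: rank the GA-selected features with a linear SVM and
    # keep roughly the stronger half (an arbitrary choice for this sketch).
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=max(1, int(best.sum()) // 2))
    rfe.fit(X[:, best], y)
    final_idx = np.flatnonzero(best)[rfe.support_]
    print("GA kept", best.sum(), "features; GA+RFE kept", len(final_idx))
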
Article Preview

Introduction

Machine learning is a recent trend in computer science used to develop a variety of decision support systems in many fields. When such systems are applied to real-world problems, high-dimensional data is a common obstacle: it increases complexity and can reduce the accuracy of the system, a problem known as the curse of dimensionality (Rao et al., 2019). Feature selection methods reduce the number of features by selecting the relevant ones and removing the irrelevant ones. A reduced feature set increases the accuracy of the system and lowers its complexity, and removing redundant and noisy features also decreases computation time (Bhattacharya et al., 2020). Feature selection methods can be categorized into three types (one representative method from each is sketched in code after this list):

  • Filter method: The filter method screens features before they are passed to the classification algorithm. It ranks features using general characteristics of the data, and the ranking criteria are independent of the machine learning classifier. This makes it faster than the other approaches, and therefore the best choice for large datasets. However, features are ranked individually and interactions among them are not considered, so important features may be missed (Kumar & Rani, 2020).

  • Wrapper method: The wrapper method selects features by training the model multiple times on different subsets of features and keeping the best subset. Because interactions among features are considered, it tends to select the most important features, but it is computationally expensive. Selection is an integral part of learning, so the result depends on the classification method, and overfitting can occur with this method (Lamba et al., 2021).

  • Embedded method: In embedded methods, feature selection and training are performed together: features are selected while the model is being trained. Different feature subsets are created from the full set, and their efficiency is measured by training and evaluating the model on them. The limitation is that the selected features depend on the machine learning classifier used, so the feature set will change if the training algorithm changes (Chandrashekar & Sahin, 2014).
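
As a quick, concrete illustration of the three families, the sketch below applies one representative scikit-learn method from each: an ANOVA F-score filter, RFE with a linear SVM as the wrapper, and random-forest importances as the embedded method. The dataset and the choice of ten features are arbitrary assumptions for the example.

    # One representative method per family (illustrative choices, not from the paper).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFE, SelectFromModel, SelectKBest, f_classif
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)

    # Filter: rank features by a classifier-independent statistic (ANOVA F-score).
    filter_sel = SelectKBest(f_classif, k=10).fit(X, y)

    # Wrapper: repeatedly train a model (linear SVM) and drop the weakest features.
    wrapper_sel = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X, y)

    # Embedded: importances come out of training itself (random forest), keeping
    # at most ten features above the default importance threshold.
    embedded_sel = SelectFromModel(RandomForestClassifier(random_state=0),
                                   max_features=10).fit(X, y)

    for name, sel in (("filter", filter_sel), ("wrapper", wrapper_sel),
                      ("embedded", embedded_sel)):
        print(name, sel.get_support(indices=True))
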

Descriptions of some commonly used feature selection algorithms follow:

  • Pearson Correlation algorithm: This method belongs to the filter category. Correlation measures the dependency between two variables and takes values in the range -1 to 1. The Pearson correlation method selects the features that have the highest correlation with the target class (Wosiak & Zakrzewska, 2018).

  • Correlation between Features algorithm: This method also belongs to the filter category. It identifies features that are highly correlated with each other: if two features are highly dependent on each other, they carry redundant information and one of them can be removed (Ottom & Alshorman, 2019).

  • Feature Importance using Extra Trees classifier algorithm: This is an embedded method in which features are selected using multiple decision trees; the number of trees is passed as a parameter to the algorithm. An ensemble of randomized decision trees estimates the importance of each feature, and the less important features are removed (Hemphill et al., 2014).

  • Chi-Square algorithm: This method belongs to the filter category. A chi-square score, which measures the divergence between observed and expected values, is calculated between each feature and the target class, and the highest-scoring features are kept (a short scoring sketch follows the formula). The formula for the chi-square score is:

    $\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}$

    where $O_i$ is the observed value and $E_i$ is the expected value.
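To make the scoring concrete, the minimal sketch below computes per-feature chi-square scores and Pearson correlations with the target on a built-in dataset (an assumption for the example; note that scikit-learn's chi2 requires non-negative feature values).

    # Per-feature chi-square scores and Pearson correlations (illustrative).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, chi2

    X, y = load_breast_cancer(return_X_y=True)   # all features are non-negative

    # Chi-square: divergence between observed and expected values per feature.
    selector = SelectKBest(chi2, k=5).fit(X, y)
    print("top-5 chi-square features:", selector.get_support(indices=True))

    # Pearson correlation of each feature with the target class.
    pearson = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    print("top-5 |correlation| features:", np.argsort(-np.abs(pearson))[:5])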
