TY - JOUR
T1 - Effective Discriminative Feature Selection With Nontrivial Solution
AU - Tao, Hong
AU - Hou, Chenping
AU - Nie, Feiping
AU - Jiao, Yuanyuan
AU - Yi, Dongyun
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2016/4
Y1 - 2016/4
AB - Feature selection and feature transformation, the two main ways to reduce dimensionality, are often presented separately. In this paper, a feature selection method is proposed by combining the popular transformation-based dimensionality reduction method, linear discriminant analysis (LDA), with sparsity regularization. We impose row sparsity on the transformation matrix of LDA through ℓ2,1-norm regularization to achieve feature selection, and the resulting formulation simultaneously selects the most discriminative features and removes the redundant ones. The formulation is extended to the ℓ2,p-norm regularized case, which is likely to offer better sparsity when 0 < p < 1 and is thus a closer approximation to the feature selection problem. An efficient algorithm is developed to solve the ℓ2,p-norm-based optimization problem, and it is proved that the algorithm converges when 0 < p ≤ 2. Systematic experiments are conducted to understand the behavior of the proposed method. Promising experimental results on various types of real-world data sets demonstrate the effectiveness of our algorithm.
KW - Feature redundancy
KW - Feature selection
KW - linear discriminant analysis (LDA)
KW - ℓ2,p-norm minimization
UR - http://www.scopus.com/inward/record.url?scp=84929591297&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2015.2424721
DO - 10.1109/TNNLS.2015.2424721
M3 - Article
AN - SCOPUS:84929591297
SN - 2162-237X
VL - 27
SP - 796
EP - 808
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 4
M1 - 7108045
ER -