TY - GEN
T1 - Two dimensional large margin nearest neighbor for matrix classification
AU - Song, Kun
AU - Nie, Feiping
AU - Han, Junwei
PY - 2017
Y1 - 2017
N2 - Matrices are a common form of data encountered in a wide range of real applications. How to efficiently classify this kind of data is an important research topic. In this paper, we propose a novel distance metric learning method, named two-dimensional large margin nearest neighbor (2DLMNN), for improving the performance of the k-nearest neighbor (KNN) classifier in matrix classification. Unlike traditional metric learning algorithms, our method employs a left projection matrix U and a right projection matrix V to define a matrix-based Mahalanobis distance, which is used to construct an objective aimed at separating points in different classes by a large margin. Since the two projection matrices contain far fewer parameters than their vector-based counterpart, 2DLMNN reduces computational complexity and the risk of overfitting. We also introduce a framework for solving the proposed 2DLMNN, and analyze its convergence behavior and computational complexity. Finally, promising experimental results on several data sets demonstrate the effectiveness of our method.
UR - http://www.scopus.com/inward/record.url?scp=85031926777&partnerID=8YFLogxK
U2 - 10.24963/ijcai.2017/383
DO - 10.24963/ijcai.2017/383
M3 - Conference contribution
AN - SCOPUS:85031926777
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 2751
EP - 2757
BT - 26th International Joint Conference on Artificial Intelligence, IJCAI 2017
A2 - Sierra, Carles
PB - International Joint Conferences on Artificial Intelligence
T2 - 26th International Joint Conference on Artificial Intelligence, IJCAI 2017
Y2 - 19 August 2017 through 25 August 2017
ER -