TY - JOUR
T1 - Accelerating Flexible Manifold Embedding for Scalable Semi-Supervised Learning
AU - Qiu, Suo
AU - Nie, Feiping
AU - Xu, Xiangmin
AU - Qing, Chunmei
AU - Xu, Dong
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2019/9
Y1 - 2019/9
N2 - In this paper, we address the problem of large-scale graph-based semi-supervised learning for multi-class classification. Most existing scalable graph-based semi-supervised learning methods either rely on a hard linear constraint or cannot cope with unseen samples, which limits their applications and learning performance. To this end, we build upon our previous work, flexible manifold embedding (FME) [1], and propose two novel linear-complexity algorithms called fast flexible manifold embedding (f-FME) and reduced flexible manifold embedding (r-FME). Both of the proposed methods accelerate FME and inherit its advantages. Specifically, our methods address the hard linear constraint problem by jointly combining a regression residue term and a manifold smoothness term, which naturally provides a prediction model for handling unseen samples. To reduce computational costs, we exploit the underlying relationship between a small number of anchor points and all data points to construct the graph adjacency matrix, which leads to simplified closed-form solutions. The resultant f-FME and r-FME algorithms not only scale linearly in both time and space with respect to the number of training samples but also effectively utilize information from both labeled and unlabeled data. Experimental results show the effectiveness and scalability of the proposed methods.
AB - In this paper, we address the problem of large-scale graph-based semi-supervised learning for multi-class classification. Most existing scalable graph-based semi-supervised learning methods either rely on a hard linear constraint or cannot cope with unseen samples, which limits their applications and learning performance. To this end, we build upon our previous work, flexible manifold embedding (FME) [1], and propose two novel linear-complexity algorithms called fast flexible manifold embedding (f-FME) and reduced flexible manifold embedding (r-FME). Both of the proposed methods accelerate FME and inherit its advantages. Specifically, our methods address the hard linear constraint problem by jointly combining a regression residue term and a manifold smoothness term, which naturally provides a prediction model for handling unseen samples. To reduce computational costs, we exploit the underlying relationship between a small number of anchor points and all data points to construct the graph adjacency matrix, which leads to simplified closed-form solutions. The resultant f-FME and r-FME algorithms not only scale linearly in both time and space with respect to the number of training samples but also effectively utilize information from both labeled and unlabeled data. Experimental results show the effectiveness and scalability of the proposed methods.
KW - large-scale machine learning
KW - manifold embedding
KW - Semi-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85053288052&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2018.2869875
DO - 10.1109/TCSVT.2018.2869875
M3 - Article
AN - SCOPUS:85053288052
SN - 1051-8215
VL - 29
SP - 2786
EP - 2795
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 9
M1 - 8463514
ER -