TY - JOUR
T1 - Robust Supervised and Semisupervised Least Squares Regression Using ℓ2,p-Norm Minimization
AU - Wang, Jingyu
AU - Xie, Fangyuan
AU - Nie, Feiping
AU - Li, Xuelong
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2023/11/1
Y1 - 2023/11/1
AB - Least squares regression (LSR) is widely applied in statistics because it has a closed-form solution and can be used in supervised, semisupervised, and multiclass learning. However, LSR begins to fail, and its discriminative ability cannot be guaranteed, when the original data are corrupted by noise. In reality, noise is unavoidable and can greatly affect the error construction in LSR. To cope with this problem, a robust supervised LSR (RSLSR) is proposed to eliminate the effect of noise and outliers. The loss function adopts the $\ell_{2,p}$-norm ($0 < p \leq 2$) instead of the squared loss. In addition, a probability weight is assigned to each sample to determine whether the sample is a normal point. Its physical meaning is clear: if the point is normal, its weight is 1; otherwise, its weight is 0. To solve the concave problem effectively, an iterative algorithm is introduced in which additional weights are added to penalize normal samples with large errors. We also extend RSLSR to robust semisupervised LSR (RSSLSR) to fully utilize the limited labeled samples. Extensive classification experiments on corrupted data illustrate the robustness of the proposed methods.
KW - least squares regression (LSR)
KW - robust
KW - supervised and semisupervised classification
KW - ℓ2,p-norm
UR - http://www.scopus.com/inward/record.url?scp=85175357055&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2022.3150102
DO - 10.1109/TNNLS.2022.3150102
M3 - Article
C2 - 35196246
AN - SCOPUS:85175357055
SN - 2162-237X
VL - 34
SP - 8389
EP - 8403
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 11
ER -