TY - JOUR
T1 - Insights Into the Robustness of Minimum Error Entropy Estimation
AU - Chen, Badong
AU - Xing, Lei
AU - Xu, Bin
AU - Zhao, Haiquan
AU - Príncipe, José C.
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2018/3
Y1 - 2018/3
N2 - The minimum error entropy (MEE) is an important and highly effective optimization criterion in information theoretic learning (ITL). For regression problems, MEE aims at minimizing the entropy of the prediction error such that the estimated model preserves the information of the data-generating system as much as possible. In many real-world applications, the MEE estimator can significantly outperform the well-known minimum mean square error (MMSE) estimator and show strong robustness to noise, especially when the data are contaminated by non-Gaussian (multimodal, heavy-tailed, discrete-valued, and so on) noises. In this brief, we present some theoretical results on the robustness of MEE. For a one-parameter linear errors-in-variables (EIV) model and under some conditions, we derive a region that contains the MEE solution, which suggests that the MEE estimate can be very close to the true value of the unknown parameter even in the presence of arbitrarily large outliers in both input and output variables. The theoretical prediction is verified by an illustrative example.
AB - The minimum error entropy (MEE) is an important and highly effective optimization criterion in information theoretic learning (ITL). For regression problems, MEE aims at minimizing the entropy of the prediction error such that the estimated model preserves the information of the data-generating system as much as possible. In many real-world applications, the MEE estimator can significantly outperform the well-known minimum mean square error (MMSE) estimator and show strong robustness to noise, especially when the data are contaminated by non-Gaussian (multimodal, heavy-tailed, discrete-valued, and so on) noises. In this brief, we present some theoretical results on the robustness of MEE. For a one-parameter linear errors-in-variables (EIV) model and under some conditions, we derive a region that contains the MEE solution, which suggests that the MEE estimate can be very close to the true value of the unknown parameter even in the presence of arbitrarily large outliers in both input and output variables. The theoretical prediction is verified by an illustrative example.
KW - Estimation
KW - minimum error entropy (MEE)
KW - robustness
UR - http://www.scopus.com/inward/record.url?scp=85007348370&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2016.2636160
DO - 10.1109/TNNLS.2016.2636160
M3 - Article
C2 - 28026787
AN - SCOPUS:85007348370
SN - 2162-237X
VL - 29
SP - 731
EP - 737
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 3
ER -