Abstract
Due to their weak and nonstationary nature, electroencephalogram (EEG) signals exhibit significant individual differences. To align the data distributions of different subjects, transfer learning has shown promising performance in cross-subject EEG emotion recognition. However, most existing models sequentially learn the domain-invariant features and then estimate the target-domain label information. Such a two-stage strategy breaks the inner connection between the two processes, inevitably causing suboptimality. In this article, we propose a joint EEG feature transfer and semisupervised cross-subject emotion recognition model in which the shared subspace projection matrix and the target labels are jointly optimized toward the optimum. Extensive experiments are conducted on the SEED-IV and SEED datasets. The results show that the joint learning mode significantly enhances emotion recognition performance, and that analyzing the learned shared subspace quantitatively identifies the spatial-frequency activation patterns of the EEG frequency bands and brain regions critical to cross-subject emotion expression.
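The abstract contrasts the two-stage strategy with joint optimization of the shared projection and the target labels. The paper's actual objective is not given here, so the following is only a minimal alternating-minimization sketch of the general idea on toy data: each pass re-projects both domains, pseudo-labels the target by the nearest source-class mean, and refreshes the projection from the pooled (source + pseudo-labeled target) class means so the label estimates feed back into the subspace. All array shapes, the class count `k`, and the subspace dimension `d` are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 3, 2                          # assumed: emotion classes, subspace dim

# Toy stand-ins for source (labeled) and target (unlabeled) EEG features
Xs = rng.normal(size=(60, 10))       # source subject features
ys = rng.integers(0, k, size=60)     # source labels
Xt = rng.normal(loc=0.3, size=(50, 10))  # shifted target subject features

def class_means(X, y):
    """Per-class mean feature vectors, stacked into a (k, dim) matrix."""
    return np.vstack([X[y == c].mean(axis=0) for c in range(k)])

# Initialize the shared projection W with top principal directions of pooled data
X = np.vstack([Xs, Xt])
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
W = Vt[:d].T                         # columns span the shared subspace

for _ in range(5):
    # (1) project both domains into the current shared subspace
    Zs, Zt = Xs @ W, Xt @ W
    # (2) estimate target labels: nearest source-class mean in the subspace
    mu = class_means(Zs, ys)
    yt = np.argmin(((Zt[:, None, :] - mu[None]) ** 2).sum(-1), axis=1)
    # (3) refresh W from pooled class means (source + pseudo-labeled target),
    #     so the label estimates influence the next subspace
    M = class_means(np.vstack([Xs, Xt]), np.concatenate([ys, yt]))
    _, _, Vm = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
    W = Vm[:d].T
```

In a sequential pipeline, steps (1)-(2) would run once with a fixed `W`; the loop is what makes the subspace and the target labels inform each other, which is the coupling the article argues a two-stage strategy loses.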
Original language | English |
---|---|
Pages (from-to) | 8104-8115 |
Number of pages | 12 |
Journal | IEEE Transactions on Industrial Informatics |
Volume | 19 |
Issue number | 7 |
DOI | |
Publication status | Published - 1 Jul 2023 |