TY - JOUR
T1 - BCINetV1
T2 - Integrating Temporal and Spectral Focus Through a Novel Convolutional Attention Architecture for MI EEG Decoding
AU - Aziz, Muhammad Zulkifal
AU - Yu, Xiaojun
AU - Guo, Xinran
AU - He, Xinming
AU - Huang, Binwen
AU - Fan, Zeming
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/8
Y1 - 2025/8
N2 - Motor imagery (MI) electroencephalograms (EEGs) are pivotal potentials reflecting cortical activity during imagined motor actions, and are widely leveraged for brain-computer interface (BCI) system development. However, effective decoding of MI EEG signals is often hindered by flawed signal-processing methods, deep learning models that lack clinical interpretability, and highly inconsistent performance across datasets. We propose BCINetV1, a new framework for MI EEG decoding that addresses these challenges. BCINetV1 comprises three novel components: a temporal convolution-based attention block (T-CAB) and a spectral convolution-based attention block (S-CAB), both driven by a new convolutional self-attention (ConvSAT) mechanism that identifies key non-stationary temporal and spectral patterns in the EEG signals, and a squeeze-and-excitation block (SEB) that combines the identified tempo-spectral features for accurate, stable, and contextually aware MI EEG classification. Evaluated on four diverse datasets comprising 69 participants, BCINetV1 consistently achieved the highest average accuracies: 98.6% (Dataset 1), 96.6% (Dataset 2), 96.9% (Dataset 3), and 98.4% (Dataset 4). These results demonstrate that BCINetV1 is computationally efficient, extracts clinically meaningful markers, handles the non-stationarity of EEG data effectively, and offers a clear advantage over existing methods, marking a significant step toward practical BCI applications.
KW - biomedical signal processing
KW - computer-aided diagnosis
KW - electroencephalography (EEG)
KW - motor imagery
UR - https://www.scopus.com/pages/publications/105013201303
U2 - 10.3390/s25154657
DO - 10.3390/s25154657
M3 - Article
C2 - 40807821
AN - SCOPUS:105013201303
SN - 1424-8220
VL - 25
JO - Sensors
JF - Sensors
IS - 15
M1 - 4657
ER -