ClST: A Convolutional Transformer Framework for Automatic Modulation Recognition by Knowledge Distillation

Dongbin Hou, Lixin Li, Wensheng Lin, Junli Liang, Zhu Han

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

With the rapid development of deep learning (DL) in recent years, automatic modulation recognition (AMR) with DL has achieved high accuracy. However, insufficient training signal data in complicated channel environments and the large scale of DL models are critical factors that make DL methods difficult to deploy in practice. To address these problems, we propose a novel neural network named the convolution-linked signal transformer (ClST) and a novel knowledge distillation method named signal knowledge distillation (SKD). The ClST is built on three primary modifications: a hierarchical transformer structure incorporating convolutions, a novel parallel spatial-channel attention (PSCA) mechanism, and a novel convolution-transformer projection (CTP) block that leverages a convolutional projection. SKD is a knowledge distillation method that effectively reduces the parameters and complexity of neural networks. Using the SKD algorithm, we train two lightweight neural networks, KD-CNN and KD-MobileNet, to meet the demand for neural networks that can run on miniaturized devices. The simulation results demonstrate that the ClST outperforms advanced neural networks on all datasets. Moreover, both KD-CNN and KD-MobileNet achieve higher recognition accuracy with less network complexity, which makes them well suited for deploying AMR on miniaturized communication devices.
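The abstract only names the techniques; the paper's exact formulations are not reproduced here. As a rough, non-authoritative illustration of the general recipes these components build on, two minimal PyTorch sketches follow. All class names, hyperparameters (kernel size, temperature `T`, weight `alpha`), and design details are assumptions made for illustration, not the authors' implementation.

A CTP-style block can be pictured as self-attention whose linear query/key/value projections are replaced by convolutions over the signal sequence:

```python
import torch
import torch.nn as nn

class ConvProjectionAttention(nn.Module):
    """Hypothetical sketch of a convolutional-projection attention block:
    Q, K, V come from depthwise 1-D convolutions over the signal sequence
    instead of linear layers (details assumed, not from the paper)."""

    def __init__(self, dim: int, heads: int = 4, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Depthwise convolutions keep per-channel locality while projecting.
        self.q = nn.Conv1d(dim, dim, kernel_size, padding=pad, groups=dim)
        self.k = nn.Conv1d(dim, dim, kernel_size, padding=pad, groups=dim)
        self.v = nn.Conv1d(dim, dim, kernel_size, padding=pad, groups=dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); Conv1d expects (batch, dim, seq_len).
        xc = x.transpose(1, 2)
        q = self.q(xc).transpose(1, 2)
        k = self.k(xc).transpose(1, 2)
        v = self.v(xc).transpose(1, 2)
        out, _ = self.attn(q, k, v)
        return out
```

Likewise, SKD is described only as a distillation method that shrinks the network; the classic soft-label recipe it presumably builds on (a ClST teacher guiding a small KD-CNN or KD-MobileNet student) looks like this:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    """Standard Hinton-style KD loss; T and alpha are placeholder values,
    and the paper's SKD objective may differ."""
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 so gradient magnitudes match the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the true modulation classes.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```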

Original language: English
Pages (from-to): 8013-8028
Number of pages: 16
Journal: IEEE Transactions on Wireless Communications
Volume: 23
Issue number: 7
DOI
Publication status: Published - 2024
