Abstract
Underwater acoustic target recognition (UATR) is typically challenging due to the complex underwater environment and limited prior knowledge. Deep learning (DL)-based UATR methods have demonstrated their effectiveness by extracting more discriminative features from time–frequency (T–F) spectrograms. However, existing methods lack robustness and the ability to capture the time–frequency correlations inherent in the T–F representation. To this end, we first introduce the Wavelet Scattering Transform (WST) to obtain the T–F scattering coefficients of underwater acoustic signals. We then treat the scattering coefficients as multivariate time-series data and design a new Two-Stream Time–Frequency (newTSTF) transformer. This model simultaneously extracts temporal and frequency-related features from the scattering coefficients, enhancing recognition accuracy. Specifically, we introduce the Non-stationary encoder to recover the temporal features lost during normalization. Experimental results on real-world data demonstrate that our model achieves high accuracy in UATR.
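The abstract describes a two-stage pipeline: wavelet scattering coefficients are computed from the raw signal and then processed by a two-stream transformer, with one stream attending over the time axis and the other over the frequency (scattering-channel) axis. The following is a minimal, hypothetical sketch of such a pipeline, not the authors' implementation: it assumes Kymatio's `Scattering1D` for the WST and standard PyTorch transformer encoders; the class name `TwoStreamTF`, all hyperparameters, and the mean-pooling classification head are illustrative assumptions, and the paper's Non-stationary encoder (the de-normalization step) is not reproduced here.

```python
# Minimal sketch (assumptions noted above): WST coefficients as a multivariate
# time series, fed to two transformer-encoder streams (temporal and frequency).
import torch
import torch.nn as nn
from kymatio.torch import Scattering1D  # wavelet scattering transform


class TwoStreamTF(nn.Module):
    def __init__(self, n_channels, n_time, d_model=64, n_heads=4,
                 n_layers=2, n_classes=5):
        super().__init__()
        # Temporal stream: each time step is a token of length n_channels.
        self.time_proj = nn.Linear(n_channels, d_model)
        self.time_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        # Frequency stream: each scattering channel is a token of length n_time.
        self.freq_proj = nn.Linear(n_time, d_model)
        self.freq_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        t = self.time_enc(self.time_proj(x.transpose(1, 2))).mean(dim=1)
        f = self.freq_enc(self.freq_proj(x)).mean(dim=1)
        return self.head(torch.cat([t, f], dim=-1))


T = 2 ** 14                                    # samples per signal (assumed)
scattering = Scattering1D(J=8, shape=T, Q=8)   # T-F scattering coefficients
signal = torch.randn(4, T)                     # 4 dummy underwater recordings
coeffs = scattering(signal)                    # (batch, n_paths, n_time)
model = TwoStreamTF(n_channels=coeffs.shape[1], n_time=coeffs.shape[2])
logits = model(coeffs)                         # (batch, n_classes)
```

Splitting the two attention streams this way lets each encoder model correlations along one axis of the scattering representation before the pooled features are fused for classification.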
| Original language | English |
| --- | --- |
| Article number | 109891 |
| Journal | Signal Processing |
| Volume | 231 |
| DOI | |
| Publication status | Published - Jun 2025 |