TY - GEN
T1 - Spatial Global Context Attention for Convolutional Neural Networks
T2 - 14th IEEE International Conference on Signal Processing, Communications and Computing, ICSPCC 2024
AU - Yu, Yang
AU - Zhang, Yi
AU - Zhu, Xingxing
AU - Cheng, Zeyu
AU - Song, Zhe
AU - Tang, Chengkai
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Capturing global contextual information within an image can significantly enhance visual understanding. However, current attention methods model long-range dependencies between features by aggregating query-specific global context at each query position; these methods are inefficient and consume large amounts of memory and computation, making them less practical. To address this issue, we propose a simple, low-cost, and high-performance Spatial Global Context Attention (SGCA) module. This module aggregates query-independent global context to update the features at each query position, capturing spatial global contextual information efficiently and effectively; the improved feature representations in turn contribute to more precise classification results. The proposed SGCA is lightweight and flexible, making it suitable as an independent add-on component that can be applied to various convolutional neural networks (CNNs) to create a family of new architectures named SGCANet. Without bells and whistles, extensive experiments on CIFAR-100 and ImageNet-1K image recognition tasks demonstrate that our method significantly outperforms its counterparts in classification performance at a lower cost, achieving leading results.
AB - Capturing global contextual information within an image can significantly enhance visual understanding. However, current attention methods model long-range dependencies between features by aggregating query-specific global context at each query position; these methods are inefficient and consume large amounts of memory and computation, making them less practical. To address this issue, we propose a simple, low-cost, and high-performance Spatial Global Context Attention (SGCA) module. This module aggregates query-independent global context to update the features at each query position, capturing spatial global contextual information efficiently and effectively; the improved feature representations in turn contribute to more precise classification results. The proposed SGCA is lightweight and flexible, making it suitable as an independent add-on component that can be applied to various convolutional neural networks (CNNs) to create a family of new architectures named SGCANet. Without bells and whistles, extensive experiments on CIFAR-100 and ImageNet-1K image recognition tasks demonstrate that our method significantly outperforms its counterparts in classification performance at a lower cost, achieving leading results.
KW - global contextual information
KW - image recognition
KW - long-range dependencies
KW - spatial global context attention
UR - http://www.scopus.com/inward/record.url?scp=85214919971&partnerID=8YFLogxK
U2 - 10.1109/ICSPCC62635.2024.10770518
DO - 10.1109/ICSPCC62635.2024.10770518
M3 - Conference contribution
AN - SCOPUS:85214919971
T3 - 2024 IEEE International Conference on Signal Processing, Communications and Computing, ICSPCC 2024
BT - 2024 IEEE International Conference on Signal Processing, Communications and Computing, ICSPCC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 19 August 2024 through 22 August 2024
ER -