TY - GEN
T1 - VHLformer
T2 - 2025 International Conference on Cyber-Physical Social Intelligence, CPSI 2025
AU - Feng, Qi
AU - Wen, Yongming
AU - Xiong, Chuyi
AU - Li, Bo
AU - Gao, Xiaoguang
AU - Wan, Kaifang
AU - Wang, Cong
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - As an essential component of intelligent transportation, traffic flow prediction is crucial for decision-making and traffic system optimization. However, it is a challenging task: traffic flow is essentially a time series with distinct temporal characteristics, and because traffic nodes influence one another, the flow sequences of individual nodes exhibit complex dependencies. This paper primarily analyzes the temporal dimension of traffic flow, which can be decomposed into different frequencies. Existing models often fail to handle specific frequency bands, leading to bottlenecks in prediction performance. In recent years, Transformers have proven to possess powerful sequence modelling capabilities; however, the self-attention mechanism is insensitive to high-frequency information, and the loss of such information degrades prediction performance. To address these issues, we propose decomposing the multi-head attention mechanism so that different attention heads handle information of different frequencies in traffic flow, and name this mechanism VHL-Attention. Based on VHL-Attention, we establish VHLformer. Experiments on real-world datasets demonstrate that VHLformer achieves state-of-the-art performance.
AB - As an essential component of intelligent transportation, traffic flow prediction is crucial for decision-making and traffic system optimization. However, it is a challenging task: traffic flow is essentially a time series with distinct temporal characteristics, and because traffic nodes influence one another, the flow sequences of individual nodes exhibit complex dependencies. This paper primarily analyzes the temporal dimension of traffic flow, which can be decomposed into different frequencies. Existing models often fail to handle specific frequency bands, leading to bottlenecks in prediction performance. In recent years, Transformers have proven to possess powerful sequence modelling capabilities; however, the self-attention mechanism is insensitive to high-frequency information, and the loss of such information degrades prediction performance. To address these issues, we propose decomposing the multi-head attention mechanism so that different attention heads handle information of different frequencies in traffic flow, and name this mechanism VHL-Attention. Based on VHL-Attention, we establish VHLformer. Experiments on real-world datasets demonstrate that VHLformer achieves state-of-the-art performance.
KW - Attention Mechanism
KW - Frequency decomposition
KW - Traffic flow prediction
KW - Transformer
UR - https://www.scopus.com/pages/publications/105033352139
U2 - 10.1109/CPSI66656.2025.11343907
DO - 10.1109/CPSI66656.2025.11343907
M3 - Conference contribution
AN - SCOPUS:105033352139
T3 - 2025 International Conference on Cyber-Physical Social Intelligence, CPSI 2025
BT - 2025 International Conference on Cyber-Physical Social Intelligence, CPSI 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 7 November 2025 through 10 November 2025
ER -