TY - JOUR
T1 - Enhancing Low-Rank Adaptation with Recoverability-Based Reinforcement Pruning for Object Counting
AU - Guo, Haojie
AU - Gao, Junyu
AU - Yuan, Yuan
N1 - Publisher Copyright:
© 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2025/4/11
Y1 - 2025/4/11
AB - Object counting is crucial for understanding the distribution of objects across different scenarios. Recently, many object counting networks have grown increasingly complex to achieve only marginal improvements, leading to excessive time spent on model design. With the development of large models (LMs), various visual tasks can be accomplished by transferring pre-trained weights from LMs and fine-tuning them. However, the tens of millions of training samples used in pre-training mean that not all pre-trained parameters of LMs are necessary for a given downstream task. Moreover, if these unnecessary parameters are not removed, they may degrade performance on the transferred task. Motivated by this, this paper proposes an Enhancing low-Rank adaptation with Recoverability-based Reinforcement Pruning (E3RP) method to balance the complexity of the large model against the accuracy of the counting task. First, we design a new reward mechanism based on the feature similarity of the large model before and after global unstructured pruning of specific parameters. In addition, we design a Patch Query Flip Attention (PQFA) mechanism that aligns multi-scale features through bidirectional feature interaction. Finally, the parameters of the large model are pruned at a pruning rate determined autonomously by the reinforcement learning network, and the pruned model is fine-tuned for counting tasks with a simple decoding head. Extensive experiments on four cross-scenario datasets demonstrate that the proposed method removes redundant network parameters while maintaining performance, with a maximum parameter reduction of 63%.
UR - http://www.scopus.com/inward/record.url?scp=105004004645&partnerID=8YFLogxK
U2 - 10.1609/aaai.v39i3.32334
DO - 10.1609/aaai.v39i3.32334
M3 - Conference article
AN - SCOPUS:105004004645
SN - 2159-5399
VL - 39
SP - 3238
EP - 3246
JO - Proceedings of the AAAI Conference on Artificial Intelligence
JF - Proceedings of the AAAI Conference on Artificial Intelligence
IS - 3
T2 - 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025
Y2 - 25 February 2025 through 4 March 2025
ER -