TY - JOUR
T1 - Human-Centric Behavior Description in Videos
T2 - New Benchmark and Model
AU - Zhou, Lingru
AU - Gao, Yiqi
AU - Zhang, Manqing
AU - Wu, Peng
AU - Wang, Peng
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
PY - 2024
Y1 - 2024
N2 - In the domain of video surveillance, describing the behavior of each individual within a video is becoming increasingly essential, especially in complex scenarios with multiple people present. Per-individual behavior descriptions support more detailed situational analysis, enabling accurate assessment of and response to potential risks and helping to keep public places safe and orderly. Existing video-level captioning datasets cannot provide fine-grained descriptions of each individual’s specific behavior, and descriptions at the video level fail to offer an in-depth interpretation of individual behaviors, making it difficult to associate a described behavior with a specific person. To address this challenge, we construct a human-centric video surveillance captioning dataset that provides detailed descriptions of the dynamic behaviors of 7,820 individuals distributed across 1,012 videos. Specifically, we label several aspects of each person, such as location, clothing, and interactions with other elements in the scene. Based on this dataset, individuals can be linked to their respective behaviors, allowing further analysis of each person’s behavior in surveillance videos. Beyond the dataset, we propose a novel video captioning approach that describes individual behavior in detail at the person level, achieving state-of-the-art results.
AB - In the domain of video surveillance, describing the behavior of each individual within a video is becoming increasingly essential, especially in complex scenarios with multiple people present. Per-individual behavior descriptions support more detailed situational analysis, enabling accurate assessment of and response to potential risks and helping to keep public places safe and orderly. Existing video-level captioning datasets cannot provide fine-grained descriptions of each individual’s specific behavior, and descriptions at the video level fail to offer an in-depth interpretation of individual behaviors, making it difficult to associate a described behavior with a specific person. To address this challenge, we construct a human-centric video surveillance captioning dataset that provides detailed descriptions of the dynamic behaviors of 7,820 individuals distributed across 1,012 videos. Specifically, we label several aspects of each person, such as location, clothing, and interactions with other elements in the scene. Based on this dataset, individuals can be linked to their respective behaviors, allowing further analysis of each person’s behavior in surveillance videos. Beyond the dataset, we propose a novel video captioning approach that describes individual behavior in detail at the person level, achieving state-of-the-art results.
KW - Behavior description
KW - Deformable transformer
KW - Human-centric caption
KW - Video anomaly detection
UR - http://www.scopus.com/inward/record.url?scp=85197502075&partnerID=8YFLogxK
U2 - 10.1109/TMM.2024.3414263
DO - 10.1109/TMM.2024.3414263
M3 - Article
AN - SCOPUS:85197502075
SN - 1520-9210
VL - 26
SP - 10867
EP - 10878
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
ER -