TY - CONF
T1 - Video Frame Prediction from a Single Image and Events
AU - Zhu, Juanjuan
AU - Wan, Zhexiong
AU - Dai, Yuchao
N1 - Publisher Copyright:
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2024/3/25
Y1 - 2024/3/25
AB - Recently, the task of Video Frame Prediction (VFP), which extrapolates future video frames from previous ones, has made remarkable progress. However, the performance of existing VFP methods is still far from satisfactory because they rely on fixed-framerate video: 1) they have difficulty handling complex dynamic scenes; 2) they cannot predict future frames at flexible prediction time intervals. Event cameras record intensity changes asynchronously with very high temporal resolution, providing rich dynamic information about the observed scene. In this paper, we propose to predict video frames from a single image and the subsequent events, which not only handles complex dynamic scenes but also allows predicting future frames at flexible time intervals. First, we introduce a symmetrical cross-modal attention augmentation module to enhance the complementary information between images and events. Second, we propose to jointly perform optical flow estimation and frame generation by combining the motion information of the events with the semantic information of the image, then inpainting the holes produced by forward warping to obtain an ideal predicted frame. Based on these components, we build a lightweight pyramidal coarse-to-fine model that can predict a 720P frame within 25 ms. Extensive experiments show that our model significantly outperforms state-of-the-art frame-based and event-based VFP methods and has the fastest runtime. Code is available at https://npucvr.github.io/VFPSIE/.
UR - http://www.scopus.com/inward/record.url?scp=85189502955&partnerID=8YFLogxK
U2 - 10.1609/aaai.v38i7.28609
DO - 10.1609/aaai.v38i7.28609
M3 - Conference contribution
AN - SCOPUS:85189502955
T3 - Proceedings of the AAAI Conference on Artificial Intelligence
SP - 7748
EP - 7756
BT - Technical Tracks 14
A2 - Wooldridge, Michael
A2 - Dy, Jennifer
A2 - Natarajan, Sriraam
PB - Association for the Advancement of Artificial Intelligence
T2 - 38th AAAI Conference on Artificial Intelligence, AAAI 2024
Y2 - 20 February 2024 through 27 February 2024
ER -