Spatiotemporal fusion personality prediction based on visual information

Jia Xu, Weijian Tian, Guoyun Lv, Yangyu Fan

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Previous studies have demonstrated that deep learning algorithms can predict personality from two-dimensional image information, and the emergence of video provides further possibilities for personality prediction. Compared with static images, video offers richer information, but a video contains hundreds of frames, not all of which are useful, and processing all of them requires substantial computation. This paper applies video analysis algorithms to the task of personality prediction and proposes using an LSTM to fuse per-frame image features; experiments confirm that prediction is best when 16 frames are fused. The paper also builds an end-to-end video analysis network based on 3D-ConvNet and addresses overfitting through pre-training and data augmentation. Experiments show that the accuracy of personality prediction can be improved by using 3D-ConvNet to fuse the spatiotemporal information of videos.
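
The LSTM-based fusion step described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: the class name FrameLSTMFusion and the dimensions feat_dim=512 and hidden_dim=256 are illustrative assumptions, and the 16-frame input matches the fusion length reported above.

# Minimal sketch (assumed setup, not the paper's code): fuse per-frame
# image features with an LSTM and regress the five personality traits.
import torch
import torch.nn as nn


class FrameLSTMFusion(nn.Module):
    """Fuse a sequence of per-frame features and predict Big Five scores."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 256, num_traits: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, num_traits),
            nn.Sigmoid(),  # trait scores are commonly scaled to [0, 1]
        )

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_frames, feat_dim), e.g. 16 sampled frames
        _, (h_n, _) = self.lstm(frame_feats)  # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])             # (batch, num_traits)


if __name__ == "__main__":
    model = FrameLSTMFusion()
    feats = torch.randn(2, 16, 512)  # 2 clips, 16 frames each, 512-d features
    print(model(feats).shape)        # torch.Size([2, 5])

In this sketch the final LSTM hidden state summarizes the frame sequence; in practice the per-frame features would come from an image backbone, and the 3D-ConvNet branch described in the paper would instead operate directly on the stacked frames.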

Original language: English
Pages (from-to): 44227-44244
Number of pages: 18
Journal: Multimedia Tools and Applications
Volume: 82
Issue number: 28
DOIs
State: Published - Nov 2023

Keywords

  • Personality Prediction
  • Spatiotemporal Fusion
  • Visual Information
