A Survey of Multimodal Learning: Methods, Applications, and Future

Yuan Yuan, Zhaojian Li, Bin Zhao

Research output: Contribution to journal › Article › peer-review


Abstract

The multimodal interplay of the five fundamental senses (sight, hearing, smell, taste, and touch) provides humans with superior environmental perception and learning abilities. Inspired by the human perceptual system, multimodal machine learning seeks to incorporate different forms of input, such as images, audio, and text, and to uncover their underlying connections through joint modeling. As one of the future development directions of artificial intelligence, multimodal machine learning warrants a systematic summary of its progress. In this article, we start from the forms of multimodal combination and provide a comprehensive survey of this emerging subject, covering representative research approaches, the most recent advances, and their applications. Specifically, we analyze the relationships between different modalities in detail and sort out the key issues in multimodal research according to application scenarios. In addition, we thoroughly review the state-of-the-art methods and datasets covered in multimodal learning research. We then identify the substantial challenges and promising directions in this field. Finally, given the comprehensive nature of this survey, both modality-specific and task-specific researchers can benefit from it and advance the field.

Original language: English
Article number: 167
Journal: ACM Computing Surveys
Volume: 57
Issue number: 7
DOIs
State: Published - 20 Feb 2025

Keywords

  • multimodal
  • audio-visual learning
  • cross-modal
  • depth-visual
  • text-visual
  • touch-visual
