Gestalt Principles Emerge When Learning Universal Sound Source Separation

Han Li, Kean Chen, Bernhard U. Seeber

Research output: Contribution to journal › Article › peer-review


Abstract

Sound source separation is an essential aspect of auditory scene analysis and remains a pressing challenge for machine hearing. In this paper, a fully convolutional time-domain audio separation network (ConvTasNet) is trained for universal two-source separation of mixtures of speech, environmental sounds, and music. Beyond the network's separation performance, our main concern is the underlying separation mechanisms. Through a series of classic auditory segregation experiments, we systematically explore the principles the network learns for simultaneous and sequential organization. The results show that, without any prior knowledge of auditory scene analysis being imparted to the network, it spontaneously learns separation mechanisms from raw waveforms that are similar to those which have developed over many years in humans. The Gestalt principles for separation in the human auditory system prove effective in our network: harmonicity, onset synchrony and common fate (coherent modulation in amplitude and frequency), proximity, continuity, and similarity. A universal sound source separation network following Gestalt principles is not limited to specific sources and can, like human hearing, be applied to a wide range of acoustic situations, providing new directions for solving the problem of auditory scene analysis.
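
For readers who want to set up a comparable experiment, the sketch below shows how a two-source ConvTasNet can be instantiated and trained with the permutation-invariant, scale-invariant SDR objective that is standard for this architecture. It is a minimal sketch, assuming the open-source Asteroid toolkit (https://github.com/asteroid-team/asteroid); the authors' exact code, data, and training configuration are not part of this record.

    # Minimal sketch, assuming the Asteroid toolkit's ConvTasNet implementation;
    # this is not the authors' own code or configuration.
    import torch
    from asteroid.models import ConvTasNet
    from asteroid.losses import PITLossWrapper, pairwise_neg_sisdr

    model = ConvTasNet(n_src=2)  # universal two-source separation

    # Permutation-invariant training with negative SI-SDR, the usual
    # objective for time-domain separation networks such as ConvTasNet.
    loss_func = PITLossWrapper(pairwise_neg_sisdr, pit_from="pw_mtx")

    mixture = torch.randn(4, 8000)     # placeholder batch: 4 mixtures, 1 s at 8 kHz
    targets = torch.randn(4, 2, 8000)  # placeholder ground-truth source pairs

    est_sources = model(mixture)            # -> (batch, n_src, time)
    loss = loss_func(est_sources, targets)  # scalar PIT loss
    loss.backward()                         # an optimizer step would follow

In actual training, the placeholder tensors would be replaced by mixtures of speech, environmental sounds, and music drawn from a training corpus.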

Original language: English
Pages (from-to): 1877-1891
Number of pages: 15
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 30
DOIs
State: Published - 2022

Keywords

  • Gestalt principles
  • separation mechanisms
  • universal source separation

