TY - JOUR
T1 - Contrastive Conditional Latent Diffusion for Audio-Visual Segmentation
AU - Mao, Yuxin
AU - Zhang, Jing
AU - Xiang, Mochu
AU - Lv, Yunqiu
AU - Li, Dong
AU - Zhong, Yiran
AU - Dai, Yuchao
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Audio-visual Segmentation (AVS) is conceptualized as a conditional generation task, where audio is considered as the conditional variable for segmenting the sound producer(s). In this case, audio should be extensively explored to maximize its contribution for the final segmentation task. We propose a contrastive conditional latent diffusion model for audio-visual segmentation (AVS) to thoroughly investigate the impact of audio, where the correlation between audio and the final segmentation map is modeled to guarantee the strong correlation between them. To achieve semantic-correlated representation learning, our framework incorporates a latent diffusion model. The diffusion model learns the conditional generation process of the ground-truth segmentation map, resulting in ground-truth aware inference during the denoising process at the test stage. As our model is conditional, it is vital to ensure that the conditional variable contributes to the model output. We thus extensively model the contribution of the audio signal by minimizing the density ratio between the conditional probability of the multimodal data, e.g. conditioned on the audio-visual data, and that of the unimodal data, e.g. conditioned on the audio data only. In this way, our latent diffusion model via density ratio optimization explicitly maximizes the contribution of audio for AVS, which can then be achieved with contrastive learning as a constraint, where the diffusion part serves as the main objective to achieve maximum likelihood estimation, and the density ratio optimization part imposes the constraint. By adopting this latent diffusion model via contrastive learning, we effectively enhance the contribution of audio for AVS. The effectiveness of our solution is validated through experimental results on the benchmark dataset.
AB - Audio-visual Segmentation (AVS) is conceptualized as a conditional generation task, where audio is considered as the conditional variable for segmenting the sound producer(s). In this case, audio should be extensively explored to maximize its contribution for the final segmentation task. We propose a contrastive conditional latent diffusion model for audio-visual segmentation (AVS) to thoroughly investigate the impact of audio, where the correlation between audio and the final segmentation map is modeled to guarantee the strong correlation between them. To achieve semantic-correlated representation learning, our framework incorporates a latent diffusion model. The diffusion model learns the conditional generation process of the ground-truth segmentation map, resulting in ground-truth aware inference during the denoising process at the test stage. As our model is conditional, it is vital to ensure that the conditional variable contributes to the model output. We thus extensively model the contribution of the audio signal by minimizing the density ratio between the conditional probability of the multimodal data, e.g. conditioned on the audio-visual data, and that of the unimodal data, e.g. conditioned on the audio data only. In this way, our latent diffusion model via density ratio optimization explicitly maximizes the contribution of audio for AVS, which can then be achieved with contrastive learning as a constraint, where the diffusion part serves as the main objective to achieve maximum likelihood estimation, and the density ratio optimization part imposes the constraint. By adopting this latent diffusion model via contrastive learning, we effectively enhance the contribution of audio for AVS. The effectiveness of our solution is validated through experimental results on the benchmark dataset.
KW - Audio-visual segmentation
KW - conditional latent diffusion model
KW - contrastive learning
UR - http://www.scopus.com/inward/record.url?scp=105009380053&partnerID=8YFLogxK
U2 - 10.1109/TIP.2025.3580269
DO - 10.1109/TIP.2025.3580269
M3 - Article
AN - SCOPUS:105009380053
SN - 1057-7149
VL - 34
SP - 4108
EP - 4119
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -