APGVAE: Adaptive disentangled representation learning with the graph-based structure information

Qiao Ke, Xinhui Jing, Marcin Woźniak, Shuang Xu, Yunji Liang, Jiangbin Zheng

Research output: Contribution to journal › Article › peer-review

40 Citations (Scopus)

Abstract

Neural networks learn task-oriented high-level representations in an end-to-end manner by stacking multiple layers. Generative models have developed rapidly with the emergence of deep neural networks, but they still suffer from insufficient authenticity of the generated images and a lack of diversity, consistency, and interpretability in the generation process. Disentangled representation is an effective way to learn high-level feature representations and to make deep neural networks interpretable. We propose a general disentangled representation learning network for image generation that uses a variational autoencoder as its basic framework. A graph-based structure over the priors is embedded in the last module of the deep encoder network to build feature spaces for class, task-oriented, and task-unrelated information, respectively. Meanwhile, the priors are adaptively modified according to the task relevance of the generated image. Semi-supervised learning is further incorporated into the disentangled representation framework to reduce the labeling requirement and to extend the majority of the feature space under the task-unrelated feature assumption. Experimental results show that the proposed method is effective for various types of images and has good potential for further research and development.
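Illustration (not the authors' APGVAE implementation): the abstract describes a VAE whose latent space is partitioned into class, task-oriented, and task-unrelated blocks. The PyTorch sketch below shows only that partitioning on top of a standard VAE; it omits the adaptive graph-based priors and the semi-supervised objective, and all module names and dimensions are assumptions.

# Minimal sketch of a VAE with a latent code split into class / task-oriented /
# task-unrelated blocks. Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartitionedVAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, z_class=8, z_task=8, z_free=16):
        super().__init__()
        self.z_dims = (z_class, z_task, z_free)  # class / task-oriented / task-unrelated
        z_total = sum(self.z_dims)
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_total)
        self.logvar = nn.Linear(hidden, z_total)
        self.dec = nn.Sequential(nn.Linear(z_total, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = self.reparameterize(mu, logvar)
        # Split the latent code into the three disentangled blocks.
        z_class, z_task, z_free = torch.split(z, self.z_dims, dim=-1)
        x_hat = self.dec(z)
        return x_hat, mu, logvar, (z_class, z_task, z_free)

def vae_loss(x, x_hat, mu, logvar):
    # Standard ELBO terms: reconstruction error + KL divergence to a N(0, I) prior.
    # APGVAE instead uses adaptive graph-structured priors, which are not modeled here.
    rec = F.mse_loss(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld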

Original language: English
Article number: 119903
Journal: Information Sciences
Volume: 657
DOI
Publication status: Published - Feb 2024
