Learning causal representations based on a GAE embedded autoencoder

Kuang Zhou, Ming Jiang, Bogdan Gabrys, Yong Xu

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Traditional machine-learning approaches face limitations when confronted with insufficient data. Transfer learning addresses this by leveraging knowledge from closely related domains. The key to transfer learning is to find a transferable feature representation that enhances cross-domain classification models. However, in some scenarios, features correlated with samples in the source domain may not be relevant to those in the target domain. Causal inference enables us to uncover the underlying patterns and mechanisms within the data, mitigating the impact of confounding factors. Nevertheless, most existing causal inference algorithms have limitations when applied to high-dimensional datasets with nonlinear causal relationships. In this work, a new causal representation method based on a Graph autoencoder embedded AutoEncoder, named GeAE, is introduced to learn invariant representations across domains. The proposed approach employs a causal structure learning module, similar to a graph autoencoder, to account for nonlinear causal relationships present in the data. Moreover, the cross-entropy loss, the causal structure learning loss, and the reconstruction loss are incorporated into the objective function of a unified autoencoder. This method allows for the handling of high-dimensional data and can provide effective representations for cross-domain classification tasks. Experimental results on generated and real-world datasets demonstrate the effectiveness of GeAE compared with state-of-the-art methods.
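The abstract describes a unified objective that combines classification, causal structure learning, and reconstruction terms around an autoencoder with an embedded GAE-style causal module. The sketch below is a minimal, hypothetical illustration of that kind of objective, not the authors' implementation: all module names, layer sizes, and loss weights (lam_*) are assumptions, and the acyclicity penalty shown is a commonly used NOTEARS-style term rather than a detail stated in the abstract.

```python
# Minimal sketch (assumptions only) of a GeAE-like objective:
# an autoencoder whose latent code passes through a graph-autoencoder-style
# causal structure module, trained with reconstruction, causal-structure,
# and cross-entropy losses combined in one objective.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GeAESketch(nn.Module):
    def __init__(self, in_dim, latent_dim, n_classes):
        super().__init__()
        # Standard autoencoder encoder/decoder.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))
        # Learnable adjacency among latent variables (GAE-style causal module):
        # each latent variable is predicted from the others through W.
        self.W = nn.Parameter(torch.zeros(latent_dim, latent_dim))
        self.causal_mlp = nn.Sequential(nn.Linear(latent_dim, latent_dim),
                                        nn.ReLU(),
                                        nn.Linear(latent_dim, latent_dim))
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        # Nonlinear causal reconstruction of z from its weighted parents.
        z_causal = self.causal_mlp(z @ self.W)
        logits = self.classifier(z_causal)
        return x_hat, z, z_causal, logits

    def acyclicity(self):
        # NOTEARS-style penalty h(W) = tr(exp(W o W)) - d, zero iff W is a DAG.
        d = self.W.shape[0]
        return torch.matrix_exp(self.W * self.W).trace() - d


def geae_loss(model, x, y, lam_rec=1.0, lam_causal=1.0, lam_acyc=1.0):
    x_hat, z, z_causal, logits = model(x)
    rec = F.mse_loss(x_hat, x)        # autoencoder reconstruction loss
    causal = F.mse_loss(z_causal, z)  # causal structure learning loss
    acyc = model.acyclicity()         # keep the learned latent graph acyclic
    ce = F.cross_entropy(logits, y)   # cross-entropy classification loss
    return ce + lam_rec * rec + lam_causal * causal + lam_acyc * acyc
```

Under these assumptions, a training step would simply backpropagate geae_loss through all modules at once, which is what "incorporated into the objective function of a unified autoencoder" suggests.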
