X-Fake: Juggling Utility Evaluation and Explanation of Simulated SAR Images

  • Northwestern Polytechnical University, Xi'an
  • Fudan University
  • Chongqing University of Posts and Telecommunications

Research output: Contribution to journal › Article › peer-review

Abstract

Synthetic aperture radar (SAR) image simulation has attracted much attention due to its great potential to supplement the scarce training data for deep learning algorithms. Consequently, evaluating the quality of simulated SAR images is crucial for practical applications. The current literature primarily evaluates them with image quality assessment (IQA) techniques that rely on human observers' perceptions. However, because of the unique imaging mechanism of SAR, these techniques may produce evaluation results that are not entirely valid. The distribution inconsistency between real and simulated data is the main obstacle that limits the utility of simulated SAR images. To this end, we propose, for the first time, a trustworthy utility evaluation framework with counterfactual explanation for simulated SAR images, denoted X-Fake. It unifies a probabilistic evaluator and a causal explainer to achieve a trustworthy utility assessment. We construct the evaluator using a probabilistic Bayesian deep model to learn the posterior distribution conditioned on real data; quantitatively, the predicted uncertainty of simulated data reflects the distribution discrepancy. We build the causal explainer with an introspective variational auto-encoder (IntroVAE) to generate high-resolution counterfactuals. The latent code of IntroVAE is then optimized with evaluation indicators and prior information to generate the counterfactual explanation, explicitly revealing the inauthentic details of the simulated data. The proposed framework is validated on four simulated SAR image datasets obtained from electromagnetic models and generative artificial intelligence approaches. The results demonstrate that the proposed X-Fake framework outperforms other IQA methods in terms of utility. Furthermore, the results illustrate that the generated counterfactual explanations are trustworthy and can further improve the data utility in applications.
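The abstract's core evaluation idea is that a Bayesian deep model's predictive uncertainty on a simulated image signals how far that image lies from the real-data distribution. The paper's own evaluator and its exact uncertainty measure are not reproduced here; the following is only a minimal NumPy sketch of the general principle, using predictive entropy over hypothetical posterior samples (e.g., MC-dropout passes or ensemble members) as the uncertainty score. All numbers below are illustrative, not the paper's data.

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the averaged predictive distribution.

    member_probs: (M, K) array of softmax outputs from M stochastic
    forward passes (MC dropout) or M ensemble members over K classes.
    Higher entropy = higher predictive uncertainty.
    """
    mean_p = member_probs.mean(axis=0)
    return float(-(mean_p * np.log(mean_p + 1e-12)).sum())

# Hypothetical posterior samples for an in-distribution (real-like) SAR chip:
# all passes agree on class 0 with high confidence.
probs_real_like = np.array([[0.97, 0.02, 0.01],
                            [0.95, 0.03, 0.02],
                            [0.96, 0.03, 0.01]])

# Hypothetical samples for a simulated chip far from the real-data manifold:
# the passes disagree, so the averaged prediction is diffuse.
probs_simulated = np.array([[0.70, 0.20, 0.10],
                            [0.15, 0.75, 0.10],
                            [0.25, 0.15, 0.60]])

u_real = predictive_entropy(probs_real_like)
u_sim = predictive_entropy(probs_simulated)
print(u_real < u_sim)  # → True: the simulated chip scores as less "real"
```

In this reading, a low-uncertainty simulated image is one the real-data-trained evaluator treats as in-distribution, i.e., high utility; the counterfactual explainer then targets the regions responsible for high-uncertainty cases.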

Original language: English
Pages (from-to): 7830-7844
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 34
DOIs
State: Published - 2025

Keywords

  • Bayesian deep learning
  • SAR image generation
  • causal counterfactual
  • explainable artificial intelligence (XAI)
  • image quality assessment (IQA)
