Spatially regularized SparseCEM for target detection in hyperspectral images

Xiaoli Yang, Zeng Li, Jie Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

Constrained energy minimization (CEM) is a popular method for target detection in hyperspectral images. Its variant SparseCEM adds a sparsity regularization term to promote sparse detection output. However, these approaches do not consider the spatial correlation of hyperspectral pixels, and target detection can further benefit from exploiting spatial information. In this paper, we propose a novel constrained detection algorithm, referred to as Spatial-SparseCEM, that simultaneously enforces sparsity and piecewise continuity of the detection output via appropriate regularizations. The formulated problem is solved efficiently using the alternating direction method of multipliers (ADMM). We illustrate the enhanced performance of the Spatial-SparseCEM algorithm on both synthetic and real hyperspectral data.
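For context, the baseline CEM detector referenced in the abstract has a well-known closed-form filter w = R⁻¹d / (dᵀR⁻¹d), where R is the sample correlation matrix of the pixels and d is the target spectral signature; the constraint wᵀd = 1 holds by construction. A minimal NumPy sketch of this baseline (variable names and the regularization constant are illustrative, not from the paper):

```python
import numpy as np

def cem_detector(X, d, eps=1e-6):
    """Baseline constrained energy minimization (CEM) detector.

    X : (num_pixels, num_bands) array of hyperspectral pixels
    d : (num_bands,) target spectral signature
    Returns per-pixel detection scores y_i = w^T x_i, with w^T d = 1.
    """
    n, b = X.shape
    # Sample correlation matrix, lightly regularized for numerical stability
    R = X.T @ X / n + eps * np.eye(b)
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)  # closed-form CEM filter, satisfies w^T d = 1
    return X @ w               # detection score for every pixel
```

The Spatial-SparseCEM method of the paper augments this quadratic objective with ℓ1 (sparsity) and spatial-continuity penalties on the output, which removes the closed form and is why an iterative ADMM solver is needed.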

Original language: English
Title of host publication: 2018 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2765-2768
Number of pages: 4
ISBN (Electronic): 9781538671504
DOIs
State: Published - 31 Oct 2018
Event: 38th Annual IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018 - Valencia, Spain
Duration: 22 Jul 2018 - 27 Jul 2018

Publication series

Name: International Geoscience and Remote Sensing Symposium (IGARSS)
Volume: 2018-July

Conference

Conference: 38th Annual IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018
Country/Territory: Spain
City: Valencia
Period: 22/07/18 - 27/07/18

Keywords

  • ADMM
  • Constrained energy minimization
  • Hyperspectral image
  • Spatially-regularized detection
  • Target detection
  • ℓ1-norm regularization
