Adaptive noise estimation from highly textured hyperspectral images

Peng Fu, Changyang Li, Yong Xia, Zexuan Ji, Quansen Sun, Weidong Cai, David Dagan Feng

Research output: Contribution to journal › Article › peer-review


Abstract

Accurate estimation of noise in hyperspectral (HS) images is important for visualization and subsequent image processing. Conventional algorithms often assume the noise to be either purely additive or a mixture of a signal-dependent (SD) component and a signal-independent (SI) component. This leads to application-specific algorithm design and limited applicability across noise types. Moreover, because highly textured HS images contain abundant edges and textures, existing algorithms may fail to estimate the noise accurately. To address these challenges, we propose a noise estimation algorithm that can adaptively estimate both purely additive noise and mixed noise in HS images of varying complexity. First, homogeneous areas are automatically detected using a new region-growing-based approach, in which the similarity of two pixels is computed with a robust spectral metric. Then, the mixed noise variance of each homogeneous region is estimated by multiple linear regression. Finally, the intensities of the SD and SI noise are obtained with a modified scatter plot approach. We quantitatively evaluated our algorithm on synthetic HS data. Compared with benchmark and state-of-the-art algorithms, the proposed algorithm is more accurate and robust on images of different complexities. Experimental results on real Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images further demonstrate the superiority of our algorithm.
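The final scatter-plot step rests on the mixed noise model, in which the local variance of a homogeneous region grows linearly with its mean intensity: var ≈ σ²_SD · mean + σ²_SI. The sketch below is a simplified stand-in for the paper's modified scatter-plot approach (the function name, the plain least-squares fit, and the synthetic region statistics are all illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def estimate_mixed_noise(region_means, region_vars):
    """Fit the mixed-noise model  var = sd2 * mean + si2  across
    homogeneous regions by ordinary least squares (a simplified
    stand-in for the paper's modified scatter-plot step).

    Returns (sd2, si2): the signal-dependent slope and the
    signal-independent variance floor, each clipped at zero.
    """
    sd2, si2 = np.polyfit(region_means, region_vars, deg=1)
    return max(sd2, 0.0), max(si2, 0.0)

# Synthetic check: regions following a known SD slope of 0.5
# and an SI variance floor of 4.0 (illustrative values only).
means = np.linspace(10.0, 200.0, 50)
true_vars = 0.5 * means + 4.0
sd2, si2 = estimate_mixed_noise(means, true_vars)
print(round(sd2, 3), round(si2, 3))  # → 0.5 4.0
```

With real per-region (mean, variance) pairs the fit would be noisy, which is why the paper uses a modified scatter-plot procedure rather than a plain least-squares line; the purely additive case corresponds to a slope near zero.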

Original language: English
Pages (from-to): 7059-7071
Number of pages: 13
Journal: Applied Optics
Volume: 53
Issue number: 30
DOIs
State: Published - 20 Oct 2014
