A novel weight fusion approach for multi-focus image based on NSST transform domain

Feng Wang, Yongmei Cheng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

In this paper, a novel weighted multi-focus image fusion method based on the non-subsampled shearlet transform (NSST) is proposed. First, the NSST is employed to decompose the source images into low-frequency and high-frequency sub-band coefficients at different scales and directions. Second, the local least root mean square error (RMSE) is used as a new fusion weight for the low-frequency sub-band coefficients. To effectively preserve the edges of the image, a novel fusion rule based on an edge-preserving weight is proposed for fusing the high-frequency sub-band coefficients. Finally, the fused image is obtained by applying the inverse NSST to the fused coefficients. Experimental results show that the fused image not only contains rich details but also preserves notable structural features. Compared with current state-of-the-art fusion methods, the proposed approach produces better visual effects and quality assessment scores.
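The abstract outlines a three-step pipeline (NSST decomposition, weighted fusion of low- and high-frequency sub-bands, inverse NSST). The sketch below illustrates that pipeline under stated assumptions: no standard Python NSST package is assumed, so the decomposition and reconstruction appear as placeholder functions, and the local-RMSE and edge-preserving weights are plausible readings of the abstract rather than the authors' exact rules.

```python
# Minimal sketch of the weighted NSST fusion pipeline described in the abstract.
# nsst_decompose / nsst_reconstruct are placeholders; the weight rules are
# hedged interpretations of "local least RMSE" and "edge-preserving" weights.
import numpy as np
from scipy.ndimage import uniform_filter, sobel


def nsst_decompose(img, levels=3):
    """Placeholder for the NSST: returns (lowpass, [highpass sub-bands])."""
    raise NotImplementedError("plug in an NSST implementation here")


def nsst_reconstruct(low, highs):
    """Placeholder for the inverse NSST."""
    raise NotImplementedError("plug in an NSST implementation here")


def local_rmse_weight(low_a, low_b, win=7):
    """Weight for image A's low-frequency band: the band with the smaller
    local RMSE (against the mean band) receives the larger weight."""
    mean = 0.5 * (low_a + low_b)
    err_a = np.sqrt(uniform_filter((low_a - mean) ** 2, win))
    err_b = np.sqrt(uniform_filter((low_b - mean) ** 2, win))
    return err_b / (err_a + err_b + 1e-12)


def edge_energy(band, win=7):
    """Local gradient energy used as an edge-preserving weight."""
    gx, gy = sobel(band, axis=0), sobel(band, axis=1)
    return uniform_filter(gx ** 2 + gy ** 2, win)


def fuse(img_a, img_b, levels=3):
    low_a, highs_a = nsst_decompose(img_a, levels)
    low_b, highs_b = nsst_decompose(img_b, levels)

    # Low-frequency: weighted average driven by the local-RMSE weight.
    w = local_rmse_weight(low_a, low_b)
    low_f = w * low_a + (1.0 - w) * low_b

    # High-frequency: per-sub-band, edge-preserving weighted combination.
    highs_f = []
    for ha, hb in zip(highs_a, highs_b):
        ea, eb = edge_energy(ha), edge_energy(hb)
        wa = ea / (ea + eb + 1e-12)
        highs_f.append(wa * ha + (1.0 - wa) * hb)

    # Inverse NSST on the fused coefficients yields the fused image.
    return nsst_reconstruct(low_f, highs_f)
```

The smoothed (window-averaged) weights are a design choice to avoid blocky transitions between the two source images; the paper's actual weight formulas may differ.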

Original language: English
Title of host publication: CGNCC 2016 - 2016 IEEE Chinese Guidance, Navigation and Control Conference
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2250-2254
Number of pages: 5
ISBN (Electronic): 9781467383189
DOIs
State: Published - 20 Jan 2017
Event: 7th IEEE Chinese Guidance, Navigation and Control Conference, CGNCC 2016 - Nanjing, Jiangsu, China
Duration: 12 Aug 2016 - 14 Aug 2016

Publication series

Name: CGNCC 2016 - 2016 IEEE Chinese Guidance, Navigation and Control Conference

Conference

Conference: 7th IEEE Chinese Guidance, Navigation and Control Conference, CGNCC 2016
Country/Territory: China
City: Nanjing, Jiangsu
Period: 12/08/16 - 14/08/16
