TY - JOUR
T1 - Hierarchical sparse coding from a Bayesian perspective
AU - Zhang, Yupei
AU - Xiang, Ming
AU - Yang, Bo
N1 - Publisher Copyright:
© 2017 Elsevier Ltd
PY - 2018/1/10
Y1 - 2018/1/10
N2 - We consider the problem of hierarchical sparse coding, where not only are a few groups of atoms active at a time, but each group also exhibits internal sparsity. Current approaches typically achieve between-group sparsity with the ℓ1 penalty, so that many groups have small coefficients rather than being exactly zeroed out. These trivial groups make the model prone to overfitting noise and thereby harm the interpretability of the sparse representation. To this end, we reformulate the hierarchical sparse model from a Bayesian perspective using two priors: a spike-and-slab prior and a Laplacian prior. The former explicitly induces between-group sparsity, while the latter both induces within-group sparsity and yields a small reconstruction error. We propose a nested prior that integrates the two to produce hierarchical sparsity. The resulting optimization problem can be solved to convergence in a few iterations via the proposed nested algorithm, which mirrors the structure of the nested prior. In experiments, we evaluate our method on signal recovery, image inpainting, and sparse-representation-based classification, using simulated signals and two publicly available image databases. The results show that, compared with popular sparse coding methods, the proposed method yields more concise representations and more reliable interpretation of the data.
AB - We consider the problem of hierarchical sparse coding, where not only are a few groups of atoms active at a time, but each group also exhibits internal sparsity. Current approaches typically achieve between-group sparsity with the ℓ1 penalty, so that many groups have small coefficients rather than being exactly zeroed out. These trivial groups make the model prone to overfitting noise and thereby harm the interpretability of the sparse representation. To this end, we reformulate the hierarchical sparse model from a Bayesian perspective using two priors: a spike-and-slab prior and a Laplacian prior. The former explicitly induces between-group sparsity, while the latter both induces within-group sparsity and yields a small reconstruction error. We propose a nested prior that integrates the two to produce hierarchical sparsity. The resulting optimization problem can be solved to convergence in a few iterations via the proposed nested algorithm, which mirrors the structure of the nested prior. In experiments, we evaluate our method on signal recovery, image inpainting, and sparse-representation-based classification, using simulated signals and two publicly available image databases. The results show that, compared with popular sparse coding methods, the proposed method yields more concise representations and more reliable interpretation of the data.
KW - Bayesian framework
KW - Hierarchical sparse modeling
KW - Laplacian prior
KW - Sparse representation based classification
KW - Spike-and-slab prior
UR - http://www.scopus.com/inward/record.url?scp=85025467371&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2017.06.076
DO - 10.1016/j.neucom.2017.06.076
M3 - Article
AN - SCOPUS:85025467371
SN - 0925-2312
VL - 272
SP - 279
EP - 293
JO - Neurocomputing
JF - Neurocomputing
ER -