TY - JOUR
T1 - Sparse representation with geometric configuration constraint for line segment matching
AU - Wang, Qing
AU - Chen, Tingwang
AU - Si, Lipeng
PY - 2014/6/25
Y1 - 2014/6/25
N2 - In this paper, we propose a novel line segment matching method over multiple views based on sparse representation with a geometric configuration constraint. The key idea is that we transform the problem of line correspondence into sparsity-based line recognition. First, line segments are detected by the LSD (line segment detector) and clustered according to spatial proximity to form complete lines. For each point on a line, a SIFT descriptor is extracted to represent the point's attributes, and a PHOG descriptor is also computed to describe the appearance of the patch centered at the point. The SIFT and PHOG descriptors are concatenated into a single feature vector, and all of these point features are aggregated by a max pooling function to form a distinctive line signature. Then, all line features extracted from the training images are trained into a dictionary using sparse coding. Similar lines tend to fall close together in the high-dimensional feature space. Finally, line segments in a test view are matched to their counterparts in other views by seeking the maximal pulses in the coefficient vector. Under our framework, line segments are trained once and matched across all other views. Experimental results have validated the effectiveness of the approach for planar structured scenes under various transformations and degradations, such as viewpoint change, illumination, blur and compression corruption.
AB - In this paper, we propose a novel line segment matching method over multiple views based on sparse representation with a geometric configuration constraint. The key idea is that we transform the problem of line correspondence into sparsity-based line recognition. First, line segments are detected by the LSD (line segment detector) and clustered according to spatial proximity to form complete lines. For each point on a line, a SIFT descriptor is extracted to represent the point's attributes, and a PHOG descriptor is also computed to describe the appearance of the patch centered at the point. The SIFT and PHOG descriptors are concatenated into a single feature vector, and all of these point features are aggregated by a max pooling function to form a distinctive line signature. Then, all line features extracted from the training images are trained into a dictionary using sparse coding. Similar lines tend to fall close together in the high-dimensional feature space. Finally, line segments in a test view are matched to their counterparts in other views by seeking the maximal pulses in the coefficient vector. Under our framework, line segments are trained once and matched across all other views. Experimental results have validated the effectiveness of the approach for planar structured scenes under various transformations and degradations, such as viewpoint change, illumination, blur and compression corruption.
KW - Feature representation
KW - Geometric configuration constraint
KW - Line grouping
KW - Line segment matching
KW - Sparse coding
UR - http://www.scopus.com/inward/record.url?scp=84896516861&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2012.12.079
DO - 10.1016/j.neucom.2012.12.079
M3 - Article
AN - SCOPUS:84896516861
SN - 0925-2312
VL - 134
SP - 100
EP - 110
JO - Neurocomputing
JF - Neurocomputing
ER -