TY - JOUR
T1 - Deep Spatiality
T2 - Unsupervised Learning of Spatially-Enhanced Global and Local 3D Features by Deep Neural Network with Coupled Softmax
AU - Han, Zhizhong
AU - Liu, Zhenbao
AU - Vong, Chi Man
AU - Liu, Yu Shen
AU - Bu, Shuhui
AU - Han, Junwei
AU - Chen, C. L. Philip
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2018/6
Y1 - 2018/6
N2 - The discriminability of bag-of-words representations can be increased by encoding the spatial relationships among virtual words on 3D shapes. However, this encoding task involves several issues, including arbitrary mesh resolutions, irregular vertex topology, orientation ambiguity on the 3D surface, and invariance to rigid and non-rigid shape transformations. To address these issues, a novel unsupervised spatial learning framework based on a deep neural network, called deep spatiality (DS), is proposed. Specifically, DS employs two novel components: a spatial context extractor and a deep context learner. The spatial context extractor captures the spatial relationships among virtual words in a local region as a raw spatial representation. Along a consistent circular direction, a directed circular graph is constructed to encode the relative positions between pairwise virtual words in each face ring into a relative spatial matrix. By decomposing each relative spatial matrix using singular value decomposition, the raw spatial representation is formed, from which the deep context learner performs unsupervised learning of global and local features. The deep context learner is a deep neural network with a novel model structure adapted to the proposed coupled softmax layer, which encodes the discriminative information not only among local regions but also among global shapes. Experimental results show that DS outperforms state-of-the-art methods.
AB - The discriminability of bag-of-words representations can be increased by encoding the spatial relationships among virtual words on 3D shapes. However, this encoding task involves several issues, including arbitrary mesh resolutions, irregular vertex topology, orientation ambiguity on the 3D surface, and invariance to rigid and non-rigid shape transformations. To address these issues, a novel unsupervised spatial learning framework based on a deep neural network, called deep spatiality (DS), is proposed. Specifically, DS employs two novel components: a spatial context extractor and a deep context learner. The spatial context extractor captures the spatial relationships among virtual words in a local region as a raw spatial representation. Along a consistent circular direction, a directed circular graph is constructed to encode the relative positions between pairwise virtual words in each face ring into a relative spatial matrix. By decomposing each relative spatial matrix using singular value decomposition, the raw spatial representation is formed, from which the deep context learner performs unsupervised learning of global and local features. The deep context learner is a deep neural network with a novel model structure adapted to the proposed coupled softmax layer, which encodes the discriminative information not only among local regions but also among global shapes. Experimental results show that DS outperforms state-of-the-art methods.
KW - coupled softmax
KW - Deep spatial
KW - directed circular graph
KW - spatially-enhanced 3D features
UR - http://www.scopus.com/inward/record.url?scp=85044035141&partnerID=8YFLogxK
U2 - 10.1109/TIP.2018.2816821
DO - 10.1109/TIP.2018.2816821
M3 - Article
C2 - 29993805
AN - SCOPUS:85044035141
SN - 1057-7149
VL - 27
SP - 3049
EP - 3063
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
IS - 6
ER -