TY - JOUR
T1 - Revealing the Invisible with Model and Data Shrinking for Composite-Database Micro-Expression Recognition
AU - Xia, Zhaoqiang
AU - Peng, Wei
AU - Khor, Huai Qian
AU - Feng, Xiaoyi
AU - Zhao, Guoying
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2020
Y1 - 2020
N2 - Composite-database micro-expression recognition is attracting increasing attention as it is more practical for real-world applications. Although a composite database provides more sample diversity for learning good representation models, the important subtle dynamics are prone to disappearing under the domain shift, so that models greatly degrade in performance, especially deep models. In this article, we analyze the influence of learning complexity, including input complexity and model complexity, and discover that lower-resolution input data and a shallower-architecture model help ease the degradation of deep models in the composite-database task. Based on this, we propose a recurrent convolutional network (RCN) to exploit a shallower architecture and lower-resolution input data, shrinking model and input complexities simultaneously. Furthermore, we develop three parameter-free modules (i.e., wide expansion, shortcut connection and attention unit) that integrate with the RCN without adding any learnable parameters. These three modules can enhance the representation ability from various perspectives while preserving a not-very-deep architecture for lower-resolution data. In addition, the three modules can be combined by an automatic strategy (a neural architecture search strategy), and the searched architecture becomes more robust. Extensive experiments on the MEGC2019 dataset (composed of the existing SMIC, CASME II and SAMM datasets) have verified the influence of learning complexity and shown that RCNs with the three modules and the searched combination outperform state-of-the-art approaches.
AB - Composite-database micro-expression recognition is attracting increasing attention as it is more practical for real-world applications. Although a composite database provides more sample diversity for learning good representation models, the important subtle dynamics are prone to disappearing under the domain shift, so that models greatly degrade in performance, especially deep models. In this article, we analyze the influence of learning complexity, including input complexity and model complexity, and discover that lower-resolution input data and a shallower-architecture model help ease the degradation of deep models in the composite-database task. Based on this, we propose a recurrent convolutional network (RCN) to exploit a shallower architecture and lower-resolution input data, shrinking model and input complexities simultaneously. Furthermore, we develop three parameter-free modules (i.e., wide expansion, shortcut connection and attention unit) that integrate with the RCN without adding any learnable parameters. These three modules can enhance the representation ability from various perspectives while preserving a not-very-deep architecture for lower-resolution data. In addition, the three modules can be combined by an automatic strategy (a neural architecture search strategy), and the searched architecture becomes more robust. Extensive experiments on the MEGC2019 dataset (composed of the existing SMIC, CASME II and SAMM datasets) have verified the influence of learning complexity and shown that RCNs with the three modules and the searched combination outperform state-of-the-art approaches.
KW - Micro-expression recognition
KW - composite database
KW - model and data shrinking
KW - parameter-free module
KW - recurrent convolutional network
KW - searchable architecture
UR - http://www.scopus.com/inward/record.url?scp=85090862043&partnerID=8YFLogxK
U2 - 10.1109/TIP.2020.3018222
DO - 10.1109/TIP.2020.3018222
M3 - Article
AN - SCOPUS:85090862043
SN - 1057-7149
VL - 29
SP - 8590
EP - 8605
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
M1 - 9178431
ER -