基于语义感知的变长序列数据预处理框架

Translated title of the contribution: A framework of variable-length sequence data preprocessing based on semantic perception

Xiaodong Wang, Jiwei Wang, Zhihao Zhong, Huan Yang, Hongjing Yao, Yangming Guo

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning frameworks generally apply padding or truncation to variable-length sequences so that efficient, compute-intensive batch training can be used. However, padding leads to heavy memory consumption, and truncation inevitably discards part of the original semantic information. To address this dilemma, a variable-length sequence preprocessing framework based on semantic perception is proposed, which leverages a typical unsupervised learning method to map representations of different lengths to a fixed dimensionality while minimizing information loss. Under the theoretical umbrella of minimizing information loss, information entropy is adopted to measure semantic richness, weights are assigned to the variable-length representations, and the representations are fused according to their semantic richness. Extensive experiments show that the information loss of the proposed strategy is lower than that of truncated embeddings, and that the method is clearly superior in retaining information, achieving promising performance on several text classification datasets.
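The abstract names the core mechanism: measure each token's semantic richness with information entropy, weight the variable-length representations accordingly, and fuse them into a fixed-size vector instead of padding or truncating. The paper's implementation is not reproduced here, so the following is only a minimal illustrative sketch of entropy-weighted fusion under that reading; all function names, array shapes, and the use of per-token probability distributions are assumptions, not the authors' code.

```python
# Minimal sketch of entropy-weighted fusion for variable-length sequences,
# assuming token embeddings and per-token probability distributions are
# already available. Hypothetical illustration, not the authors' code.
import numpy as np

def semantic_entropy(token_probs: np.ndarray) -> float:
    """Shannon entropy of one token's probability distribution (e.g. a
    softmax over a vocabulary); higher entropy is read here as richer
    semantic content."""
    p = token_probs[token_probs > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_variable_length(embeddings: np.ndarray, probs: np.ndarray) -> np.ndarray:
    """Collapse a (seq_len, dim) embedding matrix into one (dim,) vector,
    weighting each token by its normalized entropy rather than truncating
    the sequence or padding it to a fixed length."""
    weights = np.array([semantic_entropy(p) for p in probs])
    weights = weights / (weights.sum() + 1e-12)  # normalize to a distribution
    return weights @ embeddings  # entropy-weighted average, fixed dimension

# Usage: sequences of different lengths map to vectors of the same size.
rng = np.random.default_rng(0)
for seq_len in (5, 17):
    emb = rng.normal(size=(seq_len, 128))          # token embeddings
    prb = rng.dirichlet(np.ones(50), size=seq_len) # per-token distributions
    print(seq_len, fuse_variable_length(emb, prb).shape)  # both (128,)
```

Because the fused vector's size depends only on the embedding width, sequences of any length can be batched without padding's memory overhead or truncation's information loss, which is the dilemma the abstract targets.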

Translated title of the contribution: A framework of variable-length sequence data preprocessing based on semantic perception
Original language: Chinese (Traditional)
Pages (from-to): 388-397
Number of pages: 10
Journal: Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University
Volume: 43
Issue number: 2
DOIs
State: Published - Apr 2025
