Abstract
Head motion naturally occurs in synchrony with speech and carries important cues about intention, attitude and emotion. This paper aims to synthesize head motions from natural speech for talking avatar applications. Specifically, we study the feasibility of learning speech-to-head-motion regression models with two types of popular neural networks, i.e., feed-forward and bidirectional long short-term memory (BLSTM). We find that the BLSTM networks clearly outperform the feed-forward ones in this task because of their capacity to learn long-range speech dynamics. More interestingly, we observe that stacking different network types, i.e., inserting a feed-forward layer between two BLSTM layers, achieves the best performance. Subjective evaluation shows that this hybrid network can produce more plausible head motions from speech.
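The hybrid architecture described above (a feed-forward layer between two BLSTM layers, followed by a frame-wise regression output) could be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation; the feature dimensions, hidden sizes and output parameterization are assumptions for the example.

```python
import torch
import torch.nn as nn

class HybridHeadMotionNet(nn.Module):
    """Sketch of a BLSTM -> feed-forward -> BLSTM -> linear stack for
    speech-to-head-motion regression. Layer sizes and the 6-dimensional
    head-motion output are illustrative assumptions, not paper values."""

    def __init__(self, in_dim=40, hidden=128, out_dim=6):
        super().__init__()
        # First bidirectional LSTM over the acoustic feature sequence.
        self.blstm1 = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        # Feed-forward layer inserted between the two BLSTM layers.
        self.ff = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh())
        # Second bidirectional LSTM layer.
        self.blstm2 = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Frame-wise regression to head-motion parameters.
        self.out = nn.Linear(2 * hidden, out_dim)

    def forward(self, speech_feats):          # (batch, frames, in_dim)
        h, _ = self.blstm1(speech_feats)      # (batch, frames, 2*hidden)
        h = self.ff(h)                        # (batch, frames, hidden)
        h, _ = self.blstm2(h)                 # (batch, frames, 2*hidden)
        return self.out(h)                    # (batch, frames, out_dim)


# Usage example: regress head motion from a 300-frame speech feature sequence.
model = HybridHeadMotionNet()
dummy_speech = torch.randn(1, 300, 40)
head_motion = model(dummy_speech)             # torch.Size([1, 300, 6])
```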
Original language | English |
---|---|
Pages (from-to) | 3345-3349 |
Number of pages | 5 |
Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
Volume | 2015-January |
DOIs | |
State | Published - 2015 |
Event | 16th Annual Conference of the International Speech Communication Association, INTERSPEECH 2015, Dresden, Germany, 6 Sep 2015 – 10 Sep 2015 |
Keywords
- BLSTM
- Head motion synthesis
- Neural networks
- Talking avatar