The SOMN-HMM model and its application to automatic synthesis of 3d character animations

Yi Wang, Lei Xie, Zhi Qiang Liu, Li Zhu Zhou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Scopus citations

Abstract

Learning HMMs from motion-capture data for automatic 3D character animation synthesis has become an active research topic in computer graphics and machine learning. To ensure realistic synthesis, the model must be learned to fit the real distribution of human motion; this fit is usually measured by likelihood. In this paper, we present a new HMM learning algorithm that incorporates a stochastic optimization technique within the Expectation-Maximization (EM) learning framework. The algorithm is less prone to being trapped in local optima and converges faster than the traditional Baum-Welch algorithm. We apply the new algorithm to learning 3D motion under the control of a style variable, which encodes the mood or personality of the performer. Given a new style value, motions with the corresponding style can be generated from the learned model.
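The paper itself presents the SOMN-HMM learning algorithm, which is not reproduced here. As a rough illustration of the general idea described in the abstract, namely adding a stochastic element to likelihood-driven EM training of an HMM so that the learned model is less likely to sit in a poor local optimum, the following minimal Python sketch wraps standard EM fitting (via the hmmlearn library) in a random-restart search. The function name, feature dimensions, and all hyper-parameters are illustrative assumptions, and random restarts stand in for, but are not, the stochastic optimization technique of the paper.

# Minimal sketch (not the paper's SOMN-HMM): stochastic search over random
# initialisations wrapped around EM (Baum-Welch style) HMM training, keeping
# the restart whose fitted model has the highest data log-likelihood.
import numpy as np
from hmmlearn import hmm

def train_hmm_with_random_restarts(X, lengths, n_states=8, n_restarts=10):
    """Fit a Gaussian HMM to pose-feature sequences X, keeping the best restart.
    X: (n_frames, n_features); lengths: frames per captured clip."""
    best_model, best_loglik = None, -np.inf
    for seed in range(n_restarts):
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag",
                                n_iter=50,
                                random_state=seed)   # stochastic element: random initialisation
        model.fit(X, lengths)                        # EM refinement from that initialisation
        loglik = model.score(X, lengths)             # fitness measured by log-likelihood
        if loglik > best_loglik:
            best_model, best_loglik = model, loglik
    return best_model, best_loglik

if __name__ == "__main__":
    # Toy usage with synthetic "motion" features; real input would be
    # motion-capture pose vectors per frame, one row per frame.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 12))          # 600 frames, 12-D pose features (illustrative)
    lengths = [200, 200, 200]               # three captured clips
    model, loglik = train_hmm_with_random_restarts(X, lengths)
    print("best log-likelihood:", loglik)

In the paper, the style variable that encodes the performer's mood or personality additionally conditions the model; that conditioning is not reflected in this sketch.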

Original language: English
Title of host publication: 2006 IEEE International Conference on Systems, Man and Cybernetics
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4948-4952
Number of pages: 5
ISBN (Print): 1424401003, 9781424401000
DOIs
State: Published - 2006
Externally published: Yes
Event: 2006 IEEE International Conference on Systems, Man and Cybernetics - Taipei, Taiwan, Province of China
Duration: 8 Oct 2006 - 11 Oct 2006

Publication series

Name: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
Volume: 6
ISSN (Print): 1062-922X

Conference

Conference: 2006 IEEE International Conference on Systems, Man and Cybernetics
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 8/10/06 - 11/10/06
