A convenient multi-camera self-calibration method based on human body motion analysis

Xiuwei Zhang, Yanning Zhang, Xingong Zhang, Tao Yang, Xiaomin Tong, Haichao Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Scopus citations

Abstract

A novel and convenient multi-camera self-calibration method is proposed in this paper. Unlike other calibration methods, ours works by analyzing human body motion: the only requirement is that several people of different heights walk around the experimental environment one by one during the calibration period. In this way, two kinds of corresponding points are extracted from synchronized video sequences. The first is the centroid of the moving human body. The second is a set of points on the floor, extracted by matching floor planes across the video sequences; this floor-plane registration is based on shadow detection and co-motion features. From these corresponding points, the camera parameters and the observed 3D points are estimated. The proposed method is tested in our own experimental environment, and the results demonstrate its accuracy. The method can serve many multi-view computer vision applications.
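The abstract only outlines the pipeline, and the paper's actual estimation procedure is not reproduced here. As a rough illustration of the final step (recovering camera geometry and 3D points from such correspondences), the sketch below handles a simplified two-camera case with OpenCV: an essential matrix is fit with RANSAC to matched image points (e.g., body centroids tracked in two synchronized views), the relative pose is recovered, and the 3D points are triangulated. The intrinsic matrix K and the synthetic correspondences are assumptions for illustration; the paper performs self-calibration, which this sketch does not attempt.

```python
# Minimal sketch: two-view geometry from tracked point correspondences.
# Assumes known (approximate) intrinsics K, unlike the paper's
# self-calibration setting; correspondences are synthetic stand-ins
# for the tracked body centroids and matched floor points.
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))  # synthetic scene points

K = np.array([[800.0,   0.0, 320.0],   # assumed intrinsic matrix
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points, R, t):
    """Project 3D points into a camera with pose (R, t) and intrinsics K."""
    cam = R @ points.T + t.reshape(3, 1)
    uv = K @ cam
    return (uv[:2] / uv[2]).T

R_true, _ = cv2.Rodrigues(np.array([0.0, 0.3, 0.0]))  # ground-truth relative pose
t_true = np.array([1.0, 0.0, 0.0])

pts1 = project(pts3d, np.eye(3), np.zeros(3)).astype(np.float32)
pts2 = project(pts3d, R_true, t_true).astype(np.float32)

# Fit the essential matrix to the correspondences with RANSAC, then
# recover the relative rotation and (unit-scale) translation.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the observed 3D points from the two calibrated views.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d_est = (pts4d[:3] / pts4d[3]).T  # recovered up to an overall scale

print("recovered R:\n", R)
print("recovered t (unit scale):", t.ravel())
```

Note that this sketch omits the cues the abstract emphasizes: the known heights of the walking people and the floor-plane correspondences, which the paper presumably uses to fix metric scale and to extend the estimation to more than two cameras.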

Original language: English
Title of host publication: Proceedings of the 5th International Conference on Image and Graphics, ICIG 2009
Publisher: IEEE Computer Society
Pages: 3-8
Number of pages: 6
ISBN (Print): 9780769538839
DOIs
State: Published - 2009
Event: 5th International Conference on Image and Graphics, ICIG 2009 - Xi'an, Shanxi, China
Duration: 20 Sep 2009 – 23 Sep 2009

Publication series

Name: Proceedings of the 5th International Conference on Image and Graphics, ICIG 2009

Conference

Conference: 5th International Conference on Image and Graphics, ICIG 2009
Country/Territory: China
City: Xi'an, Shanxi
Period: 20/09/09 – 23/09/09
