Modeling urban scenes in the spatial-temporal space

Jiong Xu, Qing Wang, Jie Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper presents a technique to simultaneously model 3D urban scenes in the spatial-temporal space using a collection of photos that span many years. We propose a mid-level representation, the building, to characterize significant structural changes in the scene. We first use structure-from-motion techniques to build 3D point clouds, which are a mixture of scenes from different periods of time. We then segment the point clouds into independent buildings using a hierarchical method, consisting of coarse clustering on sparse points and fine classification on dense points, based on the spatial distance of point clouds and the difference of visibility vectors. In the fine classification, we simultaneously segment building candidates in the spatial-temporal space using a probabilistic model. We employ a z-buffering based method to infer the existence of each building in each image. After recovering the temporal order of the input images, we finally obtain 3D models of these buildings along the time axis. We present experiments using both toy building images captured in our lab and real urban scene images to demonstrate the feasibility of the proposed approach.
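To make the coarse clustering step of the abstract concrete, the following is a minimal sketch, not the authors' code: it groups sparse SfM points into building candidates by combining their 3D spatial distance with the difference of their visibility vectors. The equal weighting, the Jaccard distance for the visibility term, and the average-linkage clustering are illustrative assumptions rather than details taken from the paper.

```python
# A minimal sketch (not the authors' released code) of the coarse clustering step:
# sparse SfM points are grouped into building candidates by combining 3D spatial
# distance with the difference of per-point visibility vectors (which input images
# observe each point). The 50/50 weighting and the Jaccard distance are assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster


def coarse_building_clusters(points_xyz, visibility, n_clusters, alpha=0.5):
    """Cluster sparse SfM points into building candidates.

    points_xyz : (N, 3) array of 3D point positions from structure-from-motion.
    visibility : (N, M) binary array; visibility[i, j] = 1 if point i is seen in image j.
    n_clusters : number of building candidates to return.
    alpha      : weight of the spatial term versus the visibility term.
    """
    # Spatial term: pairwise Euclidean distances, normalized to [0, 1].
    d_xyz = pdist(points_xyz, metric="euclidean")
    d_xyz = d_xyz / (d_xyz.max() + 1e-12)

    # Visibility term: Jaccard distance between visibility vectors. Points on
    # buildings from different periods are typically seen in disjoint image
    # subsets, so this distance is large even when the points are close in 3D.
    d_vis = pdist(visibility.astype(bool), metric="jaccard")

    # Combined distance, then average-linkage agglomerative clustering.
    d = alpha * d_xyz + (1.0 - alpha) * d_vis
    tree = linkage(d, method="average")
    return fcluster(tree, t=n_clusters, criterion="maxclust")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy "buildings": separated in space and observed by disjoint image subsets.
    pts = np.vstack([rng.normal(0.0, 1.0, (50, 3)), rng.normal(10.0, 1.0, (50, 3))])
    vis = np.zeros((100, 20), dtype=int)
    vis[:50, :10] = 1   # first building appears only in images 0-9
    vis[50:, 10:] = 1   # second building appears only in images 10-19
    print(coarse_building_clusters(pts, vis, n_clusters=2))
```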

Original language: English
Title of host publication: Computer Vision, ACCV 2010 - 10th Asian Conference on Computer Vision, Revised Selected Papers
Pages: 374-387
Number of pages: 14
Edition: PART 2
DOI
Publication status: Published - 2011
Event: 10th Asian Conference on Computer Vision, ACCV 2010 - Queenstown, New Zealand
Duration: 8 Nov 2010 - 12 Nov 2010

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 6493 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 10th Asian Conference on Computer Vision, ACCV 2010
Country/Territory: New Zealand
City: Queenstown
Period: 8/11/10 - 12/11/10
