Publications

A Scalable Active Framework for Region Annotation in 3D Shape Collections

ACM Transactions on Graphics (Proceedings SIGGRAPH Asia)

Publication date: December 1, 2016

Li Yi, Vladimir Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, Leonidas Guibas

Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label, our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating the regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize for the set of models to annotate and for the set of models to verify based on the predicted impact of these actions on human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both accuracy and efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives. We validate our framework on existing benchmarks, demonstrating it to be significantly more efficient at using human input compared to previous techniques. We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones.
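Below is a minimal, self-contained Python sketch of the annotate/propagate/verify/learn cycle and the time-cost-aware selection described in the abstract. All names, time constants, and the scoring and propagation rules are illustrative assumptions: the propagation step is only a stand-in for the paper's CRF over the dynamic shape network, and the utility function is a random placeholder rather than the learned impact prediction.

```python
import random

T_ANNOTATE = 30.0   # assumed human cost (seconds) of annotating one shape
T_VERIFY = 5.0      # assumed human cost (seconds) of verifying one labeling

def propagate(annotated, all_shapes):
    """Stand-in for CRF-based propagation: spread the majority label."""
    if not annotated:
        return {s: None for s in all_shapes}
    labels = list(annotated.values())
    majority = max(set(labels), key=labels.count)
    return {s: annotated.get(s, majority) for s in all_shapes}

def utility_per_second(shape, cost):
    """Predicted benefit of acting on `shape`, per second of human time.
    A random placeholder; the paper learns to predict this impact."""
    return random.random() / cost

def annotate_collection(all_shapes, budget_seconds):
    """Cycle: annotate -> propagate -> verify -> (learn), within a time budget."""
    annotated, verified, spent = {}, set(), 0.0
    predictions = propagate(annotated, all_shapes)
    while spent + T_ANNOTATE + T_VERIFY <= budget_seconds:
        pending = [s for s in all_shapes if s not in verified]
        if not pending:
            break
        # Jointly choose what to annotate and what to verify by predicted
        # impact per unit of human time (a single utility in the paper).
        to_annotate = max(pending, key=lambda s: utility_per_second(s, T_ANNOTATE))
        to_verify = max(pending, key=lambda s: utility_per_second(s, T_VERIFY))

        annotated[to_annotate] = f"{to_annotate}-region"   # simulated manual annotation
        spent += T_ANNOTATE

        predictions = propagate(annotated, all_shapes)     # automatic propagation

        if random.random() > 0.2:                          # simulated verification
            verified.add(to_verify)
        spent += T_VERIFY
        # A real system would now update the propagation model from the
        # verification outcome (the learning step of the cycle).
    return predictions, verified

if __name__ == "__main__":
    shapes = [f"shape_{i:03d}" for i in range(50)]
    preds, ok = annotate_collection(shapes, budget_seconds=600)
    print(f"{len(ok)} labelings verified, {len(preds)} shapes labeled")
```

The point of the sketch is the shared time budget: annotation and verification compete for the same human seconds, so candidates are ranked by predicted impact divided by their respective costs rather than selected by separate heuristics.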
