Learning to Generate Textures on 3D Meshes

CVPR 2019 Workshop on Deep Generative Models for 3D Understanding

Publication date: June 15, 2019

Amit Raj, Cusuh Ham, Connelly Barnes, James Hays, Vladimir Kim, Jingwan (Cynthia) Lu

Best Paper Award

Recent years have seen a great deal of work on photorealistic neural image synthesis from 2D image datasets. However, only a few works exploit 3D shape information to aid image synthesis. To this end, we leverage data from 2D image datasets as well as 3D model corpora to generate textured 3D models. We propose a framework for generating textures for meshes from multi-view images. Our framework first uses 2.5D information rendered from the 3D models, along with user inputs, to generate an intermediate view-dependent representation. These intermediate representations are then used to generate realistic textures for particular views in an unpaired manner. Finally, we use a differentiable renderer to combine the generated multi-view textures into a single textured mesh. We demonstrate results of realistic texture synthesis on cars.
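To make the three-stage pipeline concrete, below is a minimal sketch in PyTorch. It is not the authors' implementation: the generator architecture, the layout of the 2.5D buffers (modeled here as one depth channel plus three surface-normal channels), and the fusion step (reduced here to a per-view average rather than a true differentiable renderer) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ViewTextureGenerator(nn.Module):
    """Toy per-view generator: maps a 2.5D buffer (depth + surface normals)
    to an RGB texture for that view. Stands in for the unpaired
    image-to-image stage; the actual architecture is not specified here."""
    def __init__(self, in_channels: int = 4, out_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, buffers_25d: torch.Tensor) -> torch.Tensor:
        return self.net(buffers_25d)

# Stand-in 2.5D renders for four views of a mesh: 1 depth channel plus
# 3 normal channels per view. In the real pipeline these would come from
# rasterizing the 3D model for each camera.
views_25d = [torch.randn(1, 4, 128, 128) for _ in range(4)]

gen = ViewTextureGenerator()
per_view_textures = [gen(v) for v in views_25d]  # one RGB image per view

# Fusion stage, reduced here to a simple per-view average as a placeholder
# for combining the multi-view textures into a single mesh texture.
fused = torch.stack(per_view_textures).mean(dim=0)
print(fused.shape)  # torch.Size([1, 3, 128, 128])
```

In the actual framework, the final stage would instead resolve each view's generated texture onto the mesh surface through a differentiable renderer, so that gradients can flow from rendered views back through the texture to the generators.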