TextureGAN: Controlling Deep Image Synthesis with Texture Patches

IEEE Conference on Computer Vision and Pattern Recognition (CVPR Spotlight), 2018

Publication date: June 17, 2018

Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan (Cynthia) Lu, Chen Fang, Fisher Yu, James Hays

In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes, but we are the first to examine texture control. We allow a user to place a texture patch on a sketch at an arbitrary location and scale to control the desired output texture. Our generative network learns to synthesize objects consistent with these texture suggestions. To achieve this, we develop a local texture loss, in addition to adversarial and content losses, to train the generative network. The new local texture loss improves generated texture quality without requiring the patch location and size to be known in advance. We conduct experiments using sketches generated from real images and textures sampled from the Describable Textures Dataset, and the results show that our proposed algorithm generates plausible images that are faithful to user controls. Ablation studies show that our proposed pipeline generates more realistic images than direct adaptations of existing methods.
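To give a sense of how a location-agnostic texture loss might work, the sketch below compares Gram-matrix statistics of randomly cropped patches from generated and reference feature maps, so the comparison does not depend on a fixed patch position. This is a hypothetical NumPy illustration of the general idea, not the paper's actual implementation (the function names, patch sampling scheme, and use of raw feature arrays are assumptions for illustration).

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map.

    features: array of shape (C, H, W), channels first.
    Returns a (C, C) matrix of channel correlations,
    normalized by the number of spatial positions.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def local_texture_loss(gen_feats, tex_feats, patch_size=16, n_patches=4, rng=None):
    """Hypothetical local texture loss (illustrative only).

    Samples random patches from generated and reference texture
    feature maps and penalizes differences between their Gram
    statistics, so no patch location or size is fixed in advance.
    """
    rng = rng or np.random.default_rng(0)
    c, h, w = gen_feats.shape
    loss = 0.0
    for _ in range(n_patches):
        # Random top-left corner for each sampled patch.
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        g_gen = gram_matrix(gen_feats[:, y:y + patch_size, x:x + patch_size])
        g_tex = gram_matrix(tex_feats[:, y:y + patch_size, x:x + patch_size])
        loss += np.mean((g_gen - g_tex) ** 2)
    return loss / n_patches
```

Comparing Gram statistics rather than raw pixels makes the loss sensitive to texture (channel co-activations) rather than exact spatial alignment, which is why identical feature maps yield zero loss while differently textured ones do not.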
