Publications

Any-resolution Training for High-resolution Image Synthesis

European Conference on Computer Vision (ECCV'22)

Publication date: October 27, 2022

Lucy Chai, Michaël Gharbi, Eli Shechtman, Philip Isola, Richard Zhang

Adobe Research

Generative models operate at fixed resolution, even though natural images come in a variety of sizes. As high-resolution details are downsampled away, and low-resolution images are discarded altogether, precious supervision is lost. We argue that every pixel matters and create datasets with variable-size images, collected at their native resolutions. Taking advantage of this data is challenging; high-resolution processing is costly, and current architectures can only process fixed-resolution data. We introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions. First, conditioning the generator on a target scale allows us to generate higher-resolution images than previously possible, without adding layers to the model. Second, by conditioning on continuous coordinates, we can sample patches that still obey a consistent global layout, allowing for scalable training. Controlled FFHQ experiments show our method takes better advantage of the multi-resolution training data than discrete multi-scale approaches, achieving better FID scores and cleaner high-frequency details. We also train on other natural image domains including churches, mountains, and birds, and demonstrate arbitrary-scale synthesis with both coherent global layouts and realistic local details, going beyond 2K resolution in our experiments.
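The core idea of continuous-scale training described above can be sketched in a few lines: each training patch is drawn at a random scale, and its pixels are labeled with continuous coordinates in a shared global frame so that all patches obey one layout. The helper below is a hypothetical illustration (the function name, sampling ranges, and grid convention are our assumptions, not the paper's exact implementation):

```python
import numpy as np

def sample_patch_conditioning(full_size, patch_size, rng):
    """Hypothetical sketch of continuous-scale patch sampling.

    Returns a random scale factor plus a continuous coordinate grid
    in the global frame [0, 1)^2, one (x, y) pair per patch pixel.
    """
    # Random scale: how much of the full image the fixed-size patch spans.
    scale = rng.uniform(1.0, full_size / patch_size)
    span = patch_size * scale / full_size  # fraction of the image covered

    # Random top-left corner, chosen so the patch stays inside the image.
    x0 = rng.uniform(0.0, 1.0 - span)
    y0 = rng.uniform(0.0, 1.0 - span)

    # Continuous coordinates at the center of each output pixel.
    xs = x0 + span * (np.arange(patch_size) + 0.5) / patch_size
    ys = y0 + span * (np.arange(patch_size) + 0.5) / patch_size
    grid = np.stack(np.meshgrid(xs, ys, indexing="xy"), axis=-1)
    return scale, grid  # grid shape: (patch_size, patch_size, 2)

rng = np.random.default_rng(0)
scale, grid = sample_patch_conditioning(full_size=1024, patch_size=256, rng=rng)
```

In this sketch, the generator would receive both `scale` and `grid` as conditioning inputs; because every patch's coordinates live in the same global frame, patches sampled at different scales remain mutually consistent.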
