Computer vision has made impressive gains through the use of deep learning models trained with large-scale labeled data. However, labels require expertise and curation and are expensive to collect. Even worse, direct semantic supervision often leads learning algorithms to "cheat" and take shortcuts instead of actually doing the work. Can one discover useful visual representations without explicitly curated labels? In this talk, I will present several case studies exploring the paradigms of self-supervision, meta-supervision, and curiosity — all ways of using the data as its own supervision. Applications in image synthesis will be shown, including automatic colorization, paired and unpaired image-to-image translation (a.k.a. pix2pix and CycleGAN), curiosity-based exploration, and, terrifyingly, #edges2cats.