Recent advances in deep generative models have made it possible to produce photo-realistic images for a variety of tasks. However, generated images often contain perceptual artifacts in certain regions that require manual retouching. In this paper, we conduct an extensive empirical study of Perceptual Artifacts Localization (PAL) across diverse image synthesis tasks. We introduce a new dataset of 10,168 generated images with per-pixel perceptual artifact labels spanning ten image synthesis tasks. We train a segmentation model on the proposed dataset to reliably localize artifacts across these tasks, and demonstrate that the pretrained model can efficiently adapt to unseen generative models with as few as ten labeled images. Moreover, we propose a simple yet effective zoom-in inpainting pipeline that automatically fixes perceptual artifacts in generated images. Our experiments illustrate several useful downstream applications, including automatic artifact fixing, reference-free image quality evaluation, and abnormal region detection in images.