Leveraging Deep Learning to Fix Images

February 8, 2018

By Meredith Alexander Kunz, Adobe Research  

Content-aware fill is like magic for many photographers. It can remove photo imperfections quickly and beautifully. This technology, which originated in Adobe Research and was first released in 2010, remains a go-to tool for saving time and effort while editing images in Photoshop.

But Adobe Research scientists understand that today’s content-aware fill can’t solve everything. Objects that aren’t surrounded by an even background often cannot be fixed this way. That’s why several researchers collaborated on a new project, Deep Fill, to make changing a photo even easier. Their effort employs a form of deep learning known as Generative Adversarial Networks (GANs) to train an innovative system, and it was featured in a Sneak at Adobe MAX 2017.

Ongoing work on Deep Fill has two major thrusts. First, it aims to let users remove unwanted objects from an image; in the prototype, a user simply masks the spot to fix and lets Deep Fill do the work.
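The prototype's interface details aren't public, but conceptually the user's selection can be thought of as a binary mask handed to the model along with the photo: 1 where pixels should be regenerated, 0 where the original is kept. A minimal sketch, assuming a NumPy image and a purely hypothetical deep_fill model call:

```python
import numpy as np

def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out the selected region so the model must synthesize it."""
    return image * (1 - mask[..., None])  # broadcast the mask over RGB channels

# image: H x W x 3 floats in [0, 1]; mask: H x W binary array drawn by the user
# masked = apply_mask(image, mask)
# result = deep_fill(masked, mask)  # hypothetical call; fills only where mask == 1
```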

Second, it gives creatives a new tool to rework an image by sketching shapes. In the Sneak demo, Jiahui Yu, a former Adobe Research intern from the University of Illinois at Urbana-Champaign, drew a heart-shaped outline next to a stone arch. Instantly, Deep Fill rendered the sketched shape in stone to match the arch, producing a whole new user-designed image.

This is possible through deep learning, says Zhe Lin, principal scientist, who helped develop the new intelligent approach to photo editing.

“Content-aware fill does not use deep learning. It tries to pick patches in the surrounding area to copy in, but it doesn’t understand what objects are actually in the image,” he explains. “Our technology, a form of neural in-painting, is deep learning based. The network has been trained to recognize the image’s semantics, what things appear in it.”
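To make that contrast concrete, here is a deliberately naive toy version of patch-based fill, not Adobe's actual content-aware fill algorithm: it searches the rest of the image for a patch whose visible pixels best match the hole's surroundings and copies it in. It reuses nearby texture, but it has no notion of what objects those pixels belong to.

```python
import numpy as np

def naive_patch_fill(img: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Toy patch-copy fill: find the best-matching fully-known patch and
    paste its pixels into the hole. No semantics, just texture reuse."""
    h, w, _ = img.shape
    ys, xs = np.where(mask > 0)
    top, left, bot, right = ys.min(), xs.min(), ys.max() + 1, xs.max() + 1
    ph, pw = bot - top, right - left                   # hole bounding box
    hole = mask[top:bot, left:right, None]             # 1 inside the hole
    context = img[top:bot, left:right] * (1 - hole)    # known pixels around it

    best, best_err = None, np.inf
    for y in range(h - ph + 1):
        for x in range(w - pw + 1):
            if mask[y:y + ph, x:x + pw].any():
                continue                               # candidate must be fully known
            cand = img[y:y + ph, x:x + pw]
            err = np.sum((cand * (1 - hole) - context) ** 2)
            if err < best_err:
                best, best_err = cand, err

    assert best is not None, "no fully-known candidate patch found"
    out = img.copy()
    out[top:bot, left:right] = np.where(hole > 0, best, out[top:bot, left:right])
    return out
```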

Because the network has learned from 8 million photos, it has the knowledge to fill in the most appropriate imagery, actually creating new content for an image rather than relying only on what's nearby. The tool is trained with GANs, a powerful technique that learns from data to synthesize photo-realistic material; because a second network critiques the output of the first, the system can effectively double-check its own work.
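The post doesn't describe Deep Fill's exact training setup, but the adversarial idea can be sketched: a generator network fills in the masked region, while a discriminator network judges whether the completed photo looks real, which is the sense in which the system double-checks its own work. The tiny networks, losses, and weights below are illustrative assumptions in PyTorch, not the Deep Fill architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Takes a masked image plus its mask and predicts the full image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_img, mask):
        x = torch.cat([masked_img, mask], dim=1)  # condition on the mask
        return self.net(x)

class Discriminator(nn.Module):
    """Scores how plausible a completed image looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, img):
        return self.net(img)

def train_step(gen, disc, real, mask, g_opt, d_opt):
    """One adversarial update: the discriminator learns to spot synthetic
    fills, and the generator learns to produce fills it cannot spot."""
    bce = nn.BCEWithLogitsLoss()
    masked = real * (1 - mask)
    fake = gen(masked, mask)
    # keep original pixels outside the hole, generated pixels inside it
    completed = real * (1 - mask) + fake * mask

    # discriminator step: real photos vs. completed ones
    d_opt.zero_grad()
    d_real = disc(real)
    d_fake = disc(completed.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()

    # generator step: fool the discriminator while staying close to the photo
    g_opt.zero_grad()
    d_fake = disc(completed)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + \
             10.0 * nn.functional.l1_loss(fake * mask, real * mask)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

At inference time only the generator would be used; the discriminator exists purely to push the generator toward realistic fills during training.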

“It can generate a new structure you can’t see in an image, like a new mouth or eye,” Zhe says.

Zhe and his collaborators are refining the technology so it can scale to very high-resolution photos. They have already made significant improvements to the project's neural-network architecture, making training at least 10 times faster than state-of-the-art systems.

Deep Fill is able to fill in missing pieces of an image thanks to intelligence gained through deep learning. In some cases, it can provide a better result than content-aware fill.

Contributors:

Zhe Lin, Xiaohui Shen, Jimei Yang (Adobe Research)

Xin Lu (Adobe Pro DI team)

Jiahui Yu (University of Illinois at Urbana-Champaign)

 
