Stitch Together a Perfect Scene with Experimental Tool

January 9, 2018

By Meredith Alexander Kunz, Adobe Research

We’ve all taken photos with some big, unsightly element we’d like to remove. But if we take it out, what should we put in its place? That question is especially tough if the object occupies a lot of space, like a group of buildings in a landscape. It could also be a creative moment, if you could imagine and explore other possible images to fill the gap.

Around ten years ago, visual computing scholars began studying how to fill a large hole in a photo with portions of other photos. The results were promising, but still too limited for practical use.

This year, Adobe Research’s Brian Price, senior research scientist, thought the time had come to “revisit this problem with new deep learning techniques—to try to do it better,” he explains. Deep learning, a potent form of AI that uses neural networks to find patterns and identify similarities, could help locate an ideal replacement visual for an image’s empty space, Price thought. He and colleagues set about working on a new technology to make that not only possible, but approachable for users. He demonstrated the tool, Scene Stitch, in an Adobe MAX Sneak in October 2017.

Scene Stitch gives creatives the chance to re-imagine an image and to easily explore a range of possibilities for changing it. The user begins with an image and selects which element to remove—for example, a cluster of high-rises in the foreground of a landscape shot. Then, the experimental tool employs deep learning to retrieve parts of other images in a linked database to fill in that spot—for instance, with an image of a lake or a park.

The system reduces each image to a tiny encoded description, and the network can find similar images based on the encoding, Price says. The user can then review the network-generated potential matches to see how they might improve the image.
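The exact encoding Scene Stitch uses isn't publicly documented, but the retrieval step Price describes can be sketched as a nearest-neighbor search over fixed-length embeddings. Below is a minimal illustration, assuming each database image has already been reduced to a vector (the 128-dimensional size, the cosine-similarity measure, and the `retrieve_similar` helper are all illustrative assumptions, not details of Adobe's system):

```python
import numpy as np

def retrieve_similar(query_embedding, database_embeddings, top_k=5):
    """Return indices of the top_k database images most similar to the query.

    Uses cosine similarity between fixed-length embeddings, a common
    choice for this style of retrieval.
    """
    q = query_embedding / np.linalg.norm(query_embedding)
    db = database_embeddings / np.linalg.norm(
        database_embeddings, axis=1, keepdims=True)
    similarities = db @ q          # one score per database image
    return np.argsort(similarities)[::-1][:top_k]

# Example with random embeddings standing in for a real encoder's output.
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))
query = database[42] + 0.01 * rng.normal(size=128)  # near-duplicate of image 42
matches = retrieve_similar(query, database)
```

Because the query here is a lightly perturbed copy of database entry 42, that entry comes back as the top match; in the real tool, the user would browse these ranked candidates rather than a single answer.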

Scene Stitch is complex because not only must it find an appropriate match, it must also discover which pixels best fit the hole, cut the right shape out of the retrieved image, and blend it into the existing photo to create a realistic new visual.

Price’s Sneak showed the experimental tool at work. Future research will focus on the cutting and blending steps, as well as on ranking the image matches—also drawing upon deep learning. “You could retrieve 100 different results and show all those to a user, but that’s a lot of information. We want to have a neural network to rank all that,” Price says.

Contributors:

Brian Price, senior research scientist; Scott Cohen, principal scientist; Zhe Zhu, intern and student at Tsinghua University; Mingyang Ling, research software developer (Adobe Research)

