
Back in 2018, Adobe MAX attendees got a first peek at a breathtaking new technology-in-development: Project Fast Mask. With Fast Mask, video editors can select an object with just a few clicks, and then the tool applies that selection (known as a mask) through an entire video in real time, enabling users to edit the background, place text behind the object, or create a bokeh effect in video as easily as they can in a still image. Before Project Fast Mask, video masking was a tedious, frame-by-frame process.
To imagine using Project Fast Mask, think of a video of a couple dancing. As the editor, you can simply select the couple and preserve every twirl and step—even as the couple moves across the screen and behind objects—as you tweak colors, backgrounds, and layers.
Since its debut, Fast Mask technology, now in its third generation, has been built into Adobe After Effects and Adobe Express.
Fast Mask’s path from an Adobe Research idea to a feature inside Adobe products
According to Project Fast Mask’s lead researcher, Research Scientist Joon-Young Lee, the idea for Fast Mask goes back to 2017. “At that time, I was thinking about tools for still images. When people do fine-grained editing in Photoshop, for example, selecting an object is one of the first things they do. And I realized we didn’t have a good tool for this in the video space. For video, it was a difficult, time-consuming process of streaming, cutting, and timeline editing. So my big inspiration was to bring object selection to video.”
Lee and his team spent nearly two years working on the idea. Early on, they published papers about their research, and by 2018 the team had a concrete solution. The work was selected for an Adobe MAX Sneak, and the team quickly prepared a presentation to unveil Project Fast Mask to the world.
“The responses from the media and MAX attendees were really great. At the time, people were spending so much time doing frame-by-frame editing, and there were lots of people who’d never even tried to do it because it took so much time and the process rarely worked well. And they’d never seen a machine learning model address a problem like this in video editing, so they felt like something new was happening. It was exciting,” remembers Lee.
After MAX, Lee and his team began collaborating with the After Effects team to bring the technology into Adobe products. The product team already knew how people liked to use After Effects tools, so they didn’t need to make big changes to the user interface. But the challenge was figuring out how to replace some of the underlying technology with the Fast Mask model and make it work with both the product and users’ devices.
“There was no established pipeline to deliver this kind of machine learning model for video into the product. There were new challenges that hadn’t been considered before, so there wasn’t anyone who knew how to do it yet. We had to do a lot of experimentation to figure it out,” says Lee.
In 2020, two years after its reveal at MAX, Fast Mask technology became part of Roto Brush in After Effects. Now, with automatic segmentation, video editors have more power when they want to do localized color editing, or separate and add layers—without the time-consuming task of masking each frame.
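Under the hood, a mask of the kind described above boils down to a per-pixel alpha selection applied frame by frame, which is what makes layer separation and background replacement possible. The sketch below is purely illustrative—it is not Adobe's implementation, and `composite_frames` is a hypothetical helper—but it shows the basic compositing idea with NumPy: once you have a per-frame mask, swapping the background is a single blend.

```python
import numpy as np

def composite_frames(frames, masks, background):
    """Composite masked foreground objects over a new background.

    frames:     (T, H, W, 3) uint8 video frames
    masks:      (T, H, W) float alpha masks in [0, 1], where 1 = object
    background: (H, W, 3) uint8 replacement background
    """
    alpha = masks[..., None]  # add a channel axis so the mask broadcasts over RGB
    out = alpha * frames + (1.0 - alpha) * background
    return out.astype(np.uint8)

# Tiny synthetic example: 2 frames of 4x4 pixels.
T, H, W = 2, 4, 4
frames = np.full((T, H, W, 3), 200, dtype=np.uint8)   # bright "object" footage
masks = np.zeros((T, H, W), dtype=np.float32)
masks[0, 1:3, 1:3] = 1.0   # object occupies the center in frame 0...
masks[1, 1:3, 2:4] = 1.0   # ...and shifts right in frame 1, as if tracked
background = np.zeros((H, W, 3), dtype=np.uint8)      # black replacement backdrop

result = composite_frames(frames, masks, background)
```

The hard part—which Fast Mask automates—is producing those per-frame masks consistently as the object moves and gets occluded; the compositing step itself is simple.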
“As soon as we delivered the technology in Roto Brush 2, we started developing the next version. We knew we could make it better, and in the end, we were able to develop a new model that was even more stable,” says Lee.
In 2023, the new Fast Mask model shipped in Roto Brush 3 in After Effects (also known as Next-Gen Roto Brush)—and it made its debut as Remove Video Background in Adobe Express.
“Adding our technology to Express was an exciting development. With Roto Brush in After Effects, we wanted to reach professional and expert users. But Express is intended for non-professional users, who are looking to create content quickly in a few taps, so we didn’t want them to have to do too many steps. The tool is fully automatic. Users can simply click to remove the background, rather than making lots of decisions about what to select and not select, like they do in After Effects,” explains Lee.
This past fall, Fast Mask technology stunned again at MAX with a preview of the new Object Selection tool for Premiere Pro. The new tool, which isn’t publicly available yet but is coming soon, will give Premiere Pro users the ability to select an object and track it through a video, powered by an updated, more capable model.
Fast Mask, generative AI, and power to the creatives
According to Lee, the impact of Fast Mask is especially important in the context of generative AI. “I’m thinking about the ways that generative AI helps us build concepts, but then we’re often limited by text prompts. With tools like Fast Mask, we can pair generative AI with very precise user control. That’s what’s so important about this technology—we’re bringing user intention and controllability into the generative process,” says Lee. “I also think about those cases where you may not want to use a lot of generative AI. For example, when I’m editing my family videos, I want precise editing technology that I can guide on my own. Tools like Fast Mask give users that power, too.”
Wondering what else is happening inside Adobe Research? Check out our latest news here.