How Adobe Research is helping podcasters make quick, compelling, social media-ready teasers

November 6, 2025

Tags: AI & Machine Learning, Computer Vision, Imaging & Video, Human Computer Interaction

When Senior Research Scientist Ding Li began collaborating with his intern, Sitong Wang, in the summer of 2023, the two had a goal: to help people turn long-form videos into short clips. Their work paved the way for Clip Maker, a new AI-powered feature, available free in Adobe Express, that automatically creates compelling, social media-ready teasers for video podcasts and other long-form videos.

The early research behind Clip Maker

“Our research really began by looking at what makes a good teaser—and what makes a teaser different from a summary,” remembers Li.

“The goal of a teaser is to get you to go and watch the whole video, not just to tell you the key points. It gets you excited about learning more,” adds Mira Dontcheva, Senior Principal Scientist and one of the researchers who worked on Clip Maker.

To figure out what makes a teaser truly compelling, the team talked with video editors and creators. Those conversations uncovered several key elements: a clip needs to stand alone, pique curiosity, tap into humor, and hook the audience. With these insights, the team began working on an AI workflow and algorithms that could identify and help knit together the best moments for an enticing clip.

“We know that LLMs (large language models) are pretty good at finding key pieces, so first we built a lot of flexible tools that allowed people to easily put those selected elements together, and then we wanted to help users quickly go through the stages that usually take a lot of time—like finding music, changing aspect ratios, and adding branding,” says Li.
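The article doesn't spell out the exact pipeline, but the idea in that quote can be sketched roughly: hand an LLM a timestamped transcript, ask it for candidate moments, and snap the results to sentence boundaries so downstream tools can assemble them into a clip. The sketch below is illustrative only; the Sentence type, the prompt wording, and the call_llm() helper are assumptions for the example, not Adobe's implementation.

```python
# Illustrative sketch only, not Adobe's implementation. Assumes a transcript
# with sentence-level timestamps and a generic, hypothetical call_llm()
# wrapper that returns the model's text response.

import json
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sentence:
    start: float  # seconds
    end: float
    text: str

PROMPT_TEMPLATE = """You are helping pick moments for a short video teaser.
Given the numbered transcript below, return a JSON list of objects like
{{"first": <sentence index>, "last": <sentence index>, "reason": "..."}}
for 3-5 moments that stand alone, spark curiosity, or land a joke.

Transcript:
{transcript}
"""

def propose_teaser_moments(
    sentences: List[Sentence],
    call_llm: Callable[[str], str],  # hypothetical LLM wrapper
) -> List[dict]:
    """Ask an LLM for candidate teaser moments, snapped to sentence boundaries."""
    transcript = "\n".join(f"{i}: {s.text}" for i, s in enumerate(sentences))
    raw = call_llm(PROMPT_TEMPLATE.format(transcript=transcript))
    moments = []
    for item in json.loads(raw):
        first, last = item["first"], item["last"]
        moments.append({
            "start": sentences[first].start,  # clip begins at a sentence boundary
            "end": sentences[last].end,       # and ends at one, never mid-sentence
            "reason": item.get("reason", ""),
        })
    return moments
```

A stub such as call_llm = lambda prompt: '[{"first": 0, "last": 1, "reason": "hook"}]' is enough to exercise the function before wiring in a real model.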

This early, experimental version of the technology was developed inside Adobe Premiere Pro, where the team was able to leverage Premiere Pro’s professional editing tools and to focus on allowing human users to co-create with AI—while still maintaining a lot of control. “The most interesting challenge at this stage,” remembers Dontcheva, “was to figure out which controls to expose to our users so that the process would be useful and not too overwhelming.”

In the fall of 2023, the team published their research on building a human-AI co-creative tool for crafting video teasers.

How Clip Maker went from a research project to a feature inside Adobe Express

With a prototype up and running, Li and Dontcheva created a demo to share with product organizations. The Adobe Express team was intrigued right away. Express, which is designed for people with a wide range of technical design skills, had been hearing from users—especially video podcasters—who wanted an easier way to create multiple clips to promote long-form videos on social media. Making the process quick was an especially high priority since podcasters often want to promote time-sensitive content.

To meet user needs, researchers and the Express product team collaborated to develop a simple, user-friendly interface for Clip Maker, including pared-down controls that make the feature easy to use, even for novices.

As the tool took shape, the teams also started a new round of evaluations to ensure the feature would work well for Express users. They began by manually evaluating clips with a rubric that included criteria such as usefulness and whether the clip could stand alone. “But we discovered that there wasn’t always agreement between researchers; judging clips is such a subjective task. Evaluating clip quality remains an open research problem,” says Dontcheva.

To move beyond subjective evaluations, the team added objective criteria, such as making sure clips never end in the middle of a sentence or refer to something in the video that the viewer wouldn’t already know about. And they expanded their study, in partnership with the evaluation team, to gather more data about user preferences and hone the model for Express users’ needs.
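As a rough illustration of how criteria like these can be automated (an assumption for the example, not the team's actual evaluation code), two of the checks above reduce to simple functions over the transcript's sentence-level timestamps:

```python
# Illustrative heuristics only, not Adobe's evaluation pipeline: two objective
# checks over a candidate clip and the transcript's sentence timestamps.

from typing import Sequence

def ends_on_sentence_boundary(clip_end: float, sentence_ends: Sequence[float],
                              tolerance: float = 0.25) -> bool:
    """True if the clip ends within `tolerance` seconds of a sentence end,
    i.e. it never cuts off mid-sentence."""
    return any(abs(clip_end - end) <= tolerance for end in sentence_ends)

# Crude list of openings that point back at context the viewer hasn't seen.
DANGLING_OPENERS = ("which is why", "and that's", "so that", "as i said", "like i mentioned")

def opens_with_unresolved_reference(first_sentence: str) -> bool:
    """Heuristic flag for clips that start by referring to earlier material."""
    return first_sentence.strip().lower().startswith(DANGLING_OPENERS)

# Example: a clip ending at 42.6 s against sentence ends at 40.1 s and 42.5 s.
assert ends_on_sentence_boundary(42.6, [40.1, 42.5])
assert opens_with_unresolved_reference("And that's exactly what we found.")
```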

This spring, Clip Maker was ready for Adobe Express. The new tool, which is free and available now in Adobe Express, lets users upload or import a video and automatically generate clips that capture its most engaging moments. Users can easily add AI-generated subtitles, customize with branding and animation, add music, and then publish their clips to Instagram Reels, TikTok, and other social channels.

“We were thrilled to get Clip Maker out into customers’ hands and get feedback,” says Li. The responses so far have been overwhelmingly positive. And the team has even made tweaks based on feedback—when users asked for the ability to change a video’s aspect ratio from portrait to horizontal or square, the Research team already had the capability in their original research, so they were able to integrate it right away.

The future of Clip Maker technology

As Li and Dontcheva look ahead, they’re excited about how their technology will impact Express users, and where it might go next. “I hope that, with Clip Maker, users will want to stay in the Adobe ecosystem longer because the tool makes their process for creating clips and content scheduling smoother,” says Li.

“And we’d love to see this technology in all of the places where people work with video and make clips,” adds Dontcheva.

Further into the future, the team is imagining new ways that AI can help video creators do more with their work. “Our lab is focused on storytelling and narrative, so we’re exploring, more broadly, how to help people create videos more easily and flexibly. I think there are great new opportunities for these tools to help users tell their stories,” says Li.

Wondering what else is happening inside Adobe Research? Check out our latest news here.