Adobe Research’s innovative imaging, video, audio, 3D, and design technologies showcased at MAX 2023

October 17, 2023

Tags: Adobe MAX Sneaks

At Adobe MAX 2023, Adobe Research’s cutting-edge digital tools took the spotlight in front of a large audience of Adobe’s creative customers attending in person in Los Angeles and joining online. Adobe Research contributed new technologies to empower creators of all kinds. These technologies were incorporated into impactful MAX product releases and into forward-looking MAX Sneaks, quick peeks at experimental tools that could shape the future.

Contributions to AI-driven MAX releases

Many of Adobe Research’s contributions to this year’s MAX releases focused on advances in generative AI capabilities for Adobe’s creative customers and spanned a wide range of Adobe products.

Adobe Research helped specialize the Firefly foundation model to produce crisp vector artwork by fine-tuning the model on vector data, paving the way for strong, controllable vector imagery. With the Illustrator team, Research developed a new vectorization engine that produces concise, editable vector graphics with gradients and fewer total paths and control points, enabling artists to easily incorporate the generated assets into their Illustrator workflows. Adobe Illustrator’s new text-to-vector technology was showcased during the MAX Keynote, as were the groundbreaking Generative Recolor and ReType features. The opening keynote also featured the innovative Remove Video Background feature in Adobe Express, which includes important contributions from Research.

Academic partnerships and Premiere Pro releases

Adobe MAX also showcased how Adobe Research partners closely with product teams and academia to bring innovation to customers. For example, the Opening Keynote at MAX featured demos in Premiere Pro of text-based editing, filler word detection, and Enhance Speech, all features first developed by researchers. Research into how text-based editing can support video creation began more than 10 years ago with a long-standing collaboration with Stanford Professor Maneesh Agrawala. Together, Adobe researchers, Prof. Agrawala, and his students have developed text-based editing systems for audio stories, interview footage, narrated videos, and dialogue-driven scenes. Project Blink was released in beta last year to further develop and understand how search and text-based editing work together for video. Filler word detection first came to customers through Project Blink and is now available to everyone in Premiere Pro. Enhance Speech was conceptualized at Adobe and came to fruition through a multi-year collaboration between Adobe researchers and Princeton Professor Adam Finkelstein. It first made its way to customers as a beta feature in Adobe Podcast and is now in Premiere Pro.

Content Credentials

MAX 2023 also saw the launch of the new Content Credentials icon and branding, as well as new features demoed in the Content Credentials booth. One feature developed with Adobe Research is image watermarking, which helps Content Credentials persist even on social media, where they are usually stripped away. This is done with a new browser extension developed by the Content Authenticity Initiative that integrates Adobe Research’s novel image watermarking AI.

MAX Sneaks

MAX Sneaks offer a look into what’s ahead, and at this year’s MAX Sneaks session – co-hosted by actor and comedian Adam DeVine – Adobe research scientists and engineers demonstrated for the first time cutting-edge, experimental technologies that could someday become features in Adobe products. This year, many of the MAX Sneaks leveraged generative AI, providing creators with innovative new tools spanning multiple mediums – including photo, video, audio, 3D and design – that can take creativity to a new level.

Here are the Sneaks that Adobe Research contributed to for MAX 2023.

Project Fast Fill

This tool harnesses Generative Fill, powered by Adobe Firefly, for video – enabling editors to remove objects or change background elements in videos with the same ease, quality, and fluidity as editing still images.

Project Fast Fill brings generative AI technology into video editing applications, making it easy for users to perform texture replacement in videos with simple text prompts, even on complex surfaces under changing conditions such as varying light. An editor can apply an edit to an object in a single frame – for example, changing the art on a latte – and that edit automatically propagates to the rest of the video’s frames, saving video editors a significant amount of texture-editing time.
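
Adobe has not published how Fast Fill propagates edits, but the general idea of editing one frame and carrying the change through a clip can be illustrated with a crude stand-in: dense optical flow with OpenCV. The sketch below is purely illustrative – `propagate_edit` is a hypothetical helper, and this is nowhere near the quality of the Firefly-based approach shown at MAX.

```python
# Illustrative only: propagate a single-frame edit through a clip with dense
# optical flow (OpenCV). This is NOT the Fast Fill method, just a toy sketch.
import cv2
import numpy as np

def propagate_edit(frames, edited_key, edit_mask):
    """frames: list of BGR frames; edited_key: frame 0 with the edit applied;
    edit_mask: uint8 mask (H, W) of the edited region in frame 0."""
    key_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    h, w = key_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    out = [edited_key]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Flow from the current frame back to the key frame, so we can pull
        # edited pixels into the current frame's coordinates.
        flow = cv2.calcOpticalFlowFarneback(gray, key_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        warped_edit = cv2.remap(edited_key, map_x, map_y, cv2.INTER_LINEAR)
        warped_mask = cv2.remap(edit_mask, map_x, map_y, cv2.INTER_NEAREST)
        out.append(np.where(warped_mask[..., None] > 0, warped_edit, frame))
    return out
```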

Presenter: Gabriel Huang is a research engineer at Adobe focusing on video. He has contributed to pioneering technologies that revolutionize video effects editing by making it easier and more accessible. 
Collaborators from Adobe Research: Gabriel Huang, Joon-Young Lee

Project Draw & Delight

Ever get stalled or need help jumpstarting the creative process when trying to bring an idea to reality?

With Project Draw & Delight, creators can use generative AI to guide them along the creation journey, helping transform initial ideas – often represented as rough doodles or scribbles – into polished and refined sketches.

This technology goes beyond text-to-image by letting users augment text-based instructions with visual hints, such as rough sketches and paint strokes. Draw & Delight then uses the power of Adobe Firefly to generate high-quality vector illustrations or animations in various color palettes, style variations, poses, and backgrounds.
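
Firefly’s vector generation is not publicly scriptable, but the underlying idea of conditioning generation on both a text prompt and a rough doodle can be approximated with an open sketch-conditioned diffusion model. A minimal sketch, assuming the Hugging Face diffusers library and a scribble ControlNet checkpoint; the file names are hypothetical, and the output is a raster image rather than the editable vectors the Sneak produces.

```python
# Illustrative only: text + rough scribble -> raster image with an open
# sketch-conditioned diffusion model. Not Firefly, and not vector output.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

scribble = Image.open("rough_doodle.png")  # the user's rough sketch (hypothetical file)
result = pipe("a friendly cartoon fox, flat illustration style",
              image=scribble, num_inference_steps=30).images[0]
result.save("refined_illustration.png")
```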

Presenter: Souymodip Chakraborty is a computer scientist at Adobe. His interests are in computer graphics and geometry processing.
Collaborator from Adobe Research: Zongze Wu

Project Neo

Incorporating 3D elements into 2D designs, such as infographics, posters, logos, or even websites, can be difficult to master and often requires designers to learn new workflows.

Project Neo enables designers to create 2D content using 3D shapes, without having to learn traditional 3D creation tools and methods. This technology leverages the best of 3D principles so designers can quickly and easily create 2D shapes with one-, two-, or three-point perspective. Designers using this technology can also collaborate with their stakeholders and make edits to mockups at the vector level, so they can quickly make changes to projects.
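
Adobe has not detailed Neo’s internals, but the one-, two-, and three-point perspectives it mentions all come down to the standard pinhole projection that any 3D-to-2D tool applies behind the scenes. A minimal sketch of that projection, as background math only; `project` is an illustrative helper, not Neo’s code.

```python
# Standard pinhole perspective projection (background math, not Neo's code):
# a 3D point is divided by its depth to land on the 2D canvas.
import numpy as np

def project(points_3d, focal_length=1.0):
    """points_3d: (N, 3) array of X, Y, Z in camera space (Z > 0).
    Returns (N, 2) canvas coordinates."""
    points_3d = np.asarray(points_3d, dtype=float)
    x = focal_length * points_3d[:, 0] / points_3d[:, 2]
    y = focal_length * points_3d[:, 1] / points_3d[:, 2]
    return np.stack([x, y], axis=1)

# A cube edge receding along Z converges toward a vanishing point -- the
# one-point perspective a designer would otherwise sketch by hand.
print(project([[1.0, 1.0, 2.0], [1.0, 1.0, 4.0], [1.0, 1.0, 8.0]]))
```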

Presenter: Inigo Quilez is a principal research engineer at Adobe focused on 2D & 3D animation. He is the creator of VR animation tools and has had his computer graphics work featured in several animated blockbusters and VR films.

Collaborators from Adobe Research: Siddhartha Chaudhuri, Jose Echevarria, Kevin Wampler

Project Scene Change

Composition is an essential part of cinematography; it allows filmmakers and video creators to develop a narrative for their content and is vital to keeping viewers engaged with the story as it plays out in a film or short video.  

Project Scene Change makes it easy to composite a subject and a scene from two separate videos – captured with different camera trajectories – into one scene with synchronized camera motion.

Artificial intelligence renders a 3D representation of the background scene from a prerecorded video, as if it were captured by a free-moving camera, then composites the separately filmed subject, with proper shadows, into a new scene with compatible motion. This removes the limitations imposed by the camera motion of existing video assets and allows video editors to place a subject into a new environment with realistic camera motion.
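
The novel-view rendering of the background is the hard part and cannot be reproduced in a few lines, but the final step the description mentions – placing a matted subject over the re-rendered background – is ordinary alpha compositing. A toy sketch, assuming the subject has already been matted and the background already rendered for the matching camera pose; `composite` is an illustrative helper, not the Scene Change pipeline.

```python
# Illustrative only: alpha-composite a matted subject over a re-rendered
# background frame. The 3D scene reconstruction itself is not shown here.
import numpy as np

def composite(subject_rgb, alpha, background_rgb):
    """subject_rgb, background_rgb: (H, W, 3) float arrays in [0, 1];
    alpha: (H, W) matte for the subject, 1 = fully opaque."""
    a = alpha[..., None]
    return a * subject_rgb + (1.0 - a) * background_rgb
```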

Presenter: Zhan Xu is a research scientist at Adobe Research. He focuses on understanding videos from a 3D perspective and introducing 3D controls into video editing.
Collaborators from Adobe Research: Zhan Xu, Jimei Yang, Kim Pimmel, Kai Zhang, Feng Liu, Zhenzhen Weng (intern), Hao Tan, Xin Sun, Zhoutong Zhang (from NextCam), Zexiang Xu, Sai Bi, Kalyan Sunkavalli, Seoung Wug Oh, Joon-Young Lee, Yizhou Zhao (intern), Chun-Hao Huang

Project Primrose

Today, many designers use Adobe Illustrator to try out new designs. Wouldn’t it be great if they could quickly bring those designs to life in real objects, with the click of a button?

Project Primrose, displayed at MAX as an interactive dress, makes this possible with wearable, flexible, non-emissive textiles that allow an entire surface to display content created with Adobe Firefly, Adobe After Effects, Adobe Stock, and Adobe Illustrator. Designers can layer this technology into clothing, furniture, and other surfaces to unlock infinite style possibilities – such as the ability to download and wear the latest design from a favorite designer.

Presenter: Christine Dierk is a research scientist at Adobe, specializing in human-computer interaction and hardware research initiatives. 
Core team from Adobe Research: Christine Dierk, TJ Rhodes, Gavin Miller
Additional contributors from Adobe Research: Daichi Ito, Oscar Dadfar, Tim Ganter, Giorgio Gori

Project Glyph Ease

When creating flyers or posters, designers often need to manually create each individual letter to maintain a consistent style. This can take a lot of time, depending on the design and shape of each character.

Project Glyph Ease uses generative AI to create stylized, customized letters in vector format, which can later be used and edited. All a designer needs to do is create three reference letters in a chosen style – from existing vector shapes or ones they hand-draw on paper – and the technology automatically creates the remaining letters in a consistent style. Once the letters are created, designers retain the flexibility to edit them, since they appear as live text that can be scaled, rotated, or moved in the project.

Presenter: Difan Liu is a research scientist at Adobe Research, where he focuses on the synthesis and editing of images, vector graphics, and video.
Collaborators from Adobe Research: Difan Liu, Matthew Fisher, Michael Gharbi

Project Poseable

Designing prototypes and storyboards can take hours, requiring creators to make painfully slow edits of each scene, down to the individual pose of a character.

Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.

Instead of having to spend time editing every tiny detail of a scene – the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene – users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.

Presenter: Yi Zhou is a research scientist at Adobe. Her research is focused on autonomous virtual avatars. She mainly works on representation learning for 3D models, human hair and body reconstruction, human motion synthesis, and 3D animation.
Collaborators from Adobe Research: Yi Zhou, Giorgio Gori, Tuanfeng Wang, Chun-Hao Huang, Duygu Ceylan, Yang Zhou, Jimei Yang, Daichi Ito, Nick Kolkin, Yilin Wang, Cherry Zhao

Project Res Up

You’ve probably encountered blurry, low-resolution videos before – maybe they weren’t upscaled to look good on your larger screen, or perhaps the video was originally made for SD but you’re now playing it on an HD display.

Project Res Up can help: it’s a video upscaling tool that uses diffusion-based technology and artificial intelligence to convert low-resolution videos to high-resolution videos. Users can upscale low-resolution videos directly to high resolution, or zoom in on and crop videos and then upscale them to full resolution, with high-fidelity visual details and temporal consistency. This is great for bringing new life to older videos or avoiding blurry playback when scaled-up versions are shown on HD screens.
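
Res Up itself has not been released, but the general flavor of diffusion-based upscaling can be tried with an openly available image upscaler applied frame by frame. A minimal sketch, assuming the Hugging Face diffusers library and hypothetical frame file names; note that naive per-frame upscaling lacks the temporal consistency Res Up demonstrates and can flicker.

```python
# Illustrative only: per-frame diffusion upscaling with an open model.
# This is not Res Up, and naive per-frame processing can flicker.
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16).to("cuda")

low_res_frames = [Image.open(f"frame_{i:04d}.png") for i in range(120)]
for i, frame in enumerate(low_res_frames):
    upscaled = pipe(prompt="a clear, detailed video frame",
                    image=frame).images[0]
    upscaled.save(f"upscaled_{i:04d}.png")
```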

Presenter: Yang Zhou is a research scientist at Adobe, where he works on deep learning-based video generation and digital humans.
Collaborators from Adobe Research: Yang Zhou, Difan Liu, Feng Liu, Haoran Cai, Jui-hsien Wang, Xue Bai, Cameron Smith, Seoung Wug Oh, Ruppesh Nalwaya, Aseem Agarwala, Baqiao Liu (intern)

Project Dub Dub Dub

As the digital economy grows, so does the need to deliver video content on a global scale. More content creators want to reach new audiences by making their videos and podcasts available to anyone, no matter their location or language.

Project Dub Dub Dub uses generative AI to auto-dub videos or audio clips in more than 70 languages and over 140 dialects. It uses speech-to-speech translation to automatically translate the dialogue while matching the speaker’s voice, tone, cadence, and the acoustics of the original video, whether the clip is brand new or from a user’s video archives. All users have to do is press a button to auto-dub content, transforming this historically labor- and cost-intensive process into one that can be completed in minutes.
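
Adobe has not published the Dub Dub Dub pipeline, but the description implies the usual speech-to-speech chain: transcribe and translate the source audio, then re-synthesize it in the original speaker’s voice. A minimal sketch using OpenAI’s open-source Whisper model for the transcription and translation step; the voice-matched synthesis is represented by a hypothetical `synthesize_in_speaker_voice` placeholder, since cloning the original speaker is the part Adobe’s generative model provides.

```python
# Illustrative only: a bare-bones speech-to-speech dubbing chain.
# Whisper handles transcription + translation to English; the voice-matched
# synthesis step below is a HYPOTHETICAL placeholder, not a real API.
import whisper

model = whisper.load_model("medium")
# task="translate" transcribes the source audio and translates it to English.
result = model.transcribe("original_clip.wav", task="translate")
translated_text = result["text"]

def synthesize_in_speaker_voice(text, reference_audio_path, out_path):
    # Placeholder: a real implementation would clone the voice in
    # reference_audio_path and render `text` to out_path.
    print(f"[placeholder] would synthesize {len(text)} chars into {out_path}")

synthesize_in_speaker_voice(translated_text, "original_clip.wav", "dubbed_clip.wav")
```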

Presenter: Zeyu Jin is a senior research scientist at Adobe Research. His research is rooted in deep generative models for studio-quality speech enhancement, speech quality assessment and personalized voice generation.
Collaborators from Adobe Research: Zeyu Jin, Rithesh Kumar, Yunyun Wang (intern), Jiaqi Su

Project Stardust

Have you ever taken a photo or created content with Adobe Firefly and wanted to quickly modify specific objects in the image?

Project Stardust relies on image understanding and generative AI to revolutionize image editing. The technology automates time-consuming parts of the image editing process – filling in backgrounds, cutting out objects, blending lighting and color, and more. In addition, its generative AI features let you add objects and make creative transformations. Stardust makes image editing more intuitive, accessible, and time-efficient for any user, regardless of skill level.

Presenter: Aya Philémon is a product manager at Adobe who aims to empower others to make the most of their creative potential. Her research projects are inspired by both her professional and personal life experiences.
Core team members from Adobe Research: Jon Brandt, Scott Cohen, Celso Gomes, Eric Stollnitz, Darshan Prasad, Matt Joss, Zhihong Ding, Kevin Smith, Ohi Dibua, Tim Ganter, Mariette Souppe, Jash Guna (Research Alum)
Technology contributors from Adobe Research: Jason Kuen, Qing Liu, Zhe Lin, Luis Figueroa, Daniil Pakhomov, Brian Price, Soo Ye Kim, Zongze (Alex) Wu, Jianming Zhang

Project See Through

When taking pictures, glass reflections can be a nuisance. Reflections can obscure or distract from image subjects, and often make photos completely unusable.

Today, it’s difficult or impossible to remove reflections manually. Project See Through simplifies the process of cleaning up reflections using artificial intelligence: reflections are automatically removed and can optionally be saved as separate images for editing. This gives users more control over when and how reflections appear in their photos.

Presenter: Eric Kee is a research scientist at Adobe. His research interests lie at the intersection of computer vision, computational photography, machine learning, and visual perception.
Collaborator from Adobe Research: Kevin Blackburn-Matzen

Want to work with our innovative team? We are hiring for full-time roles and internships!
