Every year, Adobe MAX brings people together for a celebration of creativity and a peek at cutting-edge innovations designed to put power and possibilities into the hands of creators. This year, the work of Adobe Research was behind groundbreaking innovations in generative AI for video, along with many other new technologies. And, from the main stage, Researchers demoed never-before-seen tools for automatically turning custom shapes into animations, generating video sound effects with simple prompts, removing unwanted flashes and flickers from photos and videos, tracking the origins of images and videos, and more.
Adobe Research’s innovations featured in keynotes: highlights
Adobe Researchers’ work was behind big advancements in the Adobe Firefly family that were announced at MAX, including the new Generate Video (Beta) module in the Adobe Firefly web application—technology that was featured in the MAX 2024 keynote. It’s a powerful, creator-friendly tool for generating video using text and image prompts. Designed with professionals in mind, the Video Model is ideal for the ideation phase of a video shoot, for filling gaps in a timeline, or for planning creative intent before capturing expensive, complicated shots. Users have more control than ever, with the ability to iterate from previous prompts and select important camera details, such as shot size, angle, and motion. The Video Model was designed to be commercially safe and, in keeping with Adobe’s AI Ethics principles, it was trained exclusively on licensed and public domain content.
Teams across Adobe Research collaborated with the Firefly team to ship the Firefly Video Model. The effort began in Research, which built the first complete pipeline for training video generation models at Adobe. Research also created core technologies for many of the final shipping components, including the model architecture and training strategy, an autoencoder, super-resolution, camera and shot-type control, and dataset work. The Firefly Video Model powers Generative Extend (beta) in the Premiere Pro beta, as well as Text to Video & Image to Video (beta) in the Firefly web app. Users who want to access the new beta version of the Adobe Firefly Video Model can join the waitlist now.
Also included in the MAX mainstage keynote were numerous other technologies created by Adobe Research, including:
- Distraction removal in Photoshop
- Improvements to Generative Fill and Generative Expand in Photoshop
- The Object Selection in Premiere Pro demo in the opening keynote
- Innovations in Substance 3D Viewer
- Generative Extend for video and audio in Premiere Pro
- Animate All in Express
- Image Trace in Illustrator
- Several technologies for Project Neo, including scene-to-image, send-to-Illustrator tools, and Fonts
- Durable Content Credentials, which underpin new products from Adobe’s Content Authenticity Initiative (CAI), including the Adobe Content Authenticity web app and Chrome Browser Extension

(Some of these features are currently in beta.)
Sharing Sneaks with the world
One of the most anticipated elements of every Adobe MAX is the Sneaks. It’s when the innovators behind some of the most exciting in-development technologies unveil their work for the very first time.
In the weeks before MAX, Adobe Researchers were hard at work crafting demos, polishing scripts, and imagining what it would be like to share their latest work with a live audience of thousands.
To build excitement in the days leading up to the big event, Adobe Researchers released three mini-Sneaks:
- Project Generative Physics
With Project Generative Physics, realistic physics are a snap: a simple text prompt can generate subjects that interact with their environment. Adobe Research made key contributions to this project.
Contributors: Tim Langlois, Zhen Chen, Jeremie Dumas, Raymond Fei, Vineet Batra, Ankit Phogat, Sumit Dhingra, Aditya Veer Singh, Ashish Jindal, Homi Raghuvanshi, and Danny Kaufman
- Web-based Painting
What if you could paint, blend, and flow like a watercolor master on a digital canvas? Introducing Web-based Painting, an experimental technology developed by the Adobe Research team in Paris. This technology captures the essence of watercolor painting, where water and color seamlessly blend on the canvas – all within a web browser.
Contributors: Zoé Herson, Axel Paris, Élie Michel, Lois Paulin, Jose Echevarria, Daichi Ito, and Tamy Boubekeur
- Project TypeLab
Take your text effects to the next level with Project TypeLab, a close collaboration between Adobe Research and the Type team. This experimental technology allows you to generate, edit, and reposition text seamlessly within your design using generative AI.
Contributors: Zhaowen Wang, Zhifei Zhang, Brian Price, and Nipun Jindal
Then, on the MAX main stage, Adobe Researchers were joined by this year’s host, actress, comedian, and rapper Awkwafina, to unveil their work to the crowd.
Project Clean Machine
With Project Clean Machine, unwanted flickering from things like photo flashes or fireworks, as well as objects briefly blocking the camera, can easily be removed from videos. It can even detect and remove camera flashes automatically, making it super handy for cleaning up footage.
Presenter: Gabriel Huang
Collaborator: Simon Niklaus
Project In Motion
Project In Motion lets you turn a custom shape animation into a video by simply describing what you want—like “pistachio ice cream.” A second element of the technology allows you to paint the animation with rich colors and textures by describing the effects you’d like, such as “watercolor.” Plus, you can add a style reference image to mix and match styles with your prompt for a unique touch.
Presenter: Li-Yi Wei
Collaborators: Rubaiat Habib, Wil Li, Seth Walker, Jakub Fiser, Jun Saito, Duygu Ceylan, Daife Qin (intern), Tuanfeng Wang, James Ratliff, Val Head, Timothy Langlois, Tidjane Tall, Tomasz Opasinski, and Brooke Hopper
Project Super Sonic
With Project Super Sonic, you can generate sound effects for your video simply by using a prompt, or you can click on objects in the video to create sounds without writing a prompt. You can even control the timing of these sounds with your voice, making it super intuitive. Add and layer sounds directly in the timeline, mix background and foreground effects, and choose from multiple generated variations for each prompt to get just the right sound. It’s a cool way to enhance your videos with custom audio that fits perfectly.
Presenter: Justin Salamon
Collaborators: Prem Seetharaman, Oriol Nieto, Hugo Flores Garcia (intern), Lee Brimelow, Yaniv De Ridder, Adolfo Hernandez Santisteban, Gabi Duncombe, and Mary Tran (intern)
Project Scenic
Project Scenic makes 2D image creation easier by letting you build a 3D scene layout with a copilot prompt. The tool helps you control the camera and tweak individual objects, guiding the image generation process, and it reduces the trial-and-error of adjusting layouts and camera views by letting you edit 3D scenes more precisely. Plus, regional prompting lets you make specific changes to objects in your images.
Presenter: Yu Shen
Collaborators: Stefano Petrangeli, Matheus Gadelha, Haoliang Wang, Gang Wu, Cuong Nguyen, and Saayan Mitra
Project Perfect Blend
Project Perfect Blend is a generative harmonization tool that makes compositing easier. When you add people or objects into another image, it adjusts color, lighting, shadows, and reflections to blend the foreground and background layers naturally. The project focuses on natural blending, foreground relighting, and realistic shadow casting to make the process smoother and more lifelike.
Presenter: Mengwei Ren
Collaborators: He Zhang, Zhixin Shu, Jae Shin Yoon, Qing Liu, Zijun Wei, Zhe Lin, HyunJoon Jung, Jianming Zhang, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Niviru Wijayaratne, and TJ Rhodes
Project Know How
Project Know How allows you to track the origins of images and videos, even if they’ve been printed and captured from physical objects. Adobe’s implementation of Content Credentials is durable because of a combination of secure metadata, invisible watermarking, and fingerprinting technology. This project is an example of how Adobe aims to build trust by transparently showing the content’s origin, whether digital or physical.
Presenter: Shruti Agarwal
Collaborators: Simon Jenni and John Collomosse
Project Turntable
With Project Turntable, you can easily rotate 2D vector art in 3D and it’ll still look like 2D art from any new angle. Just click a button and drag a slider to spin your graphics around, much like manipulating a 3D object. The best part? Even after the rotation, the vector graphics stay true to the original shape, so you don’t lose any of the design’s essence.
Presenter: Zhiqin Chen
Collaborators: Matthew Fisher, Siddhartha Chaudhuri, Kartikey Mishra, Aditya Veer Singh, Sumit Dhingra, Vineet Batra, and Daichi Ito
Project Hi-Fi
Project Hi-Fi can be a game-changer for image creation. You can capture any part of your screen and use it as a guide to quickly create high-quality images with AI. These images can then be easily brought into Adobe Photoshop for further editing. By leveraging advanced models, Project Hi-Fi boosts both productivity and creativity, turning your screen content into detailed visuals with real-time AI technology. It’s a seamless way to transform your design concepts into polished images effortlessly.
Presenter: Veronica Peitong Chen
Collaborators: Zongze Wu, Simranjyot Singh Gill, and CJ Gammon
If you missed the big moments from Adobe MAX 2024, you can catch them here. And if you’re wondering what else is happening inside Adobe Research, check out our latest news here.