Adobe Research helped power AI for every creator at MAX 2025

November 6, 2025

Tags: Adobe MAX Sneaks, AI & Machine Learning

At this year’s Adobe MAX, more than 10,000 creative professionals gathered for a glimpse at the latest Adobe innovations, including AI tools designed for every type of creator and built to give users more precise control than ever before. The work of Adobe Researchers was at the heart of many of the biggest reveals at the event, from new tools that automatically generate studio-quality soundtracks and voiceovers, to AI agents that support creators through ideation, creation, and production, to exciting sneak peeks at technology for rotating and editing 2D objects as if they were in 3D and instantly changing the light source or materials in any image.

Here’s a look at some of the exciting Adobe Research-powered announcements from Adobe MAX 2025.

New AI tools automatically generate studio-quality audio

Generate Soundtrack and Generate Speech, two new tools developed by Adobe Research in collaboration with product teams, were released in the Firefly app during Adobe MAX. They’re both now available in public beta.

Generate Soundtrack creates studio-quality music for video storytellers. The tool first analyzes a user’s short video, then composes audio clips that automatically synchronize with the video. Users can choose a musical style from pre-set options or use text prompts to describe the music they’d like to generate. Generate Soundtrack was trained on licensed data so it’s commercially safe, royalty-free, and cleared for any use. And the audio is exported with content credentials for transparency and attribution.

Generate Speech turns text into natural-sounding speech for videos and podcasts. Users choose from more than 50 voices and 20 languages. Once the speech is generated, users can adjust the pace, emphasis, and emotion, and even correct pronunciation as needed.

More innovations powered by Adobe Research

From the opening keynote through the entire event, the work of Adobe Research powered some of the biggest announcements at Adobe MAX this year. Here are a few highlights:

  • Creative professionals can now chat with the new AI Assistant in Photoshop (available in private beta in Photoshop Web) for help with creative tasks, personalized recommendations, and guidance through complex workflows. Users can seamlessly switch between conversations with the AI agent and using manual tools for precise, hands-on control.
  • Adobe Express users can now work with the Express AI Assistant, which is available in public beta. The Assistant helps users explore, create, and edit content using prompts. They can make quick changes to a design without impacting the parts they want to keep. And it’s easy to toggle the AI Assistant on or off for flexibility and control over how to create.
  • Project Moonlight coordinates AI Assistants across Adobe apps, bringing image, video, and photo editing together in harmony. Users tell Project Moonlight what they need, and it unites all of the AI Assistants into a creative team to bring the vision to life. Project Moonlight supports users during ideation and creation and is currently in private beta. Adobe users can sign up for the waitlist here.
  • Turntable, now in public beta in Adobe Illustrator, lets users rotate 2D artwork to see it from different angles. Users can quickly generate front, side, and back views of a character, product, or object in Illustrator without redrawing everything from scratch. This was a MAX Sneak last year and was quickly integrated into a product for creators to use in their workflows.
  • Generate Sound Effects in Premiere on Mobile lets users generate custom sound effects to match a video. Users can simply type a description of the sound they need or choose from suggested prompts. Users then add the sounds, such as chirping birds, explosions, or futuristic effects, directly into their timelines for precise storytelling.
  • Object Mask in Premiere Pro, now in public beta, automatically identifies objects and people in video footage using AI, isolates them with a single click, and tracks them throughout the shot. It was demoed in the MAX keynote here.
  • Other announcements and demos included: Semantic Audio Search in Premiere Pro, Media Intelligence (SearchCut and AdobeOne) in Premiere Pro, and RGBX and Image to 3D shown running in Project Graph.

Sneak peeks give a preview of cutting-edge technology still in development

Sneaks are always a highlight of Adobe MAX. They give audiences a glimpse of Adobe’s in-progress creative experiments—including the big ideas and new tools that will help shape the future of Adobe products. This year, Emmy- and Critics’ Choice-nominated comedian, writer, and actress Jessica Williams hosted as attendees got a peek at the future. From new ways to edit light and sound to tools for turning still photos into immersive 3D worlds, these were the Adobe Sneaks of 2025.

Project Surface Swap

Instantly change the look of any surface or material—from a sofa’s fabric to a wooden floor—right from a photo. Project Surface Swap uses AI-powered texture recognition to select and swap materials seamlessly, keeping lighting and perspective intact. It’s perfect for interior designers, photographers, and anyone who wants to visualize new possibilities before making real-world changes.   
Presenter: Valentin Deschaintre

Project Light Touch   

Lighting defines mood, but once a photo is shot, it’s traditionally fixed. Project Light Touch flips that idea. This generative AI tool lets you reshape light sources after capture—turning day to night, adding drama, or adjusting focus and emotion without reshoots. It’s like having total control over the sun and studio lights, all in post.
Presenter: Zhixin Shu

Project Turn Style  

Move beyond flat images with Project Turn Style, a breakthrough that lets you edit 2D objects as if they were 3D. Rotate, reangle, or reposition elements within an image while maintaining their natural texture, lighting, and detail. The result: limitless creative control and dynamic storytelling.   
Presenter: Zhiqin Chen  

Project Trace Erase   

Say goodbye to clunky object removal. Project Trace Erase doesn’t just erase—it understands. Powered by diffusion transformer models, it removes objects and their shadows, reflections, and environmental distortions, delivering perfectly natural, context-aware edits with almost no manual cleanup.   
Presenter: Lingzhi Zhang

Project New Depths  

Step into the world of 3D photography. Project New Depths brings intuitive editing tools to “radiance fields” (3D photos), allowing artists to tweak color, shape, and composition in three-dimensional space. It makes editing depth as easy as adjusting brightness—ushering in the next generation of visual storytelling.   
Presenter: Élie Michel

Project Scene It  

Project Scene It blends precision and artistry by letting creators control both the structure and style of 3D scenes. Built on Image-to-3D and 3D-to-Image technologies, it allows tagging of individual objects with reference images, preserving each object’s unique look while freely moving it in 3D space. It’s a new frontier for designing vivid, lifelike worlds.   
Presenter: Oindrila Saha

Project Frame Forward  

Forget frame-by-frame video editing. Project Frame Forward applies changes across entire videos based on one annotated frame and a simple text prompt. It brings the precision of photo editing to video, dramatically speeding up production without sacrificing quality.   
Presenter: Jui-hsien Wang

Project Motion Map  

Bring your illustrations to life. Project Motion Map uses AI to analyze static vector graphics and automatically animate them in ways that feel intentional and expressive—no keyframes or manual rigging required. The result is effortless motion design that still reflects the creator’s intent.   
Presenter: Mohit Goel

Project Sound Stager   

Sound is emotion, and Project Sound Stager helps creators design it like never before. By analyzing a video’s visuals, pacing, and emotional tone, it automatically generates layered soundscapes using expert sound design logic. You can even collaborate conversationally with an AI “sound designer” to tweak the final mix.   
Presenter: Oriol Nieto

Project Clean Take  

Editing dialogue just got smoother. Project Clean Take uses AI to correct mispronunciations, isolate voices, remove noise, and refine delivery—all in seconds. It’s a powerful assistant for podcasters, filmmakers, and anyone seeking studio-quality sound without the studio.   
Presenter: Lee Brimelow

You can discover even more from Adobe MAX here. And if you’re wondering what else is happening inside Adobe Research, check out our latest news here! 
