From Animation to Audio to Augmented Reality: MAX Sneaks

November 20, 2019

Tags: AI & Machine Learning, AR, VR & 360 Photography, Audio, Computer Vision, Imaging & Video, Graphics (2D & 3D)

This year’s Adobe MAX conference in Los Angeles brought together more than 15,000 creatives. A major highlight of this annual event is a session called Sneaks, featuring Adobe Research technologists presenting experimental innovations on the main stage. Some of these tools are later incorporated into products used by millions of Adobe customers.

The 2019 Sneaks were co-hosted by Emmy Award-winning writer and comedian John Mulaney and Adobe’s Paul Trani, senior Creative Cloud evangelist. Scientists and engineers showcased their work live for the MAX audience and shared it with the many more who watched the videos afterward. Eight Sneaks were presented by Adobe Research, and two more included contributions from the Adobe Research team.

Check out the videos of this year’s MAX Sneaks to learn more about these early-stage technologies from our researchers and engineers. The Sneaks listed as “powered by Adobe Sensei” tap into AI and machine learning.

Project SoundSeek — Powered by Adobe Sensei

Content creators are constantly looking for specific sounds that occur multiple times in audio recordings. Wouldn’t it be great if you could simply select a couple of target sound examples, and have a machine find the rest? Boom.

Presenter: Justin Salamon
Internal Collaborators: Nick Bryan
External Collaborators: Yu Wang (New York University)
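Adobe hasn’t shared how SoundSeek works under the hood, but the blurb describes query-by-example search: pick a few target sounds and rank the rest of the recording by similarity. As a toy illustration only (the spectral-band embedding and cosine similarity here are assumptions, not the actual system), a sketch might look like:

```python
import numpy as np

def embed(clip, n_bins=32):
    """Toy embedding: magnitude spectrum folded into n_bins bands, L2-normalized."""
    spectrum = np.abs(np.fft.rfft(clip))
    bands = np.array_split(spectrum, n_bins)
    feat = np.array([b.mean() for b in bands])
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat

def find_similar(query_clips, candidate_clips, top_k=3):
    """Average the query embeddings, then rank candidates by cosine similarity."""
    query = np.mean([embed(c) for c in query_clips], axis=0)
    scores = [float(np.dot(query, embed(c))) for c in candidate_clips]
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(i), scores[i]) for i in order]
```

A real system would use learned embeddings rather than raw spectra, but the shape of the workflow — a couple of examples in, ranked matches out — is the same.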

Project Sweet Talk — Powered by Adobe Sensei

From still images to motion, Project Sweet Talk creates dynamic videos from static images. The possibilities are endless: animate drawings from centuries ago, your own sketches, 2D cartoon characters, Japanese manga, stylized caricatures, and casual photos.

Presenter: Dingzeyu Li
Internal Collaborators: Jose Echevarria, Eli Shechtman
External Collaborators: Yang Zhou (University of Massachusetts, Amherst)

Project Pronto

Creating an augmented reality (AR) application today requires heavy technical expertise. Project Pronto combines the benefits of both video prototyping and AR authoring into a cohesive system that allows non-technical designers to rapidly express AR design ideas.

Presenter: Cuong Nguyen
Internal Collaborators: Paul Asente, Rubaiat Habib
External Collaborators: Germán Leiva (Aarhus University)

Project Go Figure — Powered by Adobe Sensei

The ability to track a person is critical for editors as they produce videos, but it’s not easy. Project Go Figure makes it possible to track using skeletons and contours, enabling smooth and robust tracking even in a crowded scene. This could simplify many creative workflows, including character animation and visual effects.

Presenter: Jimei Yang
Internal Collaborators: Duygu Ceylan, John Nelson, Victor Wang, Daichi Ito

Project Light Right — Powered by Adobe Sensei

Relighting outdoor videos and images has long remained beyond the reach of the casual photographer, especially when it comes to large-scale outdoor scenes, where controlling the lighting is close to impossible. Project Light Right quickly solves the challenge using 3D scene geometry and machine learning.

Presenter: Michaël Gharbi
External Collaborators: Julien Philip, George Drettakis (INRIA Sophia Antipolis)
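The blurb says Light Right relights scenes using 3D geometry and machine learning; the published details of the method aren’t in this post. As a hypothetical, minimal illustration of why geometry helps, here is classic Lambertian shading — once you know per-pixel surface normals, moving the light is a dot product (the `ambient` term and the single-channel albedo are simplifications):

```python
import numpy as np

def relight(albedo, normals, light_dir, ambient=0.1):
    """Relight a scene from per-pixel albedo and surface normals.

    albedo:    (H, W) reflectance
    normals:   (H, W, 3) unit surface normals
    light_dir: (3,) direction toward the light
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)  # n . l, clamped at zero
    return albedo * (ambient + (1.0 - ambient) * shading)
```

The hard part, and presumably where the machine learning comes in, is recovering geometry and reflectance from ordinary photos in the first place.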

Project Awesome Audio — Powered by Adobe Sensei

Project Awesome Audio enhances amateur audio recordings and turns them into professional-sounding recordings with the click of a button. The technology performs audio enhancements such as denoising, dereverbing, equalization, and environment matching all at once so that you can take your audio to the next level.

Presenter: Zeyu Jin
External Collaborators: Jiaqi Su (Princeton University)

Project Glowstick

Project Glowstick allows people to enrich their artwork by “painting with light” in Adobe Illustrator. The integration of simple 2D path tracers might just open up a whole new area of 2D illustrations.

Presenter: Jakub Fiser
Internal Collaborators: Marcos Slomp, Jun Saito
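Glowstick’s 2D path tracer is far more sophisticated than anything that fits in a blog post, but the core intuition of “painting with light” can be sketched in a few lines: place 2D point lights and accumulate their falloff over the canvas. This toy ignores occlusion and bounced light entirely and is purely illustrative:

```python
import numpy as np

def paint_with_light(h, w, lights):
    """Accumulate inverse-square falloff from 2D point lights.

    lights: list of (x, y, intensity) tuples in pixel coordinates.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    img = np.zeros((h, w))
    for lx, ly, power in lights:
        d2 = (xs - lx) ** 2 + (ys - ly) ** 2 + 1.0  # +1 avoids division by zero
        img += power / d2
    return np.clip(img, 0.0, 1.0)
```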

Project About Face — Powered by Adobe Sensei

The Face-aware Liquify tool in Photoshop automatically detects facial features and helps you adjust them to enhance a portrait or add creative character to a fun shot. Project About Face allows you to identify and undo these adjustments to revert to the original image.

Presenter: Richard Zhang
Internal Collaborators: Oliver Wang
External Collaborators: Sheng-Yu Wang, Andrew Owens, Alexei A. Efros (UC Berkeley)
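About Face detects the warp directly from the edited photo; this post doesn’t describe the mechanism. But the “undo” half has a simple core: if you have a per-pixel displacement field describing where each pixel was moved, reversing the edit is a resampling with that field. A nearest-neighbor sketch (the flow convention and clipping behavior here are my assumptions, not the actual system):

```python
import numpy as np

def unwarp(image, flow):
    """Undo a warp given the displacement field that produced it.

    image: (H, W) warped image
    flow:  (H, W, 2) displacement (dy, dx) that moved each source pixel
    Reads each warped pixel back from where the flow sent it
    (nearest-neighbor, clipped at the borders).
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]
```

In the research setting, the displacement field isn’t given — predicting it from the edited image alone is the interesting part.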

In addition, Adobe Research’s Zhe Lin contributed to Project All In and Matt Fisher contributed to Project Fantastic Fonts. Learn more about the conference at the MAX website.

By Meredith Alexander Kunz, Adobe Research, with content from The Adobe Blog
