Generative AI is changing how people create—and it’s revolutionizing how researchers build new creative technologies. So we asked David Tompkins, Senior Principal Scientist and Lab Director at Adobe Research, to give us a behind-the-scenes look at how research engineers are helping build a superhighway between generative AI research and Adobe’s products.
You’ve said that the move toward machine learning (ML) and AI in research is a paradigm shift. Can you tell us about what’s changing?
We’ve all seen some of the things generative AI can do when it comes to creating and editing images with words, rather than designing and editing pixel-by-pixel. Along with these new tools, there’s also a sea change in how we build software.
For decades, software researchers implemented things as algorithms—a set of commands that a program follows to do something. If you were editing an image in Photoshop, there was an algorithm that operated on the pixels and produced the output. Now, with this paradigm shift, we’ve replaced the algorithm’s step-by-step instructions with an ML model that’s been trained to produce a creative output, whether it’s filling in the space after you’ve removed something from a photo or creating an entirely new image. This is a sweeping change across Adobe products, and across the software industry in general.
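To make the contrast concrete, here is a minimal, hypothetical sketch (not Adobe’s actual code): a classic hand-written pixel operation next to the ML-style equivalent, where the “algorithm” lives in a trained model’s weights rather than in explicit instructions. The `model` object is a stand-in for any trained inpainting network.

```python
import numpy as np

def adjust_brightness(image: np.ndarray, amount: float) -> np.ndarray:
    """Classic approach: an explicit, step-by-step algorithm over the pixels."""
    return np.clip(image.astype(np.float32) + amount, 0, 255).astype(np.uint8)

def fill_removed_region(image: np.ndarray, mask: np.ndarray, model) -> np.ndarray:
    """ML approach: the step-by-step logic is replaced by a trained model.

    `model` is a placeholder for a trained inpainting network; its learned
    weights, not hand-written rules, decide how the missing region is filled.
    """
    return model.predict(image=image, mask=mask)
```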
In the research world, this change means that a much greater percentage of our portfolio is ML projects—and so we’re working differently. Five years ago, you’d see a lot of projects with an individual researcher, but now projects are fundamentally bigger, so we’re working in larger groups with much more collaboration among researchers and engineers. It’s like we’ve shifted from building automobiles to airplanes—there are a lot more moving parts when you build an ML model. For users, the benefit is much more powerful creative tools.
With all of these changes, what kinds of skills do researchers need now?
With ML and AI projects, handling data is an essential skill. In the early days, researchers were learning how to create and improve their data sets through experimentation. You could start with a small data set, and if you realized it wasn’t quite right, you’d process it and look at how the changes influenced your ML models. Now all of that has scaled up—instead of thousands of training assets in a data set, there are millions or billions, so you can’t look at them all or adjust the data set manually. Researchers need to know how to assess and improve data at scale, and it is essential to understand how to load training data efficiently. These skill sets are emerging alongside the paradigm shift, and researchers are sharing their techniques as they learn.
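As one illustration of what loading training data efficiently can mean in practice, here is a minimal sketch, assuming PyTorch and a hypothetical list of asset locations, of streaming a very large data set through parallel workers instead of loading it all into memory:

```python
import torch
from torch.utils.data import DataLoader, IterableDataset

class StreamingAssetDataset(IterableDataset):
    """Streams training assets one at a time instead of holding millions in memory."""

    def __init__(self, asset_urls):
        self.asset_urls = asset_urls  # hypothetical list of asset locations

    def __iter__(self):
        worker = torch.utils.data.get_worker_info()
        urls = self.asset_urls
        if worker is not None:
            # Shard the stream so each DataLoader worker reads a distinct slice.
            urls = urls[worker.id::worker.num_workers]
        for url in urls:
            sample = self.load_and_preprocess(url)
            if sample is not None:  # drop assets that fail basic quality checks
                yield sample

    def load_and_preprocess(self, url):
        # Placeholder: fetch the asset, decode it, filter/augment it, return a tensor.
        return torch.zeros(3, 256, 256)

# Parallel workers and batching keep the training loop fed without ever
# inspecting or transforming the whole data set by hand.
loader = DataLoader(StreamingAssetDataset(asset_urls=[]), batch_size=32, num_workers=8)
```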
Has generative AI changed the way you think about users’ needs when you’re imagining new tools?
Even though the technology has changed, and the way we develop new tools has changed, we still have the same things in mind when we think about Adobe customers and what they need. Our product teams work closely with users to understand what they do with our products—and what they want to do—and then we develop technology to make those things easier or faster.
We’re also focused on ethical AI, which means a lot of things, including making sure that our technology is helping people and giving them what they need, from better workflows to AI tools that become their creative co-pilots. It’s also important to consider safety-by-design, which means setting safety standards with our generative AI data sets at the beginning, and making sure we have measures in place all the way through to the outputs at the end.
You and your team recently worked on Project Stardust, which was one of the really exciting sneaks at Adobe MAX this past year. Stardust uses AI to enable a new kind of image editing. Can you tell us about the project?
About three years ago, two of our researchers, Jon Brandt and Scott Cohen, had the vision for Project Stardust. They wanted to reinvent image editing with machine learning and AI. This was back before the public emergence of the new generative AI models. The initial focus was to better understand the content in images so we could streamline time-consuming processes. The team started by building ML models that could detect objects, so that Stardust could do things like automatically segment an object so you can move it. As the project evolved, and new technology emerged, the team integrated generative techniques as well. So now you can do things like complete an object that’s hidden behind another object or automatically fill in the space when you remove an object. These were things we didn’t even imagine at the start of the project, but advances in our AI technology allowed us to build some really magical capabilities.
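A rough sketch of how an object-aware editing step like the ones described might be wired together is below. The `detector` and `inpainter` callables are generic stand-ins for trained segmentation and generative-fill models, not Stardust’s actual components.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DetectedObject:
    label: str
    mask: np.ndarray  # boolean mask, True where the object sits in the image

def remove_object(image: np.ndarray, target_label: str, detector, inpainter) -> np.ndarray:
    """Detect objects, pick the one to remove, then generatively fill the hole.

    Illustrative pipeline only: `detector` returns a list of DetectedObject,
    and `inpainter` is a trained generative-fill model.
    """
    objects = detector(image)
    target = next(o for o in objects if o.label == target_label)
    hole = image.copy()
    hole[target.mask] = 0          # cut the selected object out of the image
    return inpainter(hole, target.mask)  # the model fills the region plausibly
```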
You’ve mentioned that researchers are building faster generative AI models. In the short term, how do you think that will change what users can do?
If you think about Firefly, Adobe’s generative image technology, when you type in your text, it takes a few seconds to get a response. This is the industry standard at the moment. But imagine having your image pop up in half a second. Imagine it being so fast that it can keep up with you typing. So, you’re creating a big run-on sentence, like “I want a cat on a motorcycle with a top hat and candy canes all around and it’s snowing, no it’s raining…” And as you type, the image is truly morphing in front of your eyes. That’s something we’re really excited about.
What are some other trends you’re paying attention to right now?
There’s an area of machine learning called reinforcement learning that I find very interesting. I think it will become a very useful companion technology to generative AI because it gives you a way to learn based on experience. As you watch how a human uses a generative model, reinforcement learning allows you to understand that behavior and learn how to make suggestions, optimizations, and improvements that can enrich the overall experience.
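To make the idea concrete, here is a toy, hypothetical sketch: a simple bandit-style learner that watches which of a generative model’s suggestions a user accepts and gradually favors the options that worked. Real reinforcement-learning systems are far more sophisticated, but the learn-from-experience loop is the same.

```python
import random
from collections import defaultdict

class SuggestionLearner:
    """Toy reinforcement-style learner: reward the suggestions users accept."""

    def __init__(self, options, epsilon=0.1):
        self.options = options           # e.g. different prompt or parameter presets
        self.epsilon = epsilon           # how often to explore a random option
        self.value = defaultdict(float)  # running estimate of each option's reward
        self.count = defaultdict(int)

    def suggest(self):
        if random.random() < self.epsilon:  # occasionally explore something new
            return random.choice(self.options)
        return max(self.options, key=lambda o: self.value[o])  # otherwise exploit

    def observe(self, option, accepted: bool):
        """Update the estimate from user behavior; an accepted suggestion is reward 1."""
        reward = 1.0 if accepted else 0.0
        self.count[option] += 1
        self.value[option] += (reward - self.value[option]) / self.count[option]
```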
Inside the world of research engineering, I’m also paying attention to model distillation, which is about making generative models run faster. It saves a lot of money and it’s an essential part of making the generative models in our products more efficient. It’s something we’re excited to explore and understand.
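As a rough illustration of what distillation looks like in training code, here is a minimal sketch, assuming PyTorch and generic teacher/student networks, of the classic soft-target loss: a small, fast student model is trained to match a large, slow teacher. Distilling generative models involves additional techniques, but the core idea is the same.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """Knowledge distillation: push the student's softened outputs toward the teacher's."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between teacher and student, scaled by T^2 (standard practice).
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Training sketch: the teacher runs without gradients, and the student is
# optimized to reproduce its behavior at a fraction of the inference cost.
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits)
```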
Wondering what else is happening inside Adobe Research? Check out our latest news here.