Research Engineer Luis Figueroa teaches computers how to understand images so that he can help build new kinds of image editing tools. He first came to Adobe Research as a GEM Fellow back in 2019, and after completing his graduate work and two summers at Adobe, he joined the team full-time.
We talked to Figueroa about where his research interests began, the Adobe tools he’s helped shape, and his double life as a musician.
How did you first get interested in computer vision?
I actually fell into the field accidentally. It all started when I was an intern at the NASA Jet Propulsion Laboratory the summer after my sophomore year in college. NASA has a lot of robots with infrared cameras, so we were working on automatically detecting things in those images, and reconstructing the parts of images that were hidden by things like clouds.
At the time, I was just getting into computer science, and I found it so cool that computers could perceive things in images. It really sparked a long-term interest.
Now that you’re a full-time research engineer in the field, what are you most excited about?
Right now, there’s an explosion of research around generative AI, but my interest is more in the analysis of images. If you have an image, how much can a computer tell you about it? For example, what can a computer infer about an interaction between people based on the context of an image? What makes a picture funny? What’s melancholy? Or happy? These are the types of big questions that really excite me.
Can you tell us about the work you’re doing at Adobe Research?
My first Adobe Research internship project was to build a new model for distractor detection and removal. The goal is to help users identify things they might want to remove from an image, and then make it easy to take them out.
We started by using machine learning and lots of data to build a computer vision model that could learn rules and patterns for identifying distractions. For example, the model learned that if there are people in the center of the image, they’re very likely to be the subjects of the photo. The people behind them might be distractions. The same might go for people who are closer to the camera but looking away.
We also have to consider the context of a scene. If you take a photo of the Eiffel Tower, even if there are a lot of people, the main subject of your photo is likely the tower. So our model needed to understand that contextual difference.
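To make the kinds of cues Figueroa describes concrete, here is a toy sketch of distractor-style heuristics — centered subjects, background people, people looking away, and landmark context. This is purely illustrative and is not Adobe’s model; the `DetectedObject` class, the `distractor_score` function, and all weights are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str            # e.g. "person", "landmark" (hypothetical labels)
    center_x: float       # object center, normalized to [0, 1]
    center_y: float
    area: float           # fraction of the image the object covers
    facing_camera: bool = True

def distractor_score(obj: DetectedObject, has_landmark: bool = False) -> float:
    """Toy heuristic: higher score = more likely a distraction."""
    # Subjects tend to sit near the image center, so distance from
    # the center raises the distractor score.
    dist = ((obj.center_x - 0.5) ** 2 + (obj.center_y - 0.5) ** 2) ** 0.5
    score = dist
    # Small objects are more likely incidental than large ones.
    score += (1.0 - min(obj.area * 10, 1.0)) * 0.5
    # People looking away from the camera are less likely subjects.
    if obj.label == "person" and not obj.facing_camera:
        score += 0.3
    # Scene context: if a landmark dominates (e.g. the Eiffel Tower),
    # people in the frame are probably secondary.
    if has_landmark and obj.label == "person":
        score += 0.4
    return score

# A centered, large, camera-facing person scores low (likely the subject);
# a small person at the edge looking away scores much higher.
subject = DetectedObject("person", 0.5, 0.5, area=0.2)
bystander = DetectedObject("person", 0.9, 0.9, area=0.02, facing_camera=False)
```

A real learned model would discover such patterns from data rather than hand-coded rules, but the sketch shows why both local cues and scene context matter.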
During my second internship, I focused on identifying shadows so we can allow users to move—or remove—an object and its shadow together. Once you move something in an image, you can apply hole-filling technology that synthesizes pixels to fill in the scene. But shadows are a special case because you can often see what’s underneath. So we developed technology that preserves and restores the textures under a shadow.
When I joined full-time in 2021, I picked up some of this work, polishing and optimizing it.
How is your research impacting Adobe products and users?
So far, most of my work is on Project Stardust, a new object-centric, AI-powered image editing app. While other image-editing systems focus on pixels or global changes, Stardust is about editing at the level of objects in a scene.
So, while you could remove a distraction in Photoshop, the process requires a lot of expertise. In Stardust, we automatically suggest things that might be distractions, and you can remove them in seconds with the click of a button. Project Stardust was recently featured as an Adobe MAX Sneak.
You first came to Adobe as a GEM Fellow. Can you tell us about that experience?
The GEM community is super supportive of underrepresented minorities in STEM and research. And it was through GEM that Adobe reached out to me about a research position.
My GEM Fellowship was life-changing. At Adobe, I got to work with an incredible set of mentors who were very dedicated to my growth. My projects here shaped my master’s research and instilled in me a curiosity and passion for the field. By the time I was done with school, I couldn’t imagine myself being anywhere other than Adobe Research.
On top of everything else, you’re also a musician! Can you tell us about your music, and how it influences your research?
I love to create music, and I think the process is actually similar to engineering research. With music, there are established chord progressions and keys that, based on music theory, sound good. But to create interesting music, you need to explore new sounds and ideas. And that involves a lot of trial and error.
The same is true in research. You have established protocols, but sometimes the experiment isn’t listed in the research guidebook, so you have to be creative. It’s all about the search and trying new ideas.
What advice would you give to somebody who’s considering joining Adobe Research?
If you’re a fundamentally curious person who likes to learn and wants to be in an environment that pushes you and challenges you every day, and in every way possible, then Adobe Research is the place to be. I don’t think there’s been a day when I haven’t learned something new here.