
On an exceedingly cold day 30 years ago, Principal Scientist John Nelson decided to leave his job as a Denali mountain guide and head to something warmer.
With a background in computer science and fluency in Japanese, Nelson joined Adobe’s After Effects team, first working on localization, then porting tools from Mac to Windows, eventually bringing in many new features to meet users’ needs, and, ultimately, making his way to a role in Adobe Research.
Now Nelson is thinking about how the lessons he learned from the mountain shaped his approach to tech. He also talked with us about the magic that happens when research and product teams find the perfect fit for a new feature, how upcoming AI tools are giving video editors more control, and whether virtual reality might be ready for its second act.
You’ve said that your time as a mountain guide helped shape your work at Adobe. Can you tell us more?
Stepping out of your tent into total darkness, facing gale-force winds, and starting up a steep frozen slope develops mental toughness and tenacity. On the mountain, you face objective hazards, and sometimes you have to change your plan. That's how I see software development. You start out walking up the hill, and there are things that are going to stop you. The trick is not to be stopped and to keep moving forward, understanding that eventually you'll solve all the problems.
I think not being very risk-averse has served me well at Adobe—and eventually led me into research. You have to be fearless and confident to believe that you can continue to move things forward. For example, for my first experience shipping a product with AI, we used Senior Research Scientist Joon-Young Lee’s model to add machine learning-based propagation into the After Effects Roto Brush tool. There were challenges, but it was essential to push through so we could transform a tedious, time-consuming task into almost automatic masking for users. Now we’re on the fourth version of the AI segmentation propagation model, and we’ve recently added object selection for video to Premiere Pro.
I get to do this kind of work because my fellow Adobe Researchers are willing to be on the bleeding edge of technology.
You dedicate a lot of time to getting to know Adobe users and their needs. Why is that so important?
In my early days at Adobe, I had great mentors who made sure that I got a lot of face time with users, from going on customer visits to reading user blogs and forums. All of this helped me know what users really wanted, which is so important.
This was especially critical when I was coming up through the ranks on the After Effects team. Thirty years ago, we didn’t have designers and our product managers weren’t as deeply involved as they are now. So we had to find our own way. We knew that whatever new feature we were taking on, it would be better if all the people involved really knew the customer. And if there was ever a dispute about what we should be doing, the customer’s opinion was always the tiebreaker.
Even as the process has changed, being in touch with the customer is still the most important part of developing new features.

You’ve been on a product team and now you’re working with Adobe Research, so you have insights into how to connect research with its practical applications. What have you learned?
In my experience, researchers are, as I mentioned, the least risk-averse. They plow in without worrying about whether things break. But product teams are very risk-averse. They want to make sure that things are solid—and for good reason.
So, in a role like mine, you have to float above all of it. You need to understand the architecture of the products and know what research is becoming available. You have to understand users' needs and what researchers might be thinking about—because researchers might already have the solution to a user problem they don't even know exists.
And you want to make sure you have a vision of what the product managers want and what the architects and engineers on the product team are experiencing because they have to maintain the code.
To do it all well, you need to understand the whole framework—and that’s really fun and exciting.
What’s one of your favorite examples of matching a research discovery with a user need?
The first one that comes to mind is Content-Aware Fill for video. I was on the After Effects team when I saw Research Engineer Geoff Oxholm present Project Cloak, an Adobe MAX Sneak showcasing amazing new work on filling in holes in videos. So I came up with a workflow that would allow a user to poke a hole in their video and then use the technology to easily fill that hole. Now there are generative models that can do an even better job, but at the time this was something that no one else could do.
When you’re doing something no one else in the industry can do, that’s the magic.
How do you think the relationship between researchers and product teams is changing these days?
In the last five years, there’s been a lot of advancement in shared technology libraries. It used to be that researchers would build a prototype and then every product team would have to figure out how to get it into the product. Now, we have repositories of technology that bring researchers’ work to the doorstep of product. It’s really helped to grease the wheels of tech transfer.
Looking ahead, I expect research breakthroughs will continue to accelerate. But I think we’re engineering in such a way that we can plug in different technologies more easily now.
What are you working on now?
Right now, I’m working on making AI-based video editing more controllable. We’re bringing technologies from imaging and video and 3D together to start allowing workflows that let artists do new and unique things—all while staying in control of their generative tools. I’m excited about what’s happening there.

Looking ahead, what are some other technologies you’re following?
I don't think we've seen immersive technology have its day yet. I think VR and AR are going to come back—I see movies and gaming emerging where a creative story unfolds in a context, and the viewer can control aspects of the story to suit their preferences. For example, maybe you like a particular style of cinematography, and you want to apply it. I think it's going to be a kind of mashup, and I think it's going to be really interesting.
Outside of your work at Adobe, you launched a non-profit inspired by your experience as a mountain guide. Can you tell us more?
Yes, I founded Y.E.T.I. to provide outdoor programs for disadvantaged youth. The physical challenges that Y.E.T.I. facilitates help students develop mental toughness, which can lead to better academic outcomes in the classroom.

Wondering what else is happening inside Adobe Research? Check out our latest news here.