Every year, MIT Technology Review creates a list of the top 35 innovators under 35 whose work is shaping the future of science and technology. Adobe is proud to announce that Senior Research Scientist Richard Zhang of Adobe Research was selected for this year’s list.
Zhang’s groundbreaking work focuses on how humans perceive visual images, and he uses this research to build more powerful creative generative AI models that empower people to express themselves visually. Some of the latest generative AI tools in Adobe’s industry-leading products – Adobe Firefly, Photoshop, and Lightroom – include Zhang’s contributions. The standalone version of Firefly enables users to create images, design text and re-imagine objects’ colors using simple text prompts.
Beyond visual creativity, Zhang is working to create a healthy ecosystem for generative AI. He has developed forensic tools that allow anyone to detect fake — and potentially malicious — images, along with tools that increase transparency about the data behind generated images. These kinds of tools are vital to the ethical use of AI systems, and Zhang’s work is already making a difference in detecting generated imagery in Adobe Stock.
Zhang joined Adobe Research full-time in 2018 after a 2017 internship. He earned his PhD in Electrical Engineering and Computer Sciences at the University of California, Berkeley, as well as Master’s and Bachelor of Science degrees at Cornell University. His co-authored research papers have been published at numerous top-tier conferences including SIGGRAPH, CVPR, ICCV, ECCV and NeurIPS.
Becoming one of MIT Technology Review’s 35 Innovators Under 35
MIT Technology Review’s Innovators Under 35 is an annual list of people who are changing the future of technology. The publication begins with more than 500 nominations, narrows the field to 100 promising candidates, and turns the list over to a panel of expert judges who select the final 35 innovators.
Past honorees include industry-defining tech executives, brilliant professors, and influential industry researchers—including Adobe Research Principal Scientist Aaron Hertzmann and former Adobe Research Scientist and Fellow Jovan Popovic. “I was surprised, honored, and proud to be part of such a prestigious group,” says Zhang. “And I’m so grateful to my labmates and mentors, who are world-class researchers across diverse areas of visual computing – and especially to the Adobe Research interns and university collaborators I work with.”
From human perception to cutting-edge generative AI tools
Zhang’s interest in generative AI first took hold in graduate school, at a time when the field of deep learning was just starting to take off. Back then, the technology was mostly being used to generate labels from images — for example, to allow a computer to identify an image of a cat and say, ‘this is a cat.’ Zhang was interested in turning the process around: “We wanted to reverse it and see if AI could generate an image, but we quickly realized that this is much more difficult. There are so many ways you can go wrong when you’re trying to create a whole image.”
This insight led Zhang to dig deeper into human perception, studying how humans see and understand images with the aim of helping computer models more closely approximate human aesthetics. From there, Zhang and his colleagues began working on applications of their technology. “I knew we were onto something when we published a paper on colorization. Someone used the code to make a bot on Reddit that let people colorize their family photos. It got really popular, and we suddenly had this fun, wider influence in the community. This spark helped kick things off.”

Since joining Adobe Research, Zhang’s work has been vital to a wide range of AI-powered creative tools. His efforts helped pioneer generative AI for Adobe products beginning in 2019, including contributions to Colorization in Photoshop Elements and Photoshop Neural Filters; Landscape Mixer in Photoshop Neural Filters; the award-winning Enhance Super Resolution feature in Lightroom; and higher resolution/upsampling for Adobe Firefly images.
Detecting images that can fool us — and making AI more transparent
Increasingly powerful generative AI tools are enabling people to create whatever they can imagine — a huge leap for democratizing creativity. As this technology becomes widespread, Zhang also wants to make sure people can tell which images are real and which aren’t.
“There’s a concern that we’ll have fake, malicious imagery all around us,” says Zhang. “We anticipated this back in 2019. So we’ve been working on tools to help democratize forensics, so anyone will be able to tell if an image is real or synthesized – and understand the sourcing of images.”
Zhang is working to advance users’ ability to trace generated or manipulated images. Until recently, detecting altered images required significant expertise that was largely confined to government agencies and research labs. But Zhang and his team have been developing a back-end forensics tool that’s easily accessible and can detect images edited with conventional tools or created with AI.
“We hope the tools we’re developing will help people better understand the content they’re consuming,” says Zhang.
His work is already making a difference for Adobe customers. Adobe Stock currently detects generated imagery using a technology Zhang helped develop, enabling users to accurately identify work made with generative AI.
In addition to image forensics, Zhang hopes that understanding more about how images are generated can lead to more control for human artists. In work currently in the research stage, Zhang is studying how specific training data impacts a synthesized image. “This way, we’ll know which images made a new image possible,” says Zhang. “This could help us compensate contributors. And there’s also the field of machine unlearning — which means that we could give people the option to remove their data from a model.”
Zhang’s work complements technologies developed by the Content Authenticity Initiative, which Adobe and partners launched to develop open standards and tools that inform users about the provenance of media—where images come from, and how they’ve been altered. When provenance data is not available, Zhang and his colleagues’ work to detect manipulated imagery could be vital.
A vision for the future of generative AI
With this huge honor under his belt, Zhang is optimistic about the future of generative AI, and its potential to empower people to create and comprehend imagery.
“I hope we can build a healthy ecosystem around AI that serves everyone,” Zhang says. “We want generative AI to be accessible and controllable for people of all different skill levels. We also want to make sure these ecosystems are built on training data in a way that’s fair and equitable for contributors. And ultimately we want the general public to understand what they’re consuming.”
Learn more about Adobe’s Research organization by visiting our website, or follow us on X or Facebook!