Restoring Trust and Empowering Artists with Content Credentials

March 19, 2025

Tags: AI & Machine Learning, Careers, Computer Vision, Imaging & Video, Researcher Spotlights

Adobe Research Scientist Shruti Agarwal is on a mission to help fight misinformation in the digital world and give artists credit for their work. As a PhD student, she studied methods for detecting deepfakes. Now, as part of Adobe Research’s Content Authenticity team, she’s building tools that allow people to tap into the details of how a digital image was created and edited—so they can truly understand what they’re seeing.

Your research focuses on image forensics—especially understanding how images have been manipulated. How did you first get interested in this work?

As an undergrad, I really liked image analysis algorithms. Let’s say we wanted to find a cat or dog in an image. We’d write an algorithm to search for it, and then we’d visualize our results. From there, I was good at diagnosing what was wrong with an algorithm and going back to fix it. I just loved that iterative process.

When I joined my PhD program at Dartmouth College, I became interested in multimedia forensics through my advisor, Professor Hany Farid, who’s known as the father of digital forensics. From then on—at Dartmouth and then at UC Berkeley—instead of segmenting cats and dogs, I was segmenting manipulated or edited regions in images and analyzing them for forensics.

Can you tell us more about your PhD research into detecting deepfakes?

My research focused on passive forensics, so we would begin with an image or a video we didn’t know anything about. Then we’d analyze it by looking at the pixels or motion to figure out whether it had been manipulated and what had been changed.

At the time, we were seeing more and more instances of people’s faces and speech being edited, and it was becoming a concern. To address this, I started working on detecting deepfakes of politicians and world leaders by analyzing their soft biometric patterns, such as their characteristic facial expressions and head movements, to see whether they had been modified in videos.

For example, let’s say that when a particular person is happy, he gets dimples on his face. Or that he always makes a particular head movement when he’s saying a certain word. We were recognizing these cues, which aren’t distinctive enough to identify a person among millions of people, but which are distinctive enough to tell the real person from a fake version of them.

We also analyzed things like phonemes. When you say the sounds P, B, or M, you really have to close your lips. But most deepfakes had a hard time closing the lips for these phonemes, so that was an indicator that a video had been manipulated.
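To make that cue concrete, here is a minimal sketch of the idea, not the actual detector from that research: it assumes you already have a per-frame mouth-openness score from a face-landmark tracker and the timestamps of P, B, and M sounds from the audio, and it simply flags clips where the lips rarely close when they should.

```python
# Illustrative sketch of the phoneme/lip-closure cue described above.
# Hypothetical inputs: mouth_openness is a per-frame score in [0, 1] from a
# face-landmark tracker; plosive_times lists the times (in seconds) at which
# the audio contains P, B, or M sounds. Thresholds are placeholders.

def lip_closure_rate(mouth_openness, fps, plosive_times, closed_threshold=0.15):
    """Fraction of plosive sounds during which the lips actually close."""
    closed = 0
    for t in plosive_times:
        frame = int(round(t * fps))
        if 0 <= frame < len(mouth_openness) and mouth_openness[frame] < closed_threshold:
            closed += 1
    return closed / len(plosive_times) if plosive_times else 1.0

def looks_manipulated(mouth_openness, fps, plosive_times, min_rate=0.5):
    """Flag a clip if the lips close for fewer than half of the P/B/M sounds."""
    return lip_closure_rate(mouth_openness, fps, plosive_times) < min_rate

# Toy usage: the mouth stays open during every plosive, so the clip is flagged.
openness = [0.6, 0.7, 0.65, 0.7, 0.6, 0.55]   # one value per frame
print(looks_manipulated(openness, fps=2, plosive_times=[0.5, 1.5, 2.5]))  # True
```

A real detector would combine many such soft-biometric cues and learn its thresholds from data, but the intuition is the same: look for mismatches between what the audio says and what the mouth does.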

How are you bringing these skills to your work at Adobe Research?

Here at Adobe Research, my work is still related to the broader problem of misinformation. But the approach is different. Instead of calling an image “real” or “fake,” or being the arbiter of truth, we’re giving users the ability to verify and learn more about images for themselves.

For example, as a consumer, I really want to know which organization is bringing me an image and whether it’s a camera image or AI generated. If it’s a camera image, is it edited? If it’s AI, is it completely AI, or was AI just used to remove a little speck in the image without changing its meaning? We know that most images are edited in some way, and that doesn’t mean they’re necessarily fake or real—but it’s useful for users to know what’s been changed.

To give people that transparency, we’ve created “nutrition labels” for digital content, which we call Content Credentials. Content Credentials allow creators to embed information into an image, including how the image was made and edited. Then users can simply click the CR icon on an image to learn more.

Learn more about applying and inspecting Content Credentials here.

We also have the Adobe Content Authenticity web app, a tool that makes it easy for creators to attach Content Credentials to their work. Our Chrome browser plugin allows people to see credentials on websites or social platforms that don’t offer a built-in Content Credentials display.

How do you hope people will use Content Credentials?

We’re working with the Content Authenticity Initiative, a cross-industry community that Adobe founded in 2019, with more than 4,000 members across civil society, media, and technology, to promote media transparency and the adoption of Content Credentials.

In an ideal world, as more people use Content Credentials, I imagine you’ll open a news article and see a pop-up box with information about an image. That will give users peace of mind because they’ll actually know where an image is coming from.

From that pop-up, there could be a link to learn more, so you could go to a detailed page if you’re really interested in how the image was edited, who edited it, and so forth. Maybe there would even be an AI assistant that could tell you the whole story behind the image and let you ask questions.

For now, one of the most important things for Content Credentials is adoption. The more people see them, the more they’ll get used to them, and the faster this idea of transparency will become part of our daily workflows.

What have you been working on recently?

Over the last two years, I’ve been working on making Content Credentials durable. Content Credentials are embedded as metadata in digital content, but metadata is easily stripped away by many platforms and websites. So we’re providing a solution with watermarking and fingerprinting technology.

When we invisibly watermark an image, we can detect that watermark—even without a database lookup—and map it directly to the metadata that has been stripped. With this method, we can increase the use of Content Credentials, even on websites that strip metadata.
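As a rough illustration of how an invisible watermark can carry provenance past metadata stripping, here is a generic sketch; it is not Adobe’s durable Content Credentials implementation, and all of the names and numbers in it are made up. It hides a small identifier in the least-significant bits of pseudo-randomly chosen pixels and reads it back later. In this toy version the recovered identifier points to a stored provenance record, whereas the approach described above can map the watermark to the stripped metadata without a separate database lookup.

```python
# Generic invisible-watermark sketch (not Adobe's method): hide a small
# identifier in an image's least-significant bits, recover it later, and use
# it to restore the provenance record even if the file's metadata is stripped.
import numpy as np

def embed_id(pixels, payload_bits, key=42):
    """Write payload_bits into the LSB of pseudo-randomly chosen pixel values."""
    marked = pixels.copy()
    flat = marked.reshape(-1)                      # flat view into the copy
    rng = np.random.default_rng(key)
    positions = rng.choice(flat.size, size=len(payload_bits), replace=False)
    flat[positions] = (flat[positions] & np.uint8(0xFE)) | np.array(payload_bits, dtype=np.uint8)
    return marked

def extract_id(pixels, num_bits, key=42):
    """Read the hidden bits back using the same key."""
    flat = pixels.reshape(-1)
    rng = np.random.default_rng(key)
    positions = rng.choice(flat.size, size=num_bits, replace=False)
    return [int(b) for b in flat[positions] & 1]

# Toy usage: the decoded identifier indexes a provenance record stored elsewhere.
provenance = {(1, 0, 1, 1): {"creator": "example artist", "tool": "example editor"}}
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed_id(image, [1, 0, 1, 1])
print(provenance[tuple(extract_id(marked, 4))])
```

A production watermark has to survive compression, resizing, and screenshots, which a naive least-significant-bit scheme does not, but the flow is the same: detect the mark, recover an identifier, and restore the credentials.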

My research is also helping Content Credentials stay durable as they travel beyond the internet and into the physical world. For example, I presented a MAX Sneak for Project Know How, a technology that preserves Content Credentials for images printed on physical objects.

In the future, how could provenance be used beyond content authenticity?

One challenge artists face is getting credit when their work contributes to AI-generated images. And we’re finding that invisible watermarks can survive the generation process. So let’s say an artist has images in a training data set and they all carry a certain watermark. Any image that’s generated using that artist’s style will also carry that watermark. This is very important because it means artists would be able to receive credit whenever an AI-generated image has been inspired by their work. That credit could mean recognition in the community, or monetary compensation. Content Credentials also include a checkbox for artists who do not want AI models to train on their content.

You’re a boomerang—you worked as an engineer at Adobe in India, left to get your PhD, and then came back to join Adobe Research in San Jose. Can you tell us more about how you decided to follow this path?

Before I got my PhD, I was an engineer working on the Touch workspace in Adobe Illustrator, which was a very big thing at the time. I got to come to Adobe Summit, where I met Adobe Researchers showcasing their work. I clearly remember that all of them had PhDs, and they were working on really cool problems. I thought I should be doing a PhD as well.

While I was doing my postdoc at MIT, I reconnected with Adobe and found out about the Content Authenticity Initiative. It was a perfect fit for me because they were tackling the same problems we fight in image forensics, but for real-world applications.

Now that I’m at Adobe Research, I’m so excited to help fight misinformation—I think it’s one of the most important problems our generation faces. My hope is that Content Credentials become so ubiquitous that content transparency becomes part of our daily lives.

Wondering what else is happening inside Adobe Research? Check out our latest news here.
