
As a Senior Principal Scientist at Adobe Research, John Collomosse collaborates with the Content Authenticity Initiative (CAI), a team working to restore trust through digital content provenance. This spring, Collomosse brought his expertise to the prestigious Royal Society’s Pairing Scheme in Westminster, where leading UK scientists and engineers spent a week in Parliament sharing their technical expertise directly with key policymakers.
At the meeting, Collomosse was paired with officials from the Department for Science, Innovation, and Technology (DSIT) who are working to develop the UK government’s policy and technical capabilities for tackling misinformation and disinformation online. He met with and presented to senior civil servants, UK Chief Scientific Advisers, and members of the House of Commons and the House of Lords, communicating his team’s research in content provenance.
“The Royal Society has a motto: ‘Nullius in verba,’ which translates to ‘Take no one’s word for it,’” says Collomosse. “It reflects the Society’s longstanding commitment to science and reasoning grounded in trusted evidence. What better setting to share cutting-edge research with policymakers, especially when the topic is disinformation and restoring trust in digital information—one of the most pressing challenges of our time?”
Policymakers’ urgency around disinformation is growing, especially as AI makes it easier to manipulate and fabricate misleading images and videos. And, as Collomosse explains, technology experts have a vital role to play in helping government officials address the issue.
“There was a lot of discussion around how to counter deepfakes. But it’s impossible for a computer program to detect if something is synthetic and aiming to deceive someone. Instead, we shared the approach that both Adobe and the CAI take. We focus on tracing the provenance of content—who made it, how it was made, and its history—and disclosing that to users so they can make informed decisions about which content to trust,” explains Collomosse.
The tools Collomosse’s team has helped develop, which combine content provenance, watermarking, and distributed ledgers, give users a verifiable history for digital content. That history makes clear who created the content, when and where it was produced, and what edits have been made along the way.
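To illustrate the idea of a verifiable edit history, the sketch below chains each entry in a record to the hash of the one before it, so any later tampering is detectable. This is a simplified illustration only, not the actual Content Credentials mechanism: real C2PA manifests are signed with X.509 certificates, and the entry fields used here are hypothetical.

```python
# Illustrative sketch only: real Content Credentials (C2PA) use signed
# manifests with X.509 certificates, not this simplified hash chain.
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    """Hash an edit-history entry together with the previous hash,
    chaining entries so tampering anywhere changes all later hashes."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_history(entries):
    prev, chain = "", []
    for entry in entries:
        prev = entry_hash(entry, prev)
        chain.append({"entry": entry, "hash": prev})
    return chain

def verify_history(chain) -> bool:
    prev = ""
    for link in chain:
        if entry_hash(link["entry"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True

# Hypothetical history: who created the content and what edits followed.
history = build_history([
    {"who": "Alice", "when": "2025-04-01T10:00Z", "action": "created"},
    {"who": "Alice", "when": "2025-04-02T09:30Z", "action": "cropped"},
])
assert verify_history(history)
history[0]["entry"]["who"] = "Mallory"  # tamper with the record
assert not verify_history(history)      # tampering is now detectable
```

The chaining is what makes the history trustworthy as a whole: rewriting any single step invalidates every hash that follows it.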
Content provenance enables creators to add secure attribution to their work, helps people understand what they see online, and can also help address another key issue facing policymakers in the US and Europe: how to protect copyright and attribution while allowing creatives to decide whether their content can be used to train AI models.

How Adobe is making Content Credentials accessible and stickier—and protecting artists’ work online
As part of its mission to restore digital trust and transparency, Adobe established the Content Authenticity Initiative in 2019. Since then, the community has grown to over 4,500 members who advocate for Content Credentials, the open technical standard, developed by the C2PA (Coalition for Content Provenance and Authenticity), that enables platforms to communicate the provenance of digital media.
As part of this work, Adobe launched a public beta of the free Adobe Content Authenticity web app at Adobe MAX in London. The app enables anyone, from professional creators to everyday users, to digitally sign their content and add verified identity through LinkedIn. This helps audiences distinguish authentic content from manipulated media, and it makes it easier for artists to claim attribution. The app also lets artists specify how their work can be used, including whether it can be used for AI training.
As Collomosse explains, “The heart of the app is durable Content Credentials. They are built on an open standard for sharing provenance information, making that information more ‘sticky,’ meaning Credentials stay with an image or video everywhere it travels. This matters because social media platforms often strip away metadata.”
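The durability Collomosse describes can be sketched as a lookup pattern: credentials are stored against a fingerprint of the content itself, so they can be recovered even after a platform strips the embedded metadata. This is a simplified, hypothetical sketch: real durable Content Credentials rely on robust watermarks and perceptual fingerprints that survive re-encoding and edits, whereas the plain SHA-256 used here matches only byte-identical content, and the registry dictionary stands in for a cloud service.

```python
# Illustrative sketch of the lookup pattern behind durable Content
# Credentials. Real systems use robust watermarks and perceptual
# fingerprints that survive re-encoding; a plain SHA-256 (used here
# for simplicity) matches only byte-identical content.
import hashlib

registry = {}  # fingerprint -> provenance record (stand-in for a cloud service)

def fingerprint(pixels: bytes) -> str:
    return hashlib.sha256(pixels).hexdigest()

def register(pixels: bytes, credentials: dict) -> None:
    registry[fingerprint(pixels)] = credentials

def recover(pixels: bytes):
    """Even if a platform strips the embedded metadata, the content
    itself can be matched back to the stored credentials."""
    return registry.get(fingerprint(pixels))

image = b"\x89PNG...pixel-data"  # stand-in for actual image content
register(image, {"creator": "Alice", "ai_training": "not permitted"})

# Metadata is stripped in transit, but the content is unchanged:
assert recover(image) == {"creator": "Alice", "ai_training": "not permitted"}
```

The design point is that the binding between content and credentials does not live only in metadata, which platforms routinely discard, but can be re-established from the content itself.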
Why are Content Credentials on social media so important? As Collomosse puts it, “First, a lot of disinformation gets circulated on social platforms. Second, creatives share and promote their content on social media, but they want their information to stay with it, including their authorship and consent, for example whether or not they permit AI training.”
Bringing science and policy together in the age of AI
The Royal Society’s Pairing Scheme exemplifies the importance of dialogue between scientists and policymakers. “As questions around disinformation, generative AI, and trust in media continue to grow in public importance, it’s more crucial than ever that experts work together to build evidence-based, future-ready responses,” Collomosse explains.
“Disinformation is a major societal challenge, and technology is only part of the answer to what is fundamentally a sociotechnical problem—we must continue to educate people about the tools we innovate, and work with governments to help build awareness and policy around them.”
Attending the Royal Society’s Pairing Scheme has Collomosse thinking about how to strengthen the bridge between science and policy.
“Many AI researchers are technical communicators, but being able to clearly communicate science in terms of its societal benefits is also important. Rather than explaining how the technology works, we can focus on explaining the art of the possible: painting a picture of what’s possible today and what will be possible tomorrow to help inform policy,” says Collomosse. “The Royal Society Pairing Scheme was a fantastic opportunity to have these kinds of conversations and help policymakers in Westminster to connect with the work. As researchers, I think we could be having more of those conversations.”
Wondering what else is happening inside Adobe Research? Check out our latest news here.