The News: On January 26, Andy Parsons, Senior Director for the Content Authenticity Initiative (CAI) at Adobe, published a blog post reflecting on 2023, a “year of significant implementation and partnership momentum we’ve seen with content credentials across both the CAI and the foundational open standards consortium the Coalition for Content Provenance and Authenticity (C2PA). Our ongoing collaboration with leaders across these communities points to a shared sense of urgency—a collective commitment to enhance trust in digital content, particularly in the generative era with Content Credentials—a ‘nutrition label’ for digital content.”
Here are some highlights:
- 2,000 members across platforms and technologies, including cameras, smartphones, software, social media companies, news organizations, policymakers, and content creators.
- Content Credentials are now embedded in a range of Adobe products, including Photoshop, Lightroom, Illustrator, Express, Stock, and Behance. Content Credentials are also available for select generative AI features, including Generative Fill and text-to-image in Adobe Firefly.
- Sony committed to incorporating Content Credentials into its new Alpha 9 III line of cameras and Sony’s Alpha 1 and Alpha 7S III models via firmware updates. The company also collaborated with the Associated Press to conduct successful field-testing of this feature with photojournalists to provide authenticity throughout the news reporting process — from the point of capture through to editing and ultimately publishing.
- Microsoft introduced its use of Content Credentials to label all AI-generated images created with Bing Image Creator. Microsoft is also working to roll out Content Credentials capabilities in its graphic design application Microsoft Designer. More recently, Microsoft has also committed to helping candidates and campaigns maintain greater control over their content and likeness through its launch of Content Credentials as a service.
- In October 2023, Qualcomm announced its latest Snapdragon 8 Gen 3 mobile platform, which works with Truepic to support Content Credentials in camera systems, based on the global C2PA standard format.
You can read Parsons’ CAI update blog post here.
Interview with Andy Parsons
I spoke with Andy Parsons to explore content authenticity and traceability further. Efforts to identify the origin or provenance of digital content have come to the fore as we enter national elections in over 40 countries in 2024. Deepfakes and other forms of disinformation that spring from generative AI are driving all sorts of organizations to take action. Are certain technical solutions rising to the top? What are the real drivers for content traceability and transparency? Will any of this work? Adobe has invested significantly in content authenticity. Why is that?
Here is some of my conversation with Parsons and some of my takeaways.
Q: Where are we today in terms of the solutions needed for content transparency?
Parsons: There are three approaches that make up Content Credentials, and it will likely take all of them to have an effective and secure solution.
First, there’s cryptographically signed (tamper-evident) metadata, which allows content creators to add extra information about themselves and their creative process directly to their content at export or download. We (Adobe) feel this is a must-have. We also believe it has to be standardized or, at the very least, interoperable. Preferably, there would be only one way to do it. Standardization is critical if there is going to be universal adoption of the practice.
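To make the tamper-evident idea concrete, here is a minimal sketch of signing metadata so it is cryptographically bound to the asset it describes. This is a toy illustration only: it uses a symmetric HMAC from the Python standard library, whereas the actual C2PA specification uses asymmetric signatures with X.509 certificate chains, and all names here (`sign_metadata`, `verify`, the demo key) are invented for the example.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing credential

def sign_metadata(asset_bytes: bytes, metadata: dict) -> dict:
    """Bind creator metadata to the asset by signing both together."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(asset_bytes).hexdigest()
    sig = hmac.new(SECRET_KEY, payload + digest.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "asset_sha256": digest, "signature": sig}

def verify(asset_bytes: bytes, claim: dict) -> bool:
    """Any change to the asset or the metadata invalidates the signature."""
    payload = json.dumps(claim["metadata"], sort_keys=True).encode()
    digest = hashlib.sha256(asset_bytes).hexdigest()
    expected = hmac.new(SECRET_KEY, payload + digest.encode(), hashlib.sha256).hexdigest()
    return digest == claim["asset_sha256"] and hmac.compare_digest(expected, claim["signature"])

image = b"raw image bytes..."
claim = sign_metadata(image, {"creator": "Jane Doe", "tool": "Photoshop"})
assert verify(image, claim)                # untouched asset: verification passes
assert not verify(image + b"edit", claim)  # tampered asset: verification fails
```

The point of the sketch is the "tamper-evident" property Parsons describes: the signature covers both the metadata and a hash of the pixels, so editing either one is detectable.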
A second approach is fingerprinting. One of the challenges with embedding credentials in metadata is that metadata can be stripped away from the asset. Content fingerprinting is part of the current Content Credentials specification. My colleague, John Collomosse (Principal Research Scientist at Adobe Research) said this about it in a blog post: “If there’s an image that’s out there that has gone through our pipeline but has had that metadata stripped away, we can fingerprint that image based on its pixels and match it back to Adobe’s cloud, where an authoritative copy of that provenance information has been stored at the time it was signed. Then we can match it and display that provenance information. The content credentials ‘stick to’ the image, no matter which platforms or tool chains it passes through. It’s an opt-in process to use the system.”
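The matching Collomosse describes can be sketched with a simple perceptual fingerprint. The example below uses an average-hash over pixel values, which is not the algorithm in the C2PA specification, just a stdlib-only stand-in; the provenance "cloud" is a plain dictionary, and all names are hypothetical.

```python
def average_hash(pixels):
    """Perceptual fingerprint: one bit per pixel, 1 where the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of differing fingerprint bits."""
    return sum(x != y for x, y in zip(a, b))

# A 4x4 toy grayscale image registered in the "cloud" with its provenance record
registered = [[10, 200, 30, 220], [15, 210, 25, 230],
              [240, 20, 250, 10], [235, 25, 245, 15]]
provenance_db = {average_hash(registered): {"creator": "AP photojournalist"}}

# The same image re-encountered later: metadata stripped, slight recompression noise
seen = [[12, 198, 31, 218], [14, 212, 24, 228],
        [238, 22, 252, 12], [233, 27, 243, 17]]
fp = average_hash(seen)
match = min(provenance_db, key=lambda known: hamming(known, fp))
assert hamming(match, fp) <= 3  # close match: provenance record recovered from pixels
```

Because the fingerprint is computed from pixels rather than metadata, a near-duplicate still matches the registered copy even after the embedded credentials have been stripped, which is exactly the recovery path Collomosse outlines.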
The third approach is digital watermarking. It’s early days for this type of work, and today it’s still fairly easy to remove watermarks. The technology and approach for watermarking will likely evolve to something less vulnerable. Regardless, it can still help slow down misuse.
The bottom line is that Content Credentials combines these three solutions, which is a better approach than using any one of them on its own.
You can read more details about how Content Credentials work here.
Q: I’ve written recently that the market driver behind combating AI-generated deepfakes and other disinformation around the election is less about ethics and more about companies managing their AI risk. What do you think?
Parsons: I would like to think it’s both, and that we all benefit from a driver where companies are managing their AI risk. Content creators have a lot at stake, and our customers certainly do; one of the biggest issues for them in this sense is creator protection.
Q: Given there is significant motivation for companies to adopt content transparency and a few different solution approaches, do you think an ecosystem will emerge to help companies do this? Will content transparency goals spark some technical innovation?
Parsons: I’m not convinced the issue here is tech. The biggest issue is driving toward universal adoption. With that in mind, content transparency has to be easy for any entity, whether creator, publisher, enterprise, or organization, to implement. To that end, we believe an open-source approach is the best way to achieve widespread adoption because it’s more accessible and it’s free.
Analyst Take
Content transparency will continue to gain momentum throughout 2024, not only for the reasons already stated, combating misinformation and protecting creator rights, but perhaps even more importantly to protect creator copyright in the training of AI models. Adobe has already addressed this in its own case, as Adobe models are trained only on data and content the company has rights to. “Fair use” arguments for AI model training will be proved or disproved in courts around the world this year. Regardless, content creators will become increasingly motivated to protect the content they have created.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other Insights from The Futurum Group:
Google Announces Strategies to Combat Misuse of AI In 2024 Elections
Author Information
Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.
Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.