The News: On January 9, Polygon Labs announced that Fox Corporation will publicly release a beta version of Verify, an open-source protocol meant to establish the history and origin of registered media. Verify is built on the Polygon PoS protocol.
Here are the key details:
- Publishers can register content on Verify to prove origination. Individual pieces of content are cryptographically signed onchain (Polygon PoS is a blockchain), allowing consumers to identify content from trusted sources using the Verify tool; a minimal sketch of this signing-and-verification flow appears after this list.
- Fox Corp launched a closed beta of Verify on August 23, coinciding with the first Fox News GOP debate. To date, 89,000 pieces of content, spanning text and images, have been signed to Verify by Fox News, Fox Business, Fox Sports, and Fox TV affiliates.
- The protocol source code is now open source.
- Verify was developed in-house by Fox Technology and is built on the Polygon PoS protocol.
- With this technology, readers can confirm that an article or image attributed to a publisher did in fact originate at the source.
- Verify establishes a way for media companies to work with large language models (LLMs) and other AI platforms. Verified Access Point creates new commercial opportunities for content owners via smart contracts that set programmatic conditions for access to content.
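To make the provenance idea concrete, here is a minimal sketch in TypeScript (ethers.js) of how a publisher-signed content hash might be produced and checked. This is an illustration of the general technique, not Fox's actual Verify implementation: the key, article text, and function names are hypothetical, and the real system additionally records these attestations on Polygon PoS.

```typescript
import { Wallet, keccak256, toUtf8Bytes, verifyMessage } from "ethers";

// Hypothetical publisher key (a well-known test key, NOT Fox's). In practice the
// publisher's address would be publicly known and the hash/signature pair would
// be registered onchain via the Verify protocol.
const publisher = new Wallet(
  "0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d"
);

// 1. Hash the article body, so the content itself never has to go onchain.
const article = "Full text of the article as published...";
const contentHash = keccak256(toUtf8Bytes(article));

// 2. The publisher signs the hash, attesting "this content came from us."
async function signContent(): Promise<string> {
  return publisher.signMessage(contentHash);
}

// 3. Anyone can later recompute the hash and check the signature against the
//    publisher's known address -- the core of a provenance check.
async function verifyContent(text: string, signature: string): Promise<boolean> {
  const hash = keccak256(toUtf8Bytes(text));
  return verifyMessage(hash, signature) === publisher.address;
}

signContent().then(async (sig) => {
  console.log("original verifies:", await verifyContent(article, sig)); // true
  console.log("tampered verifies:", await verifyContent(article + "!", sig)); // false
});
```

The design point is that even a one-character change to the content produces a different hash, so the old signature no longer matches and the verification fails.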
Read the Polygon blog post on Fox Verify and Polygon PoS here.
Fox and Polygon Labs’ Verify Becomes the Latest Deepfake Solution
Analyst Take: As we move into year two of generative AI, some themes have emerged around the technology's downsides. Two of the biggest have been:
- Combating malicious or misleading AI-generated content
- Copyright/intellectual property (IP) rights for both non-AI-generated and AI-generated content
The Fox-Polygon Verify solution is one of the latest and most prominent attempts to address these issues. What will the impact of Verify be? What is next in terms of addressing copyright issues and combating malicious AI-generated content? Here are my thoughts.
Combating Malicious AI-Generated Content
There is a growing movement to keep deepfakes and other malicious content out of circulation, with approaches ranging from digital and cryptographic watermarking and tagged metadata to, now, blockchain technology. This is in addition to content filtering, which leverages both humans and AI but does not necessarily discern AI-generated content, only inappropriate content (see Microsoft, Google). It will probably take all of these efforts and means combined to fight malicious AI-generated content. It will be interesting to see how Fox and others fare with blockchain; The New York Times is pursuing a similar approach through its News Provenance Project.
There may be a few drivers at work here for Fox. First, malicious AI-generated content can be a liability, with the potential for Fox media properties to be sued by consumers or organizations for various reasons. Second, while Verify is open source, Fox might be able to parlay it into a revenue-generating ancillary service it could sell to other media companies.
Licensing Content, Copyright/IP Rights
Battling malicious AI-generated content is a noble cause, but at the heart of this project is an opportunity to license media content to AI vendors for training models or other purposes. In a TechCrunch article on Verify, Melody Hildebrandt, Fox's CTO, said: "Verify is also a technical on-ramp for AI platforms to license publisher content with encoded controls via smart contracts for LLM training or real-time use cases. We're in discussion with several media companies and expect to be able to share more soon on that front." In this approach, Fox not only protects its own copyrighted content but can also sell that capability to other media companies.
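Hildebrandt's description suggests content access gated by smart contracts. As a rough sketch of what that could look like from the AI platform's side, here is TypeScript (ethers.js) against a hypothetical licensing contract on Polygon PoS. The ABI, contract address, and function names are invented for illustration and are not Verify's actual interface.

```typescript
import { Contract, JsonRpcProvider, Wallet } from "ethers";

// Hypothetical human-readable ABI for a content-licensing contract. The real
// Verify contracts and their interfaces are not spelled out in the announcement.
const licenseAbi = [
  "function licenseFee(bytes32 contentHash) view returns (uint256)",
  "function purchaseAccess(bytes32 contentHash) payable",
];

const provider = new JsonRpcProvider("https://polygon-rpc.com"); // public Polygon PoS RPC
const aiPlatform = new Wallet(process.env.PRIVATE_KEY!, provider);

// Placeholder contract address for illustration only.
const license = new Contract(
  "0x0000000000000000000000000000000000000001",
  licenseAbi,
  aiPlatform
);

// An AI vendor reads the programmatic terms and pays for training access.
// contentHash is the 32-byte hash identifying a registered piece of content.
async function licenseForTraining(contentHash: string): Promise<void> {
  const fee: bigint = await license.licenseFee(contentHash); // terms set by the publisher
  const tx = await license.purchaseAccess(contentHash, { value: fee });
  await tx.wait(); // once mined, the vendor holds onchain proof of a valid license
  console.log(`licensed ${contentHash} for ${fee} wei`);
}
```

The appeal of this pattern, if Verify works along these lines, is that payment and permission live in the same onchain transaction, giving both the publisher and the AI vendor an auditable record of what was licensed and on what terms.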
Conclusion
Watermarking and traceable content serve the dual goals of combating malicious AI-generated content and keeping copyrighted content from being used by AI vendors, without consent, to train models or for other purposes. While fair-use arguments about content rights and AI will play out in the courts, The Futurum Group expects content providers to embrace technological watermarking and tracing solutions.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other Insights from The Futurum Group:
Google Announces Strategies to Combat Misuse of AI In 2024 Elections
Adults in the Generative AI Rumpus Room: AI Standards Hub, Google, Prompt Engineer Collective
Microsoft’s AI Safety Policies: Best Practice
Author Information
Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.
Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology, identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business and holds a Bachelor of Science from the University of Florida.