Fox and Polygon Labs’ Verify Becomes the Latest Deepfake Solution

The News: On January 9, Polygon announced that Fox Corporation will publicly release a beta version of Verify, an open source protocol meant to establish the history and origin of registered media. Verify is built on Polygon’s PoS protocol.

Here are the key details:

  • Publishers can register content on Verify to prove origination. Individual pieces of content are cryptographically signed onchain (Polygon PoS is built on blockchain technology), allowing consumers to identify content from trusted sources using the Verify tool.
  • Fox Corp launched a closed beta of Verify on August 23, coinciding with the first Fox News GOP debate. To date, 89,000 pieces of content, spanning text and images, have been registered with Verify from Fox News, Fox Business, Fox Sports, and Fox TV affiliates.
  • The protocol source code is now open source.
  • Verify was developed in-house by Fox Technology and is built on the Polygon PoS protocol.
  • With this technology, readers can confirm that an article or image attributed to a publisher in fact originated at the source.
  • Verify establishes a way for media companies to work with large language models (LLMs) and other AI platforms. Verified Access Point creates new commercial opportunities for content owners via smart contracts that set programmatic conditions for access to content.

Read the Polygon blog post on Fox's Verify and Polygon PoS here.

Analyst Take: As we move into year two of generative AI, some themes have emerged around the technology's downsides. The two biggest have been:

  • Combating malicious or misleading AI-generated content
  • Copyright/intellectual property (IP) rights for both non-AI-generated and AI-generated content

The Fox-Polygon Verify solution is one of the latest and most prominent attempts to address these issues. What will the impact of Verify be? What is next in terms of addressing copyright issues and combating malicious AI-generated content? Here are my thoughts.

Combating Malicious AI-Generated Content

There is a growing movement to keep deepfakes and other malicious content out of circulation, with approaches ranging from digital/crypto watermarking and tagged metadata to, now, blockchain technology. This complements content filtering, which leverages both humans and AI (see Microsoft, Google) but screens for inappropriate content rather than discerning AI-generated content specifically. It will probably take all of these efforts combined to fight malicious AI-generated content. It will be interesting to see how Fox and others fare with blockchain; The New York Times is pursuing a similar approach through its News Provenance Project.

There may be a few drivers at work here for Fox. First, malicious AI-generated content can be a liability, with the potential for Fox media properties to be sued by consumers or organizations for various reasons. Second, while Verify is open source, Fox might be able to parlay it into a revenue-generating ancillary service it could sell to other media companies.

Licensing Content, Copyright/IP Rights

Battling malicious AI-generated content is a noble cause, but at the heart of this project is an opportunity to license media content to AI vendors for training models or other purposes. In a TechCrunch article on Verify, Melody Hildebrandt, Fox's CTO, said: “Verify is also a technical on-ramp for AI platforms to license publisher content with encoded controls via smart contracts for LLM training or real-time use cases. We’re in discussion with several media companies and expect to be able to share more soon on that front.” In this approach, Fox not only protects its copyrighted content but can also sell that capability to other media companies.
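The "encoded controls via smart contracts" Hildebrandt describes amount to machine-checkable license terms attached to registered content. A minimal sketch of that idea follows; the field names (allowed uses, expiry, per-item fee) and the access check are illustrative assumptions, not Verify's actual schema or contract logic.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class License:
    """Hypothetical license terms a content owner attaches to registered content."""
    allowed_uses: set[str]     # e.g. {"llm-training", "real-time"}
    expires: date              # last date access is permitted
    fee_per_item_usd: float    # commercial term an AI platform would settle

def grant_access(lic: License, use: str, today: date) -> bool:
    """Evaluate license conditions programmatically, as a smart contract would onchain."""
    return use in lic.allowed_uses and today <= lic.expires

lic = License(allowed_uses={"llm-training"}, expires=date(2025, 1, 1), fee_per_item_usd=0.02)
print(grant_access(lic, "llm-training", date(2024, 6, 1)))  # permitted use before expiry
print(grant_access(lic, "real-time", date(2024, 6, 1)))     # use not covered by the license
```

The point of putting such checks in a smart contract rather than a bilateral agreement is that the conditions are enforced automatically at access time, which is what makes programmatic licensing to many AI platforms practical.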

Conclusion

Watermarking and building traceable content serve the dual goals of combating malicious AI-generated content and protecting copyrighted content from being automatically used by AI vendors to train models or otherwise exploit the data without consent. While fair use arguments about content rights and AI use will play out in courts, The Futurum Group expects content providers will embrace technological watermarking and tracing solutions.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Google Announces Strategies to Combat Misuse of AI In 2024 Elections

Adults in the Generative AI Rumpus Room: AI Standards Hub, Google, Prompt Engineer Collective

Microsoft’s AI Safety Policies: Best Practice

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.

