Google, Microsoft, OpenAI, and Anthropic Form AI Industry Group

The News: On July 26, Google, Microsoft, OpenAI, and Anthropic announced they had formed a new industry group, the Frontier Model Forum, with the intent of building and promoting best practices for the development of generative AI models. The group calls these models “frontier models,” though the term is not yet widely accepted; such models are also referred to as foundation models or generative AI models.

The Forum’s objectives are to 1) advance AI safety research; 2) identify and promote best practices; and 3) collaborate with policymakers, academics, and other stakeholders to share knowledge about frontier model trust and safety risks.

The Forum will accept organizations as members if they meet certain criteria, the most important being that the organization develops and deploys frontier models (as defined by the Forum), demonstrates a strong commitment to frontier model safety, and is willing to contribute to joint Forum initiatives.

Essentially, the Forum will work on developing standards for frontier models with the hope of informing regulatory bodies and influencing regulation.

Read the full announcement on the Google blog.

Analyst Take: AI advocates have struggled for several years to coalesce into a definitive AI industry group. There is a good reason for this – AI is a broadly diffused technology that applies across a massive range of industry sectors and interests. It has taken a disruption moment in the form of generative AI to unify advocacy, and the Frontier Model Forum has the potential to solidify it. Here are the key takeaways about the potential impact of the Frontier Model Forum:

Open-ended, vague specifics on frontier models make sense. Governments should take note.

The Frontier Model Forum did not make any specific promises or suggestions about what it defines as responsible development of frontier models, or what those risks, limitations, or impacts are to the public. There is a good reason for that – it is very hard to build standards or laws on speculative outcomes. In these very early days of generative AI, there are few solidly commercially viable use cases (code development is a potential exception), and lawmakers have been forced to address some of the impacts of generative AI without the benefit of commercial outputs. Regulations have to be broad enough to encompass unknown developments. With this in mind, there is a growing sentiment to incorporate flexibility into AI regulations and standards. In a recent position paper on the EU’s draft AI Act, the American Chamber of Commerce to the European Union called for flexibility (quoting from the executive summary): “ensure that requirements on providers of general purpose AI and foundation models are targeted, realistic and flexible” and “ensure a flexible and clear approach to standardisation.”

Potential rival industry groups for AI?

The Frontier Model Forum notes it “will also seek to build on the valuable work of existing industry, civil society, and research efforts across each of its workstreams. Initiatives such as the Partnership on AI and MLCommons continue to make important contributions across the AI community, and the Forum will explore ways to collaborate with and support these and other valuable multi-stakeholder efforts.” Google and Microsoft are members of the Partnership, but what if other advocacy groups emerge? When standards are being determined, some organizations find themselves at odds with others. The telecommunications industry is a good example of where rival advocacy groups have battled (GSM vs. CDMA is the most prominent case, but even today multiple groups advocate for various standards – for instance, the O-RAN Alliance and the Telecom Infra Project). Given the broad potential range of AI applications, there is a good chance other advocacy groups could form with different views. If they do, it could slow down standards formation.

Standards development is a very slow process: it requires negotiation and agreement across a broad range of constituents. We should not expect the development of AI standards to be any different.

Guardrails for legacy responsible AI apply to frontier models, and current laws can already safeguard

For the past 3-4 years, forward-thinking AI advocates have been building best practices for responsible AI. In managing AI risk, enterprises must think about four key areas: privacy, bias/accuracy, security, and transparency. All of these principles apply, perhaps doubly so, to generative AI applications, and the Frontier Model Forum will likely adhere to those guardrails as it builds standards and advocates for regulation. Many current laws already provide protections against AI risks, from privacy laws such as GDPR to copyright and IP infringement statutes. It will be interesting to see where the Forum and other AI advocacy groups come down on more specific AI regulation in that regard.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

UK AI Regulations Criticized: A Cautionary Tale for AI Safety

The EU’s AI Act: Q3 2023 Update

Tech Giants and White House Join Forces on Safe AI Usage

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business. He holds a Bachelor of Science from the University of Florida.
