Google, Microsoft, OpenAI, and Anthropic Form AI Industry Group

The News: On July 26, Google, Microsoft, OpenAI, and Anthropic announced the formation of a new industry group, the Frontier Model Forum, with the intent of building and promoting best practices for the development of generative AI models. The group calls these models “frontier models,” though the term is not yet widely accepted; such models are also referred to as foundation models or generative AI models.

The Forum’s objectives are to 1) advance AI safety research; 2) identify and promote best practices; and 3) collaborate with policymakers, academics, and other stakeholders to share knowledge about frontier model trust and safety risks.

The Forum will accept organizations as members if they meet certain criteria, the most important being that the organization develops and deploys frontier models (as defined by the Forum), demonstrates a strong commitment to frontier model safety, and is willing to contribute to joint Forum initiatives.

Essentially, the Forum will work on developing standards for frontier models with the hope of informing regulatory bodies and influencing regulation.

Read the full announcement on the Google blog.

Analyst Take: AI advocates have struggled for several years to coalesce into a definitive AI industry group. There is a good reason for this: AI is a broadly diffused technology that applies across a massive range of industry sectors and interests. It has taken a disruptive moment, in the form of generative AI, to unify advocacy. The Frontier Model Forum has the potential to solidify AI advocacy. Here are the key takeaways about the potential impact of the Frontier Model Forum:

Open-ended, vague specifics on frontier models make sense. Governments should take heed.

The Frontier Model Forum did not make any specific promises or suggestions about what it defines as responsible development of frontier models, or what the risks, limitations, or impacts to the public are. There is a good reason for that: it is very hard to build standards or laws on speculative outcomes. In these very early days of generative AI, there are few solidly commercially viable use cases (code development is a potential exception). Lawmakers have been forced to address some of the impacts of generative AI without the benefit of commercial outputs, so regulations have to be broad enough to encompass unknown developments. With this in mind, there is a growing sentiment to incorporate flexibility into AI regulations and standards. In a recent position paper on the EU’s draft AI Act, the American Chamber of Commerce to the European Union suggested flexibility (noted from the executive summary): “ensure that requirements on providers of general purpose AI and foundation models are targeted, realistic and flexible” and “ensure a flexible and clear approach to standardisation.”

Potential rival industry groups for AI?

The Frontier Model Forum notes it “will also seek to build on the valuable work of existing industry, civil society, and research efforts across each of its workstreams. Initiatives such as the Partnership on AI and MLCommons continue to make important contributions across the AI community, and the Forum will explore ways to collaborate with and support these and other valuable multi-stakeholder efforts.” Google and Microsoft are members of the Partnership, but what if other advocacy groups emerge? When standards are being determined, some organizations find themselves at odds with others. The telecommunications industry is a good example of rival advocacy groups battling (GSM vs. CDMA being the most prominent, but even today multiple groups advocate for various standards – for instance, the O-RAN Alliance and the Telecom Infra Project). Given the broad potential range of AI applications, there is a good chance other advocacy groups with different views could form. If they do, standards formation could slow down.

Standards development is a very slow process, requiring negotiation and broad agreement across a wide range of constituents. We should not expect the development of AI standards to be any different.

Guardrails for legacy responsible AI apply to frontier models, and current laws can provide safeguards

For the past three to four years, forward-thinking AI advocates have been building best practices for responsible AI. In managing AI risk, enterprises must think about four key areas: privacy, bias/accuracy, security, and transparency. All of these principles apply, perhaps doubly so, to generative AI applications. The Frontier Model Forum will likely adhere to those guardrails as it builds standards and advocates for regulation. Many current laws already provide protections against AI risks, from privacy laws such as GDPR to copyright and IP infringement law. It will be interesting to see where the Forum or other AI advocacy groups land on more specific AI regulation in that regard.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

UK AI Regulations Criticized: A Cautionary Tale for AI Safety

The EU’s AI Act: Q3 2023 Update

Tech Giants and White House Join Forces on Safe AI Usage

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.
