Google, Microsoft, OpenAI, and Anthropic Form AI Industry Group

The News: On July 26, Google, Microsoft, OpenAI, and Anthropic announced they have formed a new industry group, the Frontier Model Forum, with the intent of building and promoting best practices for the development of generative AI models. The group calls these models “frontier models,” though the term is not yet widely accepted; other names for these models include foundation models and generative AI models.

The Forum’s objectives are to 1) advance AI safety research; 2) identify and promote best practices; and 3) collaborate with policymakers, academics, and other stakeholders to share knowledge about frontier model trust and safety risks.

The Forum will accept organizations as members if they meet certain criteria, the most important being that they develop and deploy frontier models (as defined by the Forum), demonstrate a strong commitment to frontier model safety, and are willing to contribute to joint Forum initiatives.

Essentially, the Forum will work on developing standards for frontier models with the hope of informing regulatory bodies and influencing regulation.

Read the full announcement on the Google blog.

Analyst Take: AI advocates have struggled for several years to coalesce around a definitive AI industry group. There is a good reason for this: AI is a broadly diffused technology, and it applies across a massive range of industry sectors and interests. It took a disruption moment, in the form of generative AI, to unify advocacy. The Frontier Model Forum has the potential to solidify AI advocacy. Here are the key takeaways about the potential impact of the Frontier Model Forum:

Open-ended specifics on frontier models make sense. Governments should take note.

The Frontier Model Forum did not make any specific promises or suggestions about what it defines as responsible development of frontier models, or what the risks, limitations, or impacts to the public are. There is a good reason for that: it is very hard to build standards or laws on speculative outcomes. In these very early days of generative AI, there are few commercially viable use cases (code development is a potential exception). Lawmakers have been forced to address some of the impacts of generative AI without the benefit of commercial outputs, so regulations have to be broad enough to encompass unknown developments. With this in mind, there is a growing sentiment to incorporate flexibility into AI regulations and standards. In a recent position paper on the EU’s draft AI Act, the American Chamber of Commerce to the European Union suggested flexibility (from the executive summary): “ensure that requirements on providers of general purpose AI and foundation models are targeted, realistic and flexible” and “ensure a flexible and clear approach to standardisation.”

Potential rival industry groups for AI?

The Frontier Model Forum notes it “will also seek to build on the valuable work of existing industry, civil society, and research efforts across each of its workstreams. Initiatives such as the Partnership on AI and MLCommons continue to make important contributions across the AI community, and the Forum will explore ways to collaborate with and support these and other valuable multi-stakeholder efforts.” Google and Microsoft are members of the Partnership, but what if other advocacy groups emerge? When standards are being determined, some organizations find themselves at odds with others. The telecommunications industry is a good example of rival advocacy groups battling (GSM vs. CDMA is the most prominent case, but even today multiple groups advocate for competing standards — for instance, the O-RAN Alliance and the Telecom Infra Project). Given the broad potential range of AI applications, there is a good chance other advocacy groups could form with different views. If they do, it could slow the formation of standards.

Standards development is a very slow process; it requires negotiation and agreement across a broad range of constituents. We should not expect the development of AI standards to be any different.

Guardrails for legacy responsible AI apply to frontier models, and current laws can provide safeguards

For the past 3-4 years, forward-thinking AI advocates have been building best practices for responsible AI. In managing AI risk, enterprises must think about four key areas: privacy, bias/accuracy, security, and transparency. All of these principles apply, perhaps doubly so, to generative AI applications. The Frontier Model Forum will likely adhere to those guardrails as it builds standards and advocates for regulation. Many current laws already provide protections against AI risks, from privacy laws like GDPR to copyright and IP protections. It will be interesting to see where the Forum and other AI advocacy groups land on more specific AI regulation in that regard.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

UK AI Regulations Criticized: A Cautionary Tale for AI Safety

The EU’s AI Act: Q3 2023 Update

Tech Giants and White House Join Forces on Safe AI Usage

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology businesses. He holds a Bachelor of Science from the University of Florida.
