Google, Microsoft, OpenAI, and Anthropic Form AI Industry Group

The News: On July 26, Google, Microsoft, OpenAI, and Anthropic announced they have formed a new industry group, the Frontier Model Forum, with the intent of building and promoting best practices for the development of generative AI models. The group calls these models “frontier models,” though this is not yet a widely accepted term. Other names for these models include foundation models and generative AI models.

The Forum’s objectives are to 1) advance AI safety research; 2) identify and promote best practices; and 3) collaborate with policymakers, academics, and other stakeholders to share knowledge about frontier model trust and safety risks.

The Forum will accept organizations as members if they meet certain criteria, the most important being that the organization develops and deploys frontier models (as defined by the Forum), demonstrates a strong commitment to frontier model safety, and is willing to contribute to joint Forum initiatives.

Essentially, the Forum will work on developing standards for frontier models with the hope of informing regulatory bodies and influencing regulation.

Read the full announcement on the Google blog.

Analyst Take: AI advocates have struggled for several years to coalesce and unify to form a definitive AI industry group. There is a good reason for this – AI is a broadly diffused technology, and it applies across a massive range of industry sectors and interests. It has taken a disruption moment in the form of generative AI to unify advocacy. The Frontier Model Forum has the potential to solidify AI advocacy. Here are the key takeaways about the potential impact of the Frontier Model Forum:

Open-ended, vague specifics on frontier models make sense. Governments should heed.

The Frontier Model Forum did not make any specific promises or suggestions about what it defines as responsible development of frontier models, or what those risks, limitations, or impacts are to the public. There is a good reason for that – it is very hard to build standards or laws on speculative outcomes. In these very early days of generative AI, there are few solidly commercially viable use cases (code development is a potential exception). Lawmakers have been forced to address some of the impacts of generative AI without the benefit of commercial outputs, so regulations have to be broad enough to encompass unknown developments. With this in mind, there is a growing sentiment to incorporate flexibility into AI regulations and standards. In a recent position paper on the EU’s draft AI Act, the American Chamber of Commerce to the European Union suggested flexibility (noted from the executive summary): “ensure that requirements on providers of general purpose AI and foundation models are targeted, realistic and flexible” and “ensure a flexible and clear approach to standardisation.”

Potential rival industry groups for AI?

The Frontier Model Forum notes it “will also seek to build on the valuable work of existing industry, civil society, and research efforts across each of its workstreams. Initiatives such as the Partnership on AI and MLCommons continue to make important contributions across the AI community, and the Forum will explore ways to collaborate with and support these and other valuable multi-stakeholder efforts.” Google and Microsoft are members of the Partnership, but what if other advocacy groups emerge? When standards are being determined, some organizations find themselves at odds with others. The telecommunications industry is a good example of where rival advocacy groups have battled (GSM vs. CDMA being the most prominent, but even today multiple groups advocate for various standards – for instance, the O-RAN Alliance and the Telecom Infra Project). Given the broad potential range of AI applications, there is a good chance other advocacy groups could form with different views. If they do, it could slow the formation of standards.

Standards development is a very slow process, requiring negotiation and agreement across a broad range of constituents. We should not expect the development of AI standards to be any different.

Guardrails for legacy responsible AI apply to frontier models and current laws can safeguard

For the past 3-4 years, forward-thinking AI advocates have been building best practices for responsible AI. In managing AI risk, enterprises must think about four key areas: privacy, bias/accuracy, security, and transparency. All of these principles apply, perhaps doubly so, to generative AI applications. The Frontier Model Forum will likely adhere to those guardrails as it builds standards and advocates for regulation. Many current laws already provide protections against AI risks, from privacy protection laws like GDPR to copyright and IP infringement. It will be interesting to see where the Forum or other AI advocacy groups land on more specific AI regulations in that regard.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

UK AI Regulations Criticized: A Cautionary Tale for AI Safety

The EU’s AI Act: Q3 2023 Update

Tech Giants and White House Join Forces on Safe AI Usage

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.
