Menu

Microsoft’s AI Safety Policies: Best Practice

Microsoft’s AI Safety Policies: Best Practice

The News: On October 26, in response to a request from the UK government in advance of the UK AI Safety Summit for information about nine areas of AI practice and development, Microsoft provided an update to its AI Safety Policies. Key points include:

  • Microsoft collaborates closely with OpenAI on responsible capability scaling: “When it comes to frontier model deployment, Microsoft and OpenAI have together defined capability thresholds that act as a trigger to review models in advance of their first release or downstream deployment. The scope of a review, through our joint Microsoft-OpenAI Deployment Safety Board (DSB), includes model capability discovery.” Microsoft said the two companies “prepare detailed artifacts for the joint DSB review. Artefacts record the process by which our organizations have mapped, measured, and managed risks, including through the use of adversarial testing and third-party evaluations as appropriate.” Microsoft also checks the models independently of OpenAI as it applies to Microsoft’s use of the models. “As Microsoft, we also independently manage a subsequent safety review process. We evaluate model capability as deployed in a product – where additional safety mitigations can be implemented and measured for impact – to check for effective and appropriate mitigations prior to release.”
  • Microsoft develops new policies for model evaluations and red teaming: The company developed further internal practice guidance for AI Red Team, an expert group that is independent of Microsoft product-building teams. AI Red Team members are responsible for mapping, measuring, and managing the potential for harm and misuse of AI systems. Work includes simulating real-world attacks and exercising techniques that persistent threat actors might use, and includes practices to map risks outside of traditional security, including those associated with benign usage scenarios and responsible AI, such as “prompt injection attacks (where content submitted to the [large language model (LLM)] by the user or by a third party results in unintended actions), content harms (where malicious or benign usage of the system results in harmful or inappropriate AI-generated content), and privacy harms (where an LLM leaks correct or incorrect personal information about an individual), among others.” These developments led Microsoft to update internal practice guidance for its Security Development Lifecycle Threat Modeling requirement. The company is also building red teams made up of external, independent experts.

Read the full update to Microsoft’s AI Safety Policies here

Microsoft’s AI Safety Policies: Best Practice

Analyst Take: Microsoft’s update to its AI Safety Policies reveals how detailed and thorough Microsoft’s approach is to responsible AI. Given the company’s potential risk in rolling out AI within Microsoft products and services with an outside and relatively new business in OpenAI, the policies provide assurances in an unknown area and establish Microsoft as a creator of best practices for the responsible use of AI. Here are some of the impacts Microsoft’s work in AI safety will have.

Influencing the Right Standards

It is important to note that Microsoft has been heavily involved in the development of AI standards for several years. It is a common practice – most technology standards bodies are driven by the vendor community. The company has a significant stake in the process, but it also has the most significant expertise in the given technology.

In the case of AI standards, multiple bodies have been developing standards. This situation presents a challenge for vendors and enterprise users alike – which standards bodies to invest in? Which will have the greatest impact?

Microsoft invested time and energy into several AI standards bodies, including the one I think will be the most influential– the US National Institute of Standards and Technology (NIST). Their influence is not only important for policy development in the US but in the European Union (EU) and the UK as well. Note from the policy update: “The UK Government has requested information about nine areas of practice and investment, many of which relate to the voluntary commitments we published in July. We have indicated these points of connection at the beginning of each section, distinguishing between the White House Voluntary Commitments and the additional independent commitments we made as Microsoft … We also recognize that each of the nine areas of practice accrue to mapping, measuring, managing, and governing AI model development and deployment risk, the structure and terminology offered by the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF). To help provide context on how we are realizing our commitment to implement the NIST AI RMF, the terminology of ‘map, measure, manage, and govern’ is used throughout this response to the UK Government’s AI Safety Policies Request.”

The bottom line is that standards heavily influence policy. Microsoft is fully engaged in developing AI NIST standards, and NIST is likely to heavily influence AI policy development for the EU, UK, and US, and perhaps others as well.

Safeguards for Using Partner OpenAI

Many observers would say it is/was risky for Microsoft to partner with OpenAI, a startup that was built to further AI research with no track record with enterprise customers or enterprise-grade solutions. It is evident in Microsoft’s AI Safety Policies update that the company has built carefully thought-out guardrails for leveraging OpenAI’s admittedly ground-breaking intellectual property (IP). Make no mistake, though – Microsoft has too much to risk to misstep with the great unknowns of AI foundation models – the company is one of the most advanced and experienced AI players in the world, and because of its significant experience and expertise, Microsoft built a comprehensive set of AI safety policies to protect its customers and brand.

Building Best Practices for Curbing AI Risk

Microsoft is leveraging nearly 10 years of experience in working with AI and software security in its intricate approach to model evaluations and red teaming. The section called “Model Evaluations and Red Teaming” is really worth reading and understanding in detail for any enterprise thinking about its own journey in building AI safety policies.

Conclusions

Microsoft’s AI Safety Policies are assurance to the entire AI ecosystem about how AI is being leveraged for Microsoft products. They represent policies that have been developed by a company with some of the deepest AI experience and expertise in the world. The policies also look like an AI safety roadmap for enterprises looking to build their own AI systems. Finally, Microsoft’s AI safety policies and work are influencing the standards organization, NIST, that will inform AI safety policies across the globe.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Executive Order on AI: It Is Not Law

Two Trends in AI Regulations and a Look at Microsoft Copilot – The AI Moment, Episode 2

Mr. Benioff Goes to Washington

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.

Related Insights
NVIDIA Bolsters AI/HPC Ecosystem with Nemotron 3 Models and SchedMD Buy
December 16, 2025

NVIDIA Bolsters AI/HPC Ecosystem with Nemotron 3 Models and SchedMD Buy

Nick Patience, AI Platforms Practice Lead at Futurum, shares his insights on NVIDIA's release of its Nemotron 3 family of open-source models and the acquisition of SchedMD, the developer of...
Will a Digital Adoption Platform Become a Must-Have App in 2026?
December 15, 2025

Will a DAP Become the Must-Have Software App in 2026?

Keith Kirkpatrick, Research Director with Futurum, covers WalkMe’s 2025 Analyst Day, and discusses the company’s key pillars for driving success with enterprise software in an AI- and agentic-dominated world heading...
Broadcom Q4 FY 2025 Earnings AI And Software Drive Beat
December 15, 2025

Broadcom Q4 FY 2025 Earnings: AI And Software Drive Beat

Futurum Research analyzes Broadcom’s Q4 FY 2025 results, highlighting accelerating AI semiconductor momentum, Ethernet AI switching backlog, and VMware Cloud Foundation gains, alongside system-level deliveries....
Oracle Q2 FY 2026 Cloud Grows; Capex Rises for AI Buildout
December 12, 2025

Oracle Q2 FY 2026: Cloud Grows; Capex Rises for AI Buildout

Futurum Research analyzes Oracle’s Q2 FY 2026 earnings, highlighting cloud infrastructure momentum, record RPO, rising AI-focused capex, and multicloud database traction driving workload growth across OCI and partner clouds....
Adobe Q4 FY 2025 Record Revenue, AI Adoption, ARR Targets
December 12, 2025

Adobe Q4 FY 2025: Record Revenue, AI Adoption, ARR Targets

Futurum Research analyzes Adobe’s Q4 FY 2025 results, emphasizing AI distribution via LLMs, enterprise adoption of Firefly Foundry, and a credit-based monetization model aligned to FY 2026 ARR growth and...
Five Key Reasons Why Confluent Is Strategic To IBM
December 9, 2025

Five Key Reasons Why Confluent Is Strategic To IBM

Brad Shimmin and Mitch Ashley at Futurum, share their insights on IBM’s $11B acquisition of Confluent. This bold move signals a strategic pivot, betting that real-time "data in motion" is...

Book a Demo

Newsletter Sign-up Form

Get important insights straight to your inbox, receive first looks at eBooks, exclusive event invitations, custom content, and more. We promise not to spam you or sell your name to anyone. You can always unsubscribe at any time.

All fields are required






Thank you, we received your request, a member of our team will be in contact with you.