Search
Close this search box.

Microsoft’s AI Safety Policies: Best Practice

Microsoft’s AI Safety Policies: Best Practice

The News: On October 26, in response to a request from the UK government in advance of the UK AI Safety Summit for information about nine areas of AI practice and development, Microsoft provided an update to its AI Safety Policies. Key points include:

  • Microsoft collaborates closely with OpenAI on responsible capability scaling: “When it comes to frontier model deployment, Microsoft and OpenAI have together defined capability thresholds that act as a trigger to review models in advance of their first release or downstream deployment. The scope of a review, through our joint Microsoft-OpenAI Deployment Safety Board (DSB), includes model capability discovery.” Microsoft said the two companies “prepare detailed artifacts for the joint DSB review. Artefacts record the process by which our organizations have mapped, measured, and managed risks, including through the use of adversarial testing and third-party evaluations as appropriate.” Microsoft also checks the models independently of OpenAI as it applies to Microsoft’s use of the models. “As Microsoft, we also independently manage a subsequent safety review process. We evaluate model capability as deployed in a product – where additional safety mitigations can be implemented and measured for impact – to check for effective and appropriate mitigations prior to release.”
  • Microsoft develops new policies for model evaluations and red teaming: The company developed further internal practice guidance for AI Red Team, an expert group that is independent of Microsoft product-building teams. AI Red Team members are responsible for mapping, measuring, and managing the potential for harm and misuse of AI systems. Work includes simulating real-world attacks and exercising techniques that persistent threat actors might use, and includes practices to map risks outside of traditional security, including those associated with benign usage scenarios and responsible AI, such as “prompt injection attacks (where content submitted to the [large language model (LLM)] by the user or by a third party results in unintended actions), content harms (where malicious or benign usage of the system results in harmful or inappropriate AI-generated content), and privacy harms (where an LLM leaks correct or incorrect personal information about an individual), among others.” These developments led Microsoft to update internal practice guidance for its Security Development Lifecycle Threat Modeling requirement. The company is also building red teams made up of external, independent experts.

Read the full update to Microsoft’s AI Safety Policies here

Microsoft’s AI Safety Policies: Best Practice

Analyst Take: Microsoft’s update to its AI Safety Policies reveals how detailed and thorough Microsoft’s approach is to responsible AI. Given the company’s potential risk in rolling out AI within Microsoft products and services with an outside and relatively new business in OpenAI, the policies provide assurances in an unknown area and establish Microsoft as a creator of best practices for the responsible use of AI. Here are some of the impacts Microsoft’s work in AI safety will have.

Influencing the Right Standards

It is important to note that Microsoft has been heavily involved in the development of AI standards for several years. It is a common practice – most technology standards bodies are driven by the vendor community. The company has a significant stake in the process, but it also has the most significant expertise in the given technology.

In the case of AI standards, multiple bodies have been developing standards. This situation presents a challenge for vendors and enterprise users alike – which standards bodies to invest in? Which will have the greatest impact?

Microsoft invested time and energy into several AI standards bodies, including the one I think will be the most influential– the US National Institute of Standards and Technology (NIST). Their influence is not only important for policy development in the US but in the European Union (EU) and the UK as well. Note from the policy update: “The UK Government has requested information about nine areas of practice and investment, many of which relate to the voluntary commitments we published in July. We have indicated these points of connection at the beginning of each section, distinguishing between the White House Voluntary Commitments and the additional independent commitments we made as Microsoft … We also recognize that each of the nine areas of practice accrue to mapping, measuring, managing, and governing AI model development and deployment risk, the structure and terminology offered by the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF). To help provide context on how we are realizing our commitment to implement the NIST AI RMF, the terminology of ‘map, measure, manage, and govern’ is used throughout this response to the UK Government’s AI Safety Policies Request.”

The bottom line is that standards heavily influence policy. Microsoft is fully engaged in developing AI NIST standards, and NIST is likely to heavily influence AI policy development for the EU, UK, and US, and perhaps others as well.

Safeguards for Using Partner OpenAI

Many observers would say it is/was risky for Microsoft to partner with OpenAI, a startup that was built to further AI research with no track record with enterprise customers or enterprise-grade solutions. It is evident in Microsoft’s AI Safety Policies update that the company has built carefully thought-out guardrails for leveraging OpenAI’s admittedly ground-breaking intellectual property (IP). Make no mistake, though – Microsoft has too much to risk to misstep with the great unknowns of AI foundation models – the company is one of the most advanced and experienced AI players in the world, and because of its significant experience and expertise, Microsoft built a comprehensive set of AI safety policies to protect its customers and brand.

Building Best Practices for Curbing AI Risk

Microsoft is leveraging nearly 10 years of experience in working with AI and software security in its intricate approach to model evaluations and red teaming. The section called “Model Evaluations and Red Teaming” is really worth reading and understanding in detail for any enterprise thinking about its own journey in building AI safety policies.

Conclusions

Microsoft’s AI Safety Policies are assurance to the entire AI ecosystem about how AI is being leveraged for Microsoft products. They represent policies that have been developed by a company with some of the deepest AI experience and expertise in the world. The policies also look like an AI safety roadmap for enterprises looking to build their own AI systems. Finally, Microsoft’s AI safety policies and work are influencing the standards organization, NIST, that will inform AI safety policies across the globe.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Executive Order on AI: It Is Not Law

Two Trends in AI Regulations and a Look at Microsoft Copilot – The AI Moment, Episode 2

Mr. Benioff Goes to Washington

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.

SHARE:

Latest Insights:

Urmila Kukreja, Director at Smartsheet, shares her insights on leveraging General AI to redefine collaborative work management, transforming how businesses operate efficiently.
Praerit Garg, President of Product and Innovation at Smartsheet, joins David Nicholson to share his insights on driving innovation at Smartsheet and how they prioritize customer feedback in shaping product development.
Miya McClain, VP at Smartsheet, shares her insights on the enhanced Smartsheet user experience, highlighting the role of GenAI and exciting new features that promise to keep Smartsheet at the forefront of CWM technology.
Todd Lewellen and Megan Amdah offer CIOs guidance on AI PCs, including the advantages, preparation needs, and security considerations for organizations looking to innovate.