Microsoft’s AI Approach Dissected: What It Plans for AI

The News: Microsoft has been continuously developing and refining its AI research, products, strategy, and plans, a process that has now led the company to share its overall approach to AI with the world. By laying out that approach in detail, Microsoft says it hopes to demonstrate its commitment to encouraging the responsible use of AI for humanity. Read the full press release about Microsoft’s AI plans on the company’s website.
Analyst Take: Microsoft’s AI strategy has been growing and developing over the last decade as the company looks to balance the benefits of AI with its inherent challenges and social responsibilities. I believe this is a commendable and wise effort by Microsoft, and a model for all technology companies working to harness the power of AI for profit while also striving to inspire and create new innovations.

With its February 17 press release that lays these plans out in the open, Microsoft is taking a big step to share its AI approach with the world. I appreciate the company’s effort.

So, what is Microsoft’s approach to AI?

What impresses me most is that Microsoft acknowledges that the use of AI technologies comes with responsibility, and that it is taking this seriously by practicing responsible AI by design. Guided by a core set of principles, including fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability, responsible AI by design is being applied across the company to set up guardrails that ensure these goals are met and exceeded.

In practice, that means these concepts must be considered from the earliest stages of design for any product that includes AI, and developers at every point along the development path must be encouraged to commit to these standards. It will always be a tall task, but I believe it is doable and that Microsoft has the tools and the commitment to make it happen.

On Governmental Regulation of AI

Microsoft said it also believes that proactive, self-regulatory efforts by itself and other responsible companies can establish strong controls for AI. At the same time, the company acknowledged that nations around the world may choose to create new AI-focused laws to guide continuing AI development, since not every company or organization will adhere to equitable and honest self-regulation.

I respect Microsoft’s views on this, but ultimately I believe that only through the formal enactment of laws by nations and organizations like NATO and the European Union will the world truly secure the best deterrents to AI abuse by lax companies. Self-regulation by for-profit companies has not always worked well in our world. One need only peruse decades of headlines about air, water, and other environmental pollution by companies to understand how self-regulation has failed in many cases in the past.

What could be helpful in any regulatory efforts around AI, however, is that Microsoft has pledged to publicly share its learnings and best practices, along with the extensive tools it uses to guide its efforts. That includes the company’s Responsible AI Standard, a framework that translates high-level principles into actionable guidance for Microsoft’s engineering teams. These resources will certainly be beneficial as these discussions continue among other companies and nations.

Microsoft also laid out similar guidelines for AI research, AI infrastructure, and the use of AI for social good to drive improvements in accessibility, digital literacy, sustainability and climate change, human rights, cybersecurity, and other societal challenges. I laud these commitments and hope that other companies working with AI are making similarly compassionate moves.

Overview of Microsoft’s AI Approach

Microsoft said it sees AI as the “defining technology of our time,” and with that, the company acknowledges that it has a great responsibility to ensure its safe use around the world.

By taking a measured, planned, and continually evolving approach to working safely with AI, Microsoft is showing proper responsibility as a global technology leader for this still-nascent technology, which holds both incredible capabilities and dastardly dangers. I am pleased to see Microsoft take these responsibilities so seriously, though it is exactly how I imagined the company would react.

Accordingly, Microsoft is also honest in admitting that its planned AI approach cannot eliminate all the risks associated with AI. I believe this is a critical acknowledgement that will keep the company constantly on its toes about the dangers of AI in everything it does, and it will also do much to ensure that AI’s use in Microsoft products is never taken for granted.

Microsoft pledges to continually monitor the risks of AI and to make changes and adjustments as needed to maintain a top-to-bottom responsible AI effort across the company.

“We have made huge investments in AI because we are optimistic about what it can do to help people, industry and society, and because we’re committed to bringing technology and people together to realize the promises of AI responsibly,” Microsoft said in a statement.

These are all laudable goals and practices from Microsoft. Based on its statements, I believe the company will honor its words and will continue to work hard to meet its goals and AI responsibilities around the world. There will always be many questions to raise about AI today, but I believe that Microsoft’s open and well-thought-out approach and guidelines could also do much to help other companies as they grapple with these issues.

Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum Research as a whole.

Other insights from Futurum Research:

A Wild Week as Tech Giants Microsoft and Google Reveal AI-Powered Search and Browser Integrations

AI from Microsoft, Google, IBM, Zoho Trident, GF’s GM partnership, and Elon Breaking Twitter – The Six Five Webcast

Microsoft Viva Sales Adds Generative AI-Powered Experiences, Helping Sellers Improve the Content and Timeliness of Email Communications

Image Credit: Microsoft