
AI and Open Source at Microsoft – The Six Five On the Road

On this episode of The Six Five – On the Road, hosts Patrick Moorhead and Daniel Newman are joined by Microsoft’s Eric Boyd, CVP, Azure AI Platform, and Brendan Burns, CVP, Azure Kubernetes Service, for a conversation on how Microsoft is embracing open source for AI development and what it means for developers.

Our discussion covers:

  • The significance of open source for developers creating AI applications
  • Microsoft’s overarching vision and approach toward open source
  • Practical examples of Microsoft’s initiatives with open source and AI
  • Details on Microsoft’s leadership with OpenAI and partnership with Mistral
  • Utilizing Kubernetes services for AI app development and recommendations for getting started with these technologies

Learn more at Microsoft.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is on the road virtually talking about AI and open source. AI has been the topic of topics for the last 18 months. And as industry analysts, we are very busy. Dan, how are you?

Daniel Newman: Pat, I’m doing really well. And yeah, you’re right, this has been a big topic all year long. But it’s not just about AI, it’s really become more about how do we build tools? How do we deploy them? How do we continue to improve upon them? How do we govern them? And of course, there’s all kinds of focus, Pat, right now on the debate between open source and more closed architectures. Do we build as a community? Do we build in a vacuum? Because some companies maybe think they know better what the market needs. Others believe that open sourcing is the best path. And Pat, altogether, what we know for sure is that AI is changing the rules, it’s changing industry, and those that can build the apps are going to be the most important in how quickly and how well this happens.

Patrick Moorhead: That’s great. I’m glad you brought that up. In fact, the last panel I did was on open versus closed innovation and AI. So let’s jump in, introduce our guests from Azure, Eric, Brendan, welcome to Six Five.

Eric Boyd: Thank you for having us.

Brendan Burns: Cool, thanks.

Patrick Moorhead: That’s great.

Daniel Newman: All right. Well, gentlemen, I really appreciate you both spending some time with us. Let’s start big picture. You heard my little bit of buildup, talking about open, talking about closed, talking about architectures, talking about developers. But why is open source so important for developers building AI apps? Eric, I’ll start with you.

Eric Boyd: Yeah. I mean, if you look at the history of AI, really, so much of it has been built on top of open source, from all of the frameworks to the language. I mean, it’s largely written in Python, and then there are frameworks like PyTorch and TensorFlow on top of that. And then, there’ve been all these other frameworks, things like LangChain and Semantic Kernel. And all these different pieces really help people share everything that goes into a model, so that people can build the latest models and really push them forward. And it’s really helped accelerate this space. It’s why we are where we’re at: because we’ve been able to build on the work of so many people coming together for that.

Brendan Burns: Yeah. I think that notion of building on top is really the critical component. If you think back, one of the things I think about is that without a scheduler like Kubernetes, the first thing any AI project has to do is say, “Okay. First get these 100 machines together, or these 1,000 machines together, and install this software for installing packages, and get all…” All of that stuff is just no longer the problem of someone who’s doing AI. It’s been deferred to container images and container orchestration. It’s just assumed to be there, and it’s assumed that the cloud’s going to provide it for you. And so, I think just that starting point, just that foundation before you even get into PyTorch and those sorts of things, answers the question: how the heck do I do a distributed system?

Because all these models, especially on the training side, are distributed systems. That alone accelerates innovation because it enables people to collaborate. You don’t have 14 different schedulers out there, where everyone training a model has to learn a different scheduler. So, I think that’s a huge foundational element. And in general, what we see in cloud is that anything that’s undifferentiated heavy lifting just becomes open source, because there’s just so much power in having a community work on it with you.
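For readers who want to see what that deferred problem looks like from the developer’s side, here is a minimal, illustrative sketch using PyTorch’s built-in distributed package. It is not tied to any specific Azure service: the assumption is that a launcher such as torchrun, or a Kubernetes training operator, sets the rank and world-size environment variables for each worker, which is exactly the kind of undifferentiated heavy lifting Brendan describes.

```python
# Minimal distributed-training sketch (illustrative, not a production recipe).
# A launcher such as `torchrun --nproc_per_node=2 train.py` is assumed to set
# RANK, WORLD_SIZE, and MASTER_ADDR/MASTER_PORT for each worker process.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU clusters
    model = torch.nn.Linear(10, 1)           # toy model standing in for a real one
    ddp_model = DDP(model)                   # wraps the model; syncs gradients
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for _ in range(10):                      # toy training loop on random data
        optimizer.zero_grad()
        loss = ddp_model(torch.randn(32, 10)).sum()
        loss.backward()                      # gradient all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The point of the sketch is what is absent: no machine lists, no package installs, no scheduler logic. The orchestration layer, whether Kubernetes or something like it, is assumed to exist underneath.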

Patrick Moorhead: Yeah. Brendan, what do you know about Kubernetes? I mean, I think you might’ve been the co-creator or something.

Brendan Burns: A long time ago, getting older by the day.

Patrick Moorhead: It is amazing. If I look 15 years ago, I’ve been in the industry too long, as Dan reminds me, but the thought of Microsoft and open source 15 years ago, maybe it was 20, was just like, “Wait a second. Microsoft doesn’t do that.” And then just an incredible pivot into open source and everything you’re doing in all the tool chains. It’s been really awesome to watch, not only from a past life working in tech companies for 20 years, but now doing the analyst gig for the last 15. It’s been pretty cool. So, a question, and Brendan, I’ll just hit you. What is your vision and strategy for open source into the future? We’ve seen what you’ve done today, but what’s the North Star for you?

Brendan Burns: Well, I mean, I think the North Star actually is totally aligned with the mission statement of Microsoft, which is to empower people to do more. And in particular, in the world of infrastructure, which is basically what I focus on, it’s enabling people to build applications they didn’t think they could write and maintain before. And especially the maintain part. I think we still see people asking, how do you build a resilient application? How do you build an application that you can roll out software to every day, and it doesn’t light on fire?

So, that kind of empowerment because we’re not going to be able to tell you what your application should look like. That’s your application, that’s your business, that’s whatever it is. But we know resiliency and reliability looks the same no matter what application you’re talking about. And so, that’s our opportunity, I think, to work on open source projects to help with that. You might say, “Well, why don’t you just bring it all in house and make it part of Azure?” But the truth is it’s a hybrid world out there. People are still running things on-prem. People are, for a variety of reasons…

Patrick Moorhead: Thank you for saying that. I mean, it’s what everybody knows, but I really appreciate you saying that. We don’t do a lot of shows where people in your position will actually talk about that. So, thank you.

Brendan Burns: Well, I think it’s actually critical, right? I said this about a lot of the open source we do. It’s because you want your solution to be the winner. You want your solution to be the thing that everybody uses. If you don’t acknowledge that people are going to want to run it in other places, people just find the solution that does go everywhere. There’s a variety of examples of that throughout the industry. So, open sourcing just makes sense effectively.

Daniel Newman: And you guys have memberships in a number of different initiatives, you’re active. I mean, the Apache Foundation, the Open Source Initiative, the PyTorch Foundation. We had pulled some data on this. Microsoft is very active in the open source community.

Brendan Burns: Well, and the majority of the workload on Azure runs on Linux. That’s where our customers are too. And so, we want to make sure we meet customers wherever they are.

Eric Boyd: I think we’re one of the largest contributors to open source. I mean, even just looking across some of our projects from VS Code and things like that, there are just a ton of places where we’re contributing to open source. And so, yeah, it’s definitely in the DNA of Microsoft now. Open source is a key part of how we think about things.

Brendan Burns: Yeah. I think the funny thing I always point out to people is, if you’re writing an open source project, whether you run it on Azure or not, you’re already using Microsoft technology. You’re already building it on GitHub, you’re probably using VS Code, you might be using TypeScript. I want to remind everybody: TypeScript, created by Microsoft. The creator of Python, which drives all this AI, is a Microsoft employee. So, it’s definitely not something that should surprise anybody at this point.

Daniel Newman: Yeah. There’s a lot happening here. So, with all these initiatives, all this investment, all this involvement in open source, Eric, talk a little bit about what Microsoft is doing with open source and AI, maybe some of the interesting projects and developments.

Eric Boyd: Yeah. Sure. So, there are a number of different ways that you can look at it, right? We’ve been focused on the infrastructure layer. And of course, that’s a really important layer for, how do we corral these thousands of GPUs to work together? So, we need Kubernetes as the framework to coordinate that. And then, what’s the framework where I say, “Hey, I need to now train this model across all of these different GPUs”? And so, there’s a host of open source frameworks; primarily PyTorch is what we use here. And so, there are a lot of places where we’re looking at that. We then move up the stack to think about, how do we serve and inference these models as fast as possible? And so the ONNX Runtime is really focused on, how do I take an AI model and make it run on a lot of different hardware as quickly as you possibly can?
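As a rough illustration of that inference layer, a minimal ONNX Runtime call looks like the sketch below; the model file name and input shape are placeholders for whatever model you have exported.

```python
# Minimal ONNX Runtime inference sketch. "model.onnx" and the input shape are
# placeholders; export your own model to ONNX first (e.g., via torch.onnx).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")      # picks an available execution provider
input_name = session.get_inputs()[0].name         # first input tensor of the graph
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image-shaped input
outputs = session.run(None, {input_name: x})      # None = return all outputs
print(outputs[0].shape)
```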

And then, you can go all the way up the stack to the sets of models that you have. And so, if you come to the Azure AI Studio, you can see a full plethora of models from all of our partners, from Hugging Face, from NVIDIA, from Meta. I mean, models like Llama 2 and Mistral 7B. All of these different models we make available to people directly through the Azure AI Studio. And customers can consume them in one of two ways. They can take the model and manage their own sets of infrastructure and run their own Kubernetes clusters and manage the GPUs. Or, for a lot of them, we’re running them as a service, what we call models as a service. And so, now you can just call directly, with a token interface: here are my tokens, please serve this model for me. And so, we’re really just trying to make it easy for people all throughout the stack to get the models that they’re looking for, to help customers get done what they’re trying to get done.
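What that token-style interface looks like from the developer’s side is roughly the sketch below. The endpoint path, environment variables, and payload shape here are illustrative placeholders rather than the exact contract of any particular model; the documentation for a given deployment in the Azure AI model catalog has the authoritative details.

```python
# Hedged sketch of a "models as a service" call: an HTTPS request with an API
# key, no clusters or GPUs to manage. Endpoint, key, and payload are placeholders.
import os
import requests

endpoint = os.environ["MODEL_ENDPOINT"]   # e.g., the URL of a serverless deployment
api_key = os.environ["MODEL_API_KEY"]     # the "token" in the token interface

payload = {
    "messages": [{"role": "user", "content": "Summarize Kubernetes in one sentence."}],
    "max_tokens": 100,
}
response = requests.post(
    f"{endpoint}/v1/chat/completions",    # illustrative path; varies by model
    headers={"Authorization": f"Bearer {api_key}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```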

Brendan Burns: And I’d add onto that a focus on working on SDKs and how we bring in that developer who… I think what we’re seeing is that a lot of the developers, they’re not AI developers, they’re developers who want to use some AI. And I think that’s a real shift over the last year, year and a bit. Maybe it’s been happening for a little bit longer than that, but I think that’s another real key differentiator for Microsoft and the way we think about it. We want everybody to be able to take advantage of these APIs and the technology that is behind them.

Patrick Moorhead: It’s funny, people always are defining, “Hey, when did AI start?” And we all chuckle and say, “Hey, the first AI algorithms were created in the 1960s. Then about 15 years ago, University of Toronto, object recognition, which then ushered in deep learning and then generative AI.” Dan and I were both at the Microsoft event with OpenAI and Sam Altman and Satya Nadella. I think we were among maybe 50 people that got invited, thank you, by the way, to do that. But that’s what people refer to when they ask, “Hey, when did this generative AI thing start?” And kudos to you for picking the right partner. It’s paying off quite well for your company and customers in many different ways. But it’s not just OpenAI. In fact, we’ve heard a lot about your recent partnership with Mistral. Can you talk about how Mistral fits in, maybe how it’s different from OpenAI, where you’re slotting that in versus OpenAI, or whether this is something the customer chooses? And we can go from there.

Eric Boyd: Yeah. I mean, maybe building on where you started, it’s the overnight success that’s been 10 years in the making, right?

Patrick Moorhead: Exactly.

Eric Boyd: I mean, I’ve been in AI for a long time, and we thought, “It’s just five years away from really hitting it big.” And that’s probably been 40 years of being five years away. But now, it feels like we’re at that moment and it’s finally hitting it big. And so, what’s the difference? It’s the quality that we’re able to achieve with these models, really through the work that we did with OpenAI to push the frontier of what’s possible and create models that were of a size and scale that we’d never really seen before. But our commitment, really, to our customers is that we want to make sure they’re going to find the tools that are going to help them get their job done. And so, we’re always going to have the best-of-breed, frontier-pushing models in our partnership with OpenAI. We’re always going to make sure that that’s available to our customers.

But we also know that other model types are starting to show up. We’re starting to see a lot more interest in these small language models, models that are not the same scale, but maybe bring a different performance characteristic, or maybe can be fine-tuned or tweaked in a particular way to really focus on a niche application. And so, Mistral really fits right in that fold there, where they’ve been doing some really innovative work. They’re a pretty interesting company in France. And some of the things that they’ve been doing and their approaches have really aligned with our thinking on a lot of things. And so, we’ve been excited to partner with them and to bring their latest models onto Azure. And yeah, we think customers are really going to be excited about the different capabilities that they can have with this range of models now.

Patrick Moorhead: Yeah. Also, if you look at European public sector companies, they want to be aligned with European companies, and I think that’s very fair. And in our research, Dan and I are seeing an incredible uptake in that, very similar to the way that some countries and regions look at high-performance computing. Everybody needs their lab where they’re going to do this. So, it’s good to see that relationship.

Eric Boyd: Yeah. I mean, from our side, that is a nice, happy coincidence. We’re looking for the companies doing the best, most innovative work, and they happen to be in France. But you’re right, there are companies who certainly are looking for a bit more of the regional specialization.

Daniel Newman: So Brendan, we did allude to the fact that you’re one of the founding fathers of Kubernetes, but its popularity continues to grow in both visibility and usability. The utility is palpable. And you’re working day in and day out alongside developers and developer organizations. Talk a little bit about how the two are intersecting, with Kubernetes and developers building AI apps, Brendan.

Brendan Burns: Yeah. I mean, I always like to point out that there’s a lot of focus on the AI, but there’s a lot more to the app than just the AI. You have to have a mobile app or a web-based app. You have to serve the APIs that drive logins and preferences and all that stuff. And so, when you’re building one of these applications, even if you’re using the APIs that Azure ML and Azure OpenAI provide, you still have to write your code and run it somewhere. And we think that the Kubernetes service on Azure is really the best place to do that. And we’ve been focusing a lot of energy on building out the reference applications and the examples and the streamlining that you need to deploy an application like that. Let’s assume you’re going to use Azure OpenAI, you’re going to build a mobile application. What is the GitHub actions pipeline that you need? What is the deployment around the world that you might want to be able to do in case it takes off in one particular geo versus another geo?

Or even, you mentioned the compliance regimes that are sometimes applied to these kinds of applications. And so, I think that end-to-end really is the thing that Microsoft, as a whole, can provide. We have a place for you to start with your source code. We’ve got great APIs in Azure for you to call. We’ve got the IDEs that you want to use. We’ve even applied the AI, with GitHub Copilot, to make it easier for you to write the code. And so, I think there’s just a comprehensive pipeline there to empower individual developers. Because again, you talked about AI being five years out; I think what happened really was that it crystallized everybody’s imagination. Everybody suddenly was like, “Oh, my gosh. I have this idea and now I can do it.” And so, there’s this sudden influx of people who just really want to build the thing that’s in their head, and we want to help them get there as quickly and as easily as they can.
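To make the "there’s more to the app than the AI" point concrete, here is a minimal sketch of the application-side code: a small web API, the kind of thing you would containerize and run on AKS, that calls an Azure OpenAI deployment. It assumes the openai Python package (v1+) and FastAPI; the deployment name, endpoint, and API version are placeholders for your own configuration.

```python
# Minimal app-side sketch: a web API that wraps an Azure OpenAI deployment.
# Endpoint, key, API version, and deployment name are placeholders.
import os

from fastapi import FastAPI
from openai import AzureOpenAI

app = FastAPI()
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # placeholder; use the version your resource supports
)

@app.get("/ask")
def ask(q: str):
    # The AI call is one small piece; logins, preferences, deployment pipelines,
    # and the rest of the application still live around it.
    chat = client.chat.completions.create(
        model="my-gpt-deployment",  # placeholder Azure OpenAI deployment name
        messages=[{"role": "user", "content": q}],
    )
    return {"answer": chat.choices[0].message.content}
```

Everything around this handful of lines (the container image, the GitHub Actions pipeline, the multi-region rollout) is the part Brendan is describing.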

Patrick Moorhead: I have to ask, is it Kato or Kaito, which is the Kubernetes AI Toolchain Operator?

Brendan Burns: I’m probably going to get in trouble with the team, but I’ve been calling it Kato.

Daniel Newman: Can you call it Kaito?

Brendan Burns: Yeah. It’s like Kube Control. There’s two different ways you can pronounce the Kubernetes tool, as Kube Control or Kube Cuddle. I happen to be Team Kube Control, but I get in trouble with some of my teammates.

Patrick Moorhead: Listen, I mean, this stuff is very difficult. And you do have what I call “bag of parts” people, who will put together everything on their own, and they’ll do it with a big happy smile on their face. But as we’ve learned, to get these tools in the hands of even more people, there have to be some abstraction layers or some connective tissue. Kato, I think, is a good example of that. It’s pulling together multiple open source projects and combining that with different models. Bring your own containerized model; I think that’s great stuff here.

Brendan Burns: Anytime we do open source work, we hope that we’re going to get a community around it, and that maybe that community will eventually extend beyond Azure. And so, I think we’re trying to make sure it’s as easy as possible. Obviously, bringing it into the Azure Kubernetes service so that it’s super easy to light it up there. But I think we envision the idea that this is something that’s necessary for everyone in the community.

Patrick Moorhead: And by the way, thanks for bringing us back into reality that AI is not the application. It’s a really exciting part that makes applications a lot better, but there’s actually the application, which needs provisioning, needs tools, needs containers. So, thank you for pulling us back in there. I think we need a little bit more of a reality check sometimes as the industry talks this through. That’s just-

Eric Boyd: I think that’s a really interesting point. A couple of years ago, I was a little frustrated that people weren’t really getting that AI should be taking off, and it took an application, ChatGPT, where it was finally accessible to people, where they could understand how you could use it. So, I think that’s a really important point: AI is a really important tool in this application that you’re building, and you have to knit it all together.

Daniel Newman: Often it’s the killer app that ends up driving something into the height of consciousness. We’ve been waiting for it for multiple decades. Companies like Microsoft have used advanced analytics and various machine learning techniques to better understand customer attrition and what’s going to drive a buying decision and the next best action. And all of a sudden, we create something that completes your sentences and makes search more digestible. And everyone’s like, “I can-”

Eric Boyd: Now I get it, now I see what it can do.

Daniel Newman: I mean, we’ve been using rendering… All the AI that’s been used to render the games we’ve been playing. I mean, it’s been everywhere. It’s just that people finally took notice all on the same day.

Brendan Burns: And I think, though, also credit to Eric and his team for making it accessible too. Because if your imagination is captured, but it’s hard to use… Because there’s more to AI, I think, than just the generative side too. Every time I see someone doing data processing on a handwritten form, I’m like, “Really? You could send a scan of that from your phone into Azure AI and it will just give you the text back.” Please stop typing it from this to this.

Patrick Moorhead: Love it.

Brendan Burns: Yeah. You need that moment of capturing what’s possible. Because that person who’s doing it, they don’t know what’s possible out there.

Patrick Moorhead: This is great stuff. And if our audience is still tuning in this far, they’re going to want to know, “Hey, how do I get involved? What are the resources? What are the next steps?” What would you recommend for people exploring and experimenting, or people who just want to scale? Eric, maybe we’ll start with you.

Eric Boyd: Yeah. I mean, to get started with AI, that’s really what we built the Azure AI Studio for. It really tries to make it simple to get access to lots of different models, to get into a playground where you can try them out, and to start building the most common patterns that people have, which is usually: I want to put some data in a database or in a search index, and then I want to retrieve that data and feed it into my model. So, it gives you the scaffolding for how to go and build those applications and really gets you ready to go deeper into, “All right. Let’s build a really rich application out of that.” So, that’s really where we would tell people to start: if you go to Azure AI Studio, you can find all the different things you need to really build everything that you want from there.
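The retrieval pattern Eric describes can be sketched in a few lines. In the sketch below, a naive keyword match stands in for a real search index such as Azure AI Search, and the final model call is elided; the structure (index, retrieve, feed into the prompt) is the point.

```python
# Minimal retrieval-augmented-generation sketch. The keyword "search" here is a
# toy stand-in for a real search index; the prompt would be sent to a chat model.
documents = [
    "AKS is Azure's managed Kubernetes service.",
    "ONNX Runtime accelerates model inference across hardware.",
    "Azure AI Studio hosts a catalog of foundation models.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Naive relevance score: number of words shared with the query.
    words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:k]

question = "What is AKS?"
context = "\n".join(retrieve(question, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # in a real app, this prompt goes to the model
```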

Brendan Burns: Yeah. For sure. I would also just add that I think it’s great to really play around with the existing stuff to get a sense for what these models are good at and what they’re not good at. Use it for real tasks, use it to plan your vacation, use it to do something. Because I think that we can imagine a lot of stuff, and if you play around with the existing chat-based models, like what’s in the Copilot app, you get a good sense, I think, for what it can do today and what might be maybe a couple more years out into the future. And that’ll help you build a really good application as well.

Patrick Moorhead: Now, inside of AI Studio or Azure Machine Learning, I know one of you had mentioned models as a service. So basically, go in and pick your models. And those aren’t just large models, those are some small language models as well. I think, Eric, you had talked about that potential. Well, first of all, if you can get something done with a smaller model, use the smaller model, because the inference cost of getting it done is going to be smaller. And what I’m hearing from enterprises is even the, “Okay. Do I leverage my own model, and then fine-tune it with my data to get the results that I want? Or do I have to actually go in and create my own model that’s more focused, let’s say, for personal finance, for tax, for medicine?” We’re seeing a lot of these vertically oriented capabilities that people are leaning into. So, this would enable people to try these different things out and see the result that they might get. Is that-

Eric Boyd: That’s right. If you go to AI Studio, we have a model catalog which literally has thousands of different models in it. And so, you can try out the different models. And that pattern you’re referring to, of do I take my data and use it to fine-tune an existing model and then specialize it for my task, is where we’re seeing a lot of interest from companies trying to explore and see, hey, is that going to work well for their particular use case? And so, absolutely we would encourage people to try it. And as Brendan said, you have to experience this stuff to see where it works well, or where there are gaps or problems with it.
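As one hedged illustration of that fine-tuning pattern, here is a sketch using the open source Hugging Face stack rather than any specific Azure AI Studio workflow; the base model and dataset are placeholders standing in for "an existing model" and "my data."

```python
# Illustrative fine-tuning sketch with the Hugging Face stack (transformers +
# datasets). Model and dataset names are placeholders for your own task.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "distilbert-base-uncased"  # placeholder "existing model"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

dataset = load_dataset("imdb")          # placeholder standing in for "my data"

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    # A small slice keeps the sketch cheap to run; use the full set for real work.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```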

Patrick Moorhead: And I have to ask about Phi and Orca, those definitely sound like open source names. Take me through what these two are, if you don’t mind.

Eric Boyd: Yeah. Those are two models that we’ve recently open sourced. The initial versions are very small, but we’ve focused on the way that you build the models, and particularly the data that you use to train them. And with that, we’ve been able to get really strong performance, generally like a class higher in performance with a class lower in terms of size. And so, the Phi-1.5 model competes very favorably with 7-billion-parameter models. And so, if we can find the right applications for that, then you’re just that much cheaper; you’re in a much smaller class of model scale with the same performance.

Daniel Newman: Well, listen, Eric and Brendan, it’s been really fun to go through this whole thing and break it down, some real geeky stuff here, which Pat and I love. And of course, we also really appreciate how you’ve taken some of these very complex technology topics and made them sensible for the business line, and for those that are probably waiting for a generative tool to eventually do their coding for them. A whole nother debate for a whole nother day. But before I let you both go, any final parting words of wisdom for our audience here on The Six Five? Eric?

Eric Boyd: I mean, I think this is a really exciting space moving really quickly. And so, the things that we talked about six months ago are even pretty different now. Just making sure that you’re keeping close tabs on the latest things, and trying out these small and open source models, is really pretty exciting. And so, I think customers are going to find a lot of applications and uses for them.

Brendan Burns: For sure. And I’d just call out: check out the Kaito project if you’re interested in actually running stuff. It’s up on GitHub and Azure, K-A-I-T-O. And if you happen to be in Paris, if you happen to be going to KubeCon, we’ll be talking about it at the booth at KubeCon coming up in just a few days. So, come swing by and say hi.

Daniel Newman: Well, listen, I want to thank you both again very much for joining us. Thanks for the wisdom. We always appreciate having the team at Microsoft and the Azure team join us here on The Six Five. We’ll be tracking your progress. We’ll be watching. And of course, I love the encouragement to those devs out there to continue to test and play and build. That’s oftentimes where the great apps, the killer apps, are found, in places we least expect. So Eric, Brendan, thanks for joining. Let’s have you both back soon.

Brendan Burns: Absolutely.

Eric Boyd: Thanks for having us.

Daniel Newman: All right, everyone, hit that subscribe button, join us for all of our episodes here on The Six Five. We appreciate you being part of our community. But for this show, for Patrick Moorhead and myself, it’s time to say goodbye. We’ll see you all later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, The Wall Street Journal, and hundreds of other sites around the world.

A 7x best-selling author, including his most recent book, “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
