
Intel Developer Cloud: A Conversation with Intel’s Markus Flierl at KubeCon Paris 2024 – The Six Five In the Booth

On this episode of The Six Five — In the Booth, host Steven Dickens is joined by Intel’s Markus Flierl, Corporate Vice President, Intel Developer Cloud, for a conversation about how Intel is bridging the gap between hardware prowess and cutting-edge software development with its Developer Cloud, especially in the realms of AI, security, and performance at the KubeCon Paris 2024 event.

The discussion covers:

  • The role and vision behind Intel Developer Cloud and how it serves the developer community.
  • Intel’s expanding footprint in the software ecosystem, particularly at events like KubeCon.
  • Enhancements in AI capabilities, performance, and security within Intel’s software portfolio.
  • The significance of Intel’s hardware and software synergy for Kubernetes developers.
  • Intel’s approach to ensuring data security in the age of prevalent AI technologies.

Learn more at Intel.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Steven Dickens: Hello and welcome. I’m your host, Steven Dickens, and you’re joining us here for a Six Five Media In the Booth coming to you live from KubeCon in Paris. I’m joined by Markus from Intel. Hey Markus, welcome to the show.

Markus Flierl: Thanks for having me.

Steven Dickens: So tell us a little bit about your role and what you do for Intel.

Markus Flierl: Yeah, so I lead Intel’s cloud services, which is a collection of things that includes Intel Developer Cloud, as well as Granulate, our performance optimization service, as well as Cnvrg, which is our MLOps platform. I joined Intel just two years ago. I came over from NVIDIA, where I built out NVIDIA’s cloud infrastructure. And at Intel, I’m focused on building out cloud services.

Steven Dickens: So the obvious question that I think a lot of the listeners and the viewers are going to be asking, or wanting me to ask at least, is: I think Intel, I think hardware. This is a cloud native Kubernetes event. Most of the companies here, if not all of them, are software companies. Tell me a little bit about Intel from a software perspective.

Markus Flierl: Well, it turns out Intel is one of the biggest software companies in the world. In order to enable all the hardware, it takes a lot of software and we’ve been very aggressively building up our software capabilities over the years. And increasingly what we are also doing is in addition to just building out the software, part of it is obviously the lower levels of the firmware. Through Linux, we’re contributing to the various open source frameworks. But increasingly what we’ve done is actually also built up higher level services that we can make available to our customers to maximize the value.

So as an example, for instance, what we’ve done with Granulate, we build out all the software hardware capabilities. With Granulate, we can also optimize the performance of a workload. And so at the end of the day, what matters for the customer is what is my overall … How much can I get out of a given piece of hardware? And it’s a combination of what can I optimize in the hardware, but then what can I optimize at the higher levels of the stack that maximizes the value for the customer?

Steven Dickens: One of the key themes this week is platform engineering. And I think it’s interesting to talk about Granulate there. I think I’m starting to see this whole space mature. We’re starting to see cost optimization. We’re starting to see performance. Are you seeing that same trend? Is that what Intel’s seeing with Granulate, and was that behind the thought process there?

Markus Flierl: Yeah, so I think what’s happened is a lot of people have moved all their workloads into the cloud, and it’s really convenient to spin up services, but then at the end of the month you get the cloud bill and it’s a big shock for everybody. And this is where Granulate can come in and help. And the nice thing with Granulate is that it’s additive to a lot of other optimization tools. A lot of times the customer’s like, “Oh, we already optimized a lot of things.” And it’s like, “Well, what have you really optimized?” Well, they’re using various tools in order to optimize, for instance, the instance types that they’re using in the cloud. But what Granulate helps you do is really increase the efficiency and shrink down the footprint of the workload, which means in a lot of cases you actually end up getting more performance even though you’re actually using fewer resources. So you’re saving costs and at the same time you’re getting better performance, which is not intuitive, right? You always think you add more money and that means …

Steven Dickens: It’s a trade-off between the two.

Markus Flierl: Exactly. But in this case, it’s like no, you increase the efficiency, you’ve shrunk down your workload, which means now it can actually run faster. You’re getting more throughput, you’re getting better response times, and at the same time you end up with a smaller cloud bill.

Steven Dickens: And as we start to see people running more and more mission-critical workloads, looking to get more transactions, starting to support different types of environments, that’s becoming more crucial.

Markus Flierl: Absolutely. Very much so.

Steven Dickens: I’ve got to ask, we are recording this in 2024, and we’ve gone a few minutes in and we haven’t talked about AI. It’s probably the first time that we’ve gone more than five minutes into any one of these episodes without talking about AI.

Markus Flierl: We’re starting the trend here.

Steven Dickens: I know we’ll get on trend. We are getting there eventually. But no, all joking aside, we obviously see a lot from Intel on the hardware side, what you do with Gaudi and various other pieces. What’s going on from the software side, from your portfolio, with regards to AI?

Markus Flierl: So the big investment that we’ve made, in addition to building out the Gaudi hardware and the GPU Max series, has been in the Developer Cloud. So in order to accelerate the adoption of this new hardware, we’ve actually decided that we’re making major investments in building out our own cloud environment where we can have developers come in. They can start with just a VM, or maybe just one Gaudi card, and they can go to a whole Gaudi system. Or if they want to do some foundation model training, they can take an entire cluster of thousands of Gaudis and they can get the work done there. So doing that has really helped us to directly interact with the end customers, and it also allows us to get the feedback from them to say, “Well, we have all this raw compute power.”

We should be seeing 2x higher performance than what they would be getting on an A100 or an H100. And if they’re not getting it, that gives us the opportunity to work with them and make sure that, for all of the workloads where we get feedback, we can optimize things.

Steven Dickens: So Markus, we’re at KubeCon. Obviously the Kubernetes community for Europe is gathering here, all the communities are here. Tell me a little bit about Intel and Kubernetes. What are you doing in the space? What are you contributing to the community? And really, why should people think Intel when they think Kubernetes?

Markus Flierl: Yeah, first off, as a company we’re contributing, we have dozens of engineers, and we are actually one of the main contributors to Kubernetes. And a lot of these optimizations, up to this point we’ve been upstreaming them, and then typically it takes a while for them to come back downstream. One of the things we’ve started to do, with the acquisition of Granulate, is that that has really opened this up. We have an outlet for a lot of these optimizations that we’re doing; we can actually make those available through Granulate. And that means that with Granulate, if you’re running a Kubernetes workload, a lot of times workloads keep on growing and your cloud bill is growing. With Granulate, we are able to actually optimize these workloads and minimize the footprint, which minimizes your cloud bill essentially.

Steven Dickens: And that’s crucial as these workloads move into production, we start to see them scale in their size. We don’t want to see the corresponding increase in the cloud bill.

Markus Flierl: Absolutely. Absolutely. More and more of these workloads are moving over to Kubernetes. I think the problem even gets magnified, and that’s where we can really contribute with Granulate. And it’s not just Kubernetes. The nice thing about Granulate is it also works really well for any kind of big data workloads, any kind of data lake workloads. In fact, we just had a session yesterday with Vijay Premkumar from American Airlines, and they were talking about the optimization they’ve done in their data lake environment as well as with Kubernetes, as well as any other Java-based or Go workloads that we can optimize with Granulate. It’s really powerful, and it’s fairly quick. Typically you download the installer, the profiler, you run it for a week, and based on that, in most cases, you’ll actually see instant results.

Steven Dickens: Fantastic. So you mentioned there the contributions. That’s how this community checks your bonafides, if you will. Can you maybe double-click there around how big a contributor Intel is to a lot of the communities and projects that are here?

Markus Flierl: Yeah, so as I said, we have thousands of developers and we are contributing to Kubernetes, we’re contributing to Linux, we’re contributing to various AI frameworks. We are contributing to Istio. It’s really amazing for me, coming to Intel. I knew that Intel was contributing, but I didn’t realize the magnitude …

Steven Dickens: The scale of contribution.

Markus Flierl: The scale. Yeah, and specifically to your other question, when it comes to Kubernetes, Granulate is actually an amazing tool; when you want to optimize your Kubernetes workloads, Granulate can help you. We just had a discussion yesterday with Vijay Premkumar from American Airlines. They have been able to shrink down their Kubernetes workloads by almost a factor of two by leveraging Granulate.

Steven Dickens: That’s huge. That’s huge.

Markus Flierl: It’s really, really powerful. And essentially, what it does …

Steven Dickens: As I say, we’re watching the track of Kubernetes mature, platform engineering’s kicking in. I think it’s that type of actual specific example of we’ve gone out of the tinkering phase, we’re now into production. We need to optimize. Maybe paint for me, if you would, the Granulate story of exactly how they did that. How did they get that optimization?

Markus Flierl: So what happens is that you install the Granulate agent into your Kubernetes environment, into your Kubernetes pods. We will then profile what your workload is doing, and we detect inefficiencies. Oftentimes what happens is that DevOps people are more worried about getting the workload running, but not necessarily about how to really optimize it. And Granulate will do that for you. It does it autonomously, meaning you don’t have to sit there and constantly optimize things. It’s a service that you’re consuming, and it will make sure that your pods are optimized and you’re not wasting CPU cycles. And based on the profiling data, we will then properly size those pods and containers. And in those cases, like the case of American Airlines, they were able to shrink down the workload by almost a factor of two.
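The right-sizing idea Markus describes here (profile the pod’s actual usage, then shrink its resource requests to match, with some headroom) can be sketched in a few lines. This is a hypothetical illustration of the concept, not Granulate’s actual algorithm: the function name, the 95th-percentile choice, and the 15% headroom are all assumptions for the sake of the example.

```python
# Hypothetical sketch of profiler-driven pod right-sizing: take observed
# usage samples, pick a high percentile, add headroom, and emit Kubernetes
# resource-request strings. Percentile and headroom values are assumptions.

def recommend_requests(cpu_samples_millicores, mem_samples_mib,
                       percentile=0.95, headroom=1.15):
    """Recommend pod resource requests from profiling samples."""
    def pctl(samples, p):
        ordered = sorted(samples)
        idx = min(int(p * len(ordered)), len(ordered) - 1)
        return ordered[idx]

    cpu = round(pctl(cpu_samples_millicores, percentile) * headroom)
    mem = round(pctl(mem_samples_mib, percentile) * headroom)
    return {"cpu": f"{cpu}m", "memory": f"{mem}Mi"}

# A pod that requested 2000m CPU / 4096Mi memory, but whose profiled usage
# is far lower, gets a much smaller (and cheaper) recommendation.
cpu_usage = [180, 220, 250, 300, 260, 240, 210, 290, 270, 230]
mem_usage = [900, 1100, 1000, 1200, 1050, 980, 1150, 1020, 1080, 990]
print(recommend_requests(cpu_usage, mem_usage))
# → {'cpu': '345m', 'memory': '1380Mi'}
```

The “shrink the footprint, keep the performance” effect comes from the gap between what teams request to be safe and what the workload actually consumes; the profiler closes that gap automatically.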

Steven Dickens: That’s huge.

Markus Flierl: Huge savings.

Steven Dickens: So Markus, as we start to think about wrapping here, what would be those three key takeaways as people start to think Intel and software? And maybe frame that in the perspective of Intel software and Kubernetes.

Markus Flierl: I would say the first one is definitely efficiency. The performance story. What can I do there? And again, Kubernetes is becoming the de facto standard across most workloads. I mean, there’s always a legacy workload that still needs to run on my mainframe, but any of the modern workloads would be running on Kubernetes. A lot of AI workloads run on Kubernetes. And this is really where we can help with performance optimization with Granulate. And then in terms of any new AI work that people want to do, Intel Developer Cloud would be an ideal starting point for that.

The third one that is also top of people’s minds would be the Intel Trust Authority. And whether it’s an AI or a non-AI workload, there are all these supply chain attacks that we’ve been seeing over the last few years. My prediction is that with AI, the kinds of threats and the kinds of attacks we’re going to be seeing are going to get even worse. And I think with Intel Trust Authority, we can avoid a lot of these problems by doing attestation. We are using the Trust Domain Extensions that are in our Xeon chips, for instance, and we provide an attestation service with Intel Trust Authority where we can actually guarantee that the physical hardware that you’re running on has not been tampered with, and the software you’re deploying is what you expect to be deployed. And you can avoid anybody tampering with your CI/CD pipeline and injecting malicious code. Those kinds of threat vectors that we –
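The attestation flow described above can be illustrated conceptually: a trusted execution environment signs a “quote” over its measurements with a hardware-rooted key, an attestation service verifies the signature and compares the measurements against known-good values, and only then issues a token the relying party can trust. This is a toy sketch of that handshake, not the Intel Trust Authority API; the key, the HMAC scheme, and all names are illustrative assumptions (real TDX/SGX quotes use asymmetric signatures and certificate chains).

```python
# Conceptual sketch of remote attestation. All keys and names are
# illustrative; real attestation uses hardware-rooted asymmetric keys.
import hashlib
import hmac

TEE_KEY = b"hardware-rooted-key"  # stands in for a TDX/SGX signing key
KNOWN_GOOD = hashlib.sha256(b"trusted-firmware+workload").hexdigest()

def produce_quote(measurement):
    """The TEE reports its measurement, signed with its hardware key."""
    sig = hmac.new(TEE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": sig}

def attest(quote):
    """Attestation service: verify the signature, check the measurement."""
    expected = hmac.new(TEE_KEY, quote["measurement"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, quote["signature"]):
        return None  # quote was forged
    if quote["measurement"] != KNOWN_GOOD:
        return None  # hardware or software stack was tampered with
    return "attestation-token"  # relying party can now trust the TEE

good = produce_quote(KNOWN_GOOD)
tampered = produce_quote(hashlib.sha256(b"tampered-workload").hexdigest())
print(attest(good))      # → attestation-token
print(attest(tampered))  # → None
```

The point Markus makes about the CI/CD pipeline follows from the second check: if injected code changes the measured software stack, the measurement no longer matches the known-good value and no token is issued.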

Steven Dickens: Intel’s been innovating for a long time with SGX trying to focus in on confidential computing. Is that trust authority part of that overall story?

Markus Flierl: Correct. Correct. So SGX and TDX, those are the underlying capabilities that are built into the hardware. So we have these trusted execution environments.

Steven Dickens: Taking that up the stack, basically?

Markus Flierl: Taking it up the stack and then providing a cloud service. Intel Trust Authority is a cloud service that essentially closes the loop: it’s a service that validates that those underlying capabilities we have are actually operating as expected.

Steven Dickens: So Markus, this has been a fascinating conversation, lots to unpack around Intel and their software portfolio. Thank you very much for joining me on the show.

Markus Flierl: Thank you for hosting me.

Steven Dickens: You’ve been watching another episode of Six Five Media coming to you live from KubeCon, in the Intel booth. We’ll see you next time. Thank you very much for watching, guys.

Author Information

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.
