KubeCon Paris: All in on AI and Driving Enablement
KubeCon and the Cloud Native Computing Foundation (CNCF) were all about AI and how this next technology wave will be supported and driven by the community. Priyanka Sharma, Executive Director of the CNCF, led off the keynote by highlighting how end users developed a sense of technical capability during the digital transformation phase and how that experience is now spilling into the AI age. The challenge, she accurately observed, will be going from concept to production. In her view, bringing in opinionated solutions (defined stacks and AI offerings) risks building a walled garden.

In a sense, Priyanka is right. New approaches, tools, models, compilers, frameworks, and more are spreading rapidly in the drive to capture the business value of AI. But she also acknowledged the complexity of the environment: teams need the freedom to switch to the latest technology without worrying about protecting their here-and-now investment. The premise of an open market is that it pushes boundaries and leads to more innovation. One resource the CNCF just released is its cloud native AI reference architecture, and expect to see more tools in the form of training from the CNCF.

Platform Engineering

The second major theme was platform engineering (PE), which we have written and spoken about for the past 24 months. PE is a necessary function for scaling container environments, and it becomes even more essential with AI. One of the major sponsors, NVIDIA, highlighted the requirements for optimizing GPU utilization, discussing new approaches such as Dynamic Resource Allocation (DRA) for managing scheduling and hop distance, especially in scale-out environments. Fault tolerance and the resiliency to repair at scale are the domain of the platform engineer, even with automation and observability, and all of these areas require tools and investment. Despite this discussion, there is still a debate about whether platform engineers and developers are the same thing. This analyst group thinks not, especially at scale.
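Dynamic Resource Allocation is still an evolving Kubernetes API, but a minimal sketch of how a workload might claim a GPU through it could look like the following. Treat the specifics as illustrative assumptions: the `single-gpu` template name and `gpu.nvidia.com` class name are hypothetical, and the alpha API version varies by Kubernetes release.

```yaml
# Hypothetical sketch: a ResourceClaimTemplate names a class of device,
# and a Pod references a claim instead of a fixed resource count.
# API version is alpha and may differ in your cluster.
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaimTemplate
metadata:
  name: single-gpu            # illustrative name
spec:
  spec:
    resourceClassName: gpu.nvidia.com   # illustrative driver-provided class
---
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  containers:
  - name: trainer
    image: trainer:latest     # placeholder image
    resources:
      claims:
      - name: gpu             # binds the container to the claim below
  resourceClaims:
  - name: gpu
    source:
      resourceClaimTemplateName: single-gpu
```

The design point NVIDIA was making is visible here: the claim is resolved by a driver at scheduling time, which is what opens the door to topology-aware placement (hop distance) rather than simple integer GPU counts.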

Container Native

Continuing on this general theme, there was a discussion on AI and container native: what are the differences and similarities, and how do container-native platforms need to grow to accommodate AI? One interesting analogy was that if inferencing is the new web app, then Kubernetes is the new web server, the key distinction being the requirements for managing data: collection, cleaning, and so on. Why the analogy? Most AI environments we see are built on Kubernetes, and just as an ecosystem grew up around web apps, the same will happen as AI grows around Kubernetes. But Kubernetes and cloud-native apps were built for CPUs, not GPUs, and were not designed with data scientists in mind. They do not want to bother with the YAML they are presented with. Time for more innovation.
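To see why data scientists balk, here is the kind of YAML a GPU workload typically requires today with the common device-plugin approach. The image name is a placeholder; the `nvidia.com/gpu` key follows the NVIDIA device plugin convention, which assumes that plugin is installed on the cluster.

```yaml
# What a data scientist is asked to write just to get a notebook on a GPU.
# Assumes the NVIDIA device plugin exposes the nvidia.com/gpu resource.
apiVersion: v1
kind: Pod
metadata:
  name: notebook
spec:
  containers:
  - name: jupyter
    image: jupyter/base-notebook   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1          # whole-GPU granularity only
```

Even this simple case exposes the mismatch: the request is an opaque integer, with no notion of GPU memory, sharing, or topology, and none of it resembles the Python-centric tooling data scientists actually work in.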

For AI there is still the need for the platform engineer, but here the role is called the AI engineer or, more commonly, MLOps. Basically, the role is tasked with designing and managing scale-out, distributed, heterogeneous systems, complete with the massive data pipelines to support them. Hold on, we are about to get even more complex.

All in all, we are going for a very fun ride with AI and Kubernetes, and CNCF plans to be there every step of the way.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are those of the analyst individually, based on data and other information that might have been provided for validation, and are not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

The Growth of Kubernetes and the Platform Engineering Evolution

KubeCon 2022: From DevOps to PlatformOps

Exploring NetApp Astra and Kubernetes: Innovations and Insights – Infrastructure Matters

Author Information

Camberley Bates

Now retired, Camberley brought over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm as Managing Director.

Her career spanned all elements of sales and marketing, and her 360-degree view of addressing challenges and delivering solutions came from crossing the boundary between sales and channel engagement at large enterprise vendors and running her own 100-person IT services firm.

Camberley provided Global 250 startups with go-to-market strategies, created the new market category "MAID" as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and succeeded in growing it from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.

She holds a Bachelor of Science in International Business from California State University, Long Beach, and executive certificates from Wellesley and the Wharton School of Business.

