
With Growth of In-Production Kubernetes, Container Platform Providers Add New Capabilities

In-Production Kubernetes Use Expanding

Kubernetes-managed containers have escaped the confines of application development and pilot projects and are expanding rapidly into full production. EG’s recent survey of container management revealed that Kubernetes is in production at over 50% of surveyed customers, with 60% of those customers running more than five workloads (applications) and 70% already running multiple clusters. Within a year, 55% of customers expect to be running six or more clusters, driving higher requirements for global observability and management tools, and new challenges for IT operations executives.

Some customers are avoiding the IT operations challenge by leveraging managed container services from providers like AWS, GCP, Azure, and VMware. But for customers seeking more flexibility, container management platform providers are responding to this race to production with a range of new capabilities that support self-management of production scale-out. Recent announcements from Red Hat and D2IQ offer some key examples.

Vendors Address the Scale Issue

To address rapidly scaling workloads, Red Hat has made autoscaling of application workloads easier and more flexible in OpenShift 4.11. Customers can now define custom metrics for scaling, leveraging an automated OpenShift Operator to scale their application workloads without the need for manual intervention.
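
OpenShift’s custom metrics autoscaler is built on the upstream KEDA project, so the pattern is straightforward to sketch. The example below is a minimal illustration, not Red Hat’s reference configuration; the checkout Deployment, the shop namespace, the Prometheus endpoint, and the request-rate query are all hypothetical.

```yaml
# Minimal KEDA-style ScaledObject: scale the (hypothetical) "checkout"
# Deployment on a custom request-rate metric pulled from Prometheus.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: checkout-scaler
  namespace: shop                    # hypothetical namespace
spec:
  scaleTargetRef:
    name: checkout                   # hypothetical Deployment to scale
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090   # hypothetical endpoint
        query: 'sum(rate(http_requests_total{app="checkout"}[2m]))'
        threshold: "100"             # add a replica for roughly every 100 req/sec
```

Once a definition like this is applied, the operator drives the replica count up and down against the live metric, which is the hands-off behavior Red Hat is emphasizing.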

D2IQ’s enhancements are focused on scaling out clusters. DKP now includes what D2IQ calls “federated application management” for fleets of clusters. Application lifecycle management (the ability to define the desired state of an application and then manage updates, revisions, and so on over time) is a necessity for container management, and the ability to manage application updates and configuration across a multi-cluster environment (including large fleets) has become a must-have feature for any customer intending to scale out. DKP’s latest enhancement takes this capability to a new level, letting the customer define a desired state for a configuration of multiple applications running together. DKP then performs application lifecycle management as a single integrated activity, eliminating the need for the customer to define and manage the desired state of each application individually. Customers using DKP 3.4 can define and execute application lifecycle management across multi-cluster (and multi-cloud) environments. This is a particularly valuable capability for customers deploying large numbers of clusters with similar container contents (e.g., an edge-based application), and/or where rolling updates are required to manage certain clusters as a group.
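
D2IQ has not published the schema in this announcement, so the sketch below is purely illustrative: the ApplicationBundle kind, the example.com API group, the cluster labels, and the application names are all hypothetical. It is meant only to show the declarative pattern of defining one desired state for a group of applications and targeting it at a labeled fleet of clusters.

```yaml
# Illustrative only: not DKP's actual API. One desired state for a group of
# applications, applied as a unit to every cluster matching the selector.
apiVersion: example.com/v1alpha1     # hypothetical API group
kind: ApplicationBundle              # hypothetical kind
metadata:
  name: edge-stack
spec:
  clusterSelector:
    matchLabels:
      location: edge                 # target all clusters labeled as edge sites
  applications:
    - name: ingress-nginx
      version: 4.7.1
    - name: prometheus
      version: 25.1.0
    - name: retail-pos               # hypothetical line-of-business application
      version: 2.3.0
      values:
        replicas: 2
  rolloutStrategy:
    type: RollingUpdate              # update clusters in waves rather than all at once
    maxUnavailableClusters: 1
```

The point of the pattern is that the platform, not the customer, reconciles every cluster in the fleet toward this single definition and handles version changes as one coordinated rollout.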

The company also announced a significant new capability in global observability. As customer production environments scale (in both volume and complexity), administrators need automation to help maintain availability and performance. DKP Insights, a predictive analytics tool that detects anomalies in Kubernetes clusters and workload configurations, has historically provided global observability by running an analytics engine on each Kubernetes cluster and then accumulating information for reporting, alerting, and troubleshooting on the DKP management console. With this latest announcement, Insights also automatically checks workload configurations against preset definitions of best practices and suggests improvements to the administrator, who can then drill down on each finding to get additional information, a root cause analysis, and suggested solutions.
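
The announcement does not enumerate the rules Insights applies, but the category of finding is familiar to anyone who has run a Kubernetes configuration checker. As a generic illustration (with hypothetical names and image), the Deployment below is the kind of workload such checks typically flag: it has no resource requests or limits and no health probes, so the scheduler cannot place it safely and the kubelet cannot detect a hung container.

```yaml
# Hypothetical workload that a best-practice check would likely flag:
# no resources.requests/limits and no liveness/readiness probes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-api                # hypothetical application
spec:
  replicas: 3
  selector:
    matchLabels:
      app: inventory-api
  template:
    metadata:
      labels:
        app: inventory-api
    spec:
      containers:
        - name: inventory-api
          image: registry.example.com/inventory-api:1.4.2   # hypothetical image
          ports:
            - containerPort: 8080
```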

D2IQ product executives noted that, with this management-console-and-engine model, the “suggested improvements” could eventually be deployed automatically by the management system and applied by the per-cluster engines in a continuous AIOps model. However, since many customers are not yet ready to adopt a fully automated operating model, a logical interim step would be to write and package the YAML needed to deploy each suggestion, allowing a quick review-and-approve, “push button” deployment that reduces DevOps toil.
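
Continuing the hypothetical inventory-api example above, such a packaged suggestion could be as small as a strategic-merge patch that adds the missing settings, ready for an administrator to review and apply (for example, with kubectl patch deployment inventory-api --patch-file remediation.yaml) while keeping a human in the approval loop.

```yaml
# remediation.yaml (hypothetical): strategic-merge patch adding the resource
# requests/limits and probes that the earlier check found missing.
spec:
  template:
    spec:
      containers:
        - name: inventory-api
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /healthz         # assumes the app exposes a health endpoint
              port: 8080
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8080
```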

Enterprise customers who see multi-cluster in their future will want to keep an eye on these efforts. Improvements in global observability and management, including automated operations, will be required to support the massive Kubernetes environments of the future.
