Why Has Kubernetes Taken So Long to Cross the Chasm?

Understanding the Market Adoption of Kubernetes and Future Investments


The News: Container-native computing and Kubernetes have been in the market for 10 years, since 2014. For many, widespread enterprise adoption has been slower than expected. In this article, we look back and forward at the dynamics bringing Kubernetes to the early majority market.


Analyst Take: I am often asked about enterprise IT adoption of container-native systems. From the infrastructure perspective, some expected that we would have seen widespread deployments by now, deployments that require persistent data infrastructure and data protection. And while my earlier blog post stated that we have arrived at the early majority phase, it will still take time. Why?

I have likened the transformation to container-native environments to what we experienced in the shift from the mainframe to open systems and three-tier architecture. Migration required a rewrite of the applications and, in the process, a revision of the user experience.

Container Native vs. Virtualization

Container-native adoption cannot be likened to the adoption of VMware for virtualizing the environment. Virtualization is an abstraction of the hardware and was implemented by IT operations. It did not require redesigning or reprogramming applications, though supporting technologies, such as data storage and data protection, did need to adapt to the new architecture. In the process, we saw a rush to solve I/O performance bottlenecks and management complexities, and to create strategies for availability and protection. This progression was pretty rapid. Thus, with this memory of recent times, many thought containers were on the same trajectory.

Where We Are Today

Consider where we are today: current applications require a rewrite to become container native and adapt to the Kubernetes world. We have heard numerous times that "lifting and shifting" to the cloud did not make life easier for IT. Just because an application was in the cloud did not make it cloud native or containerized. As such, IT enterprises are working through a rationalization of their applications—what is to be rearchitected, what is to be maintained, and where to place each service for the most effective deployment. For those that have lifted and shifted, there is some talk of moving back, but for the most part, it is a review of how the application can be serviced and maintained most cost-effectively.

Am I conflating container-native adoption with cloud-native adoption? Yes and no. Containers have been the domain of the cloud (thank you, Google, for Kubernetes). Today, we see large-scale deployments on-premises, and the number of such deployments is increasing. IT enterprises are seeing success with supported open source systems (Red Hat OpenShift, SUSE Rancher), and there is an increase in offerings built specifically to stand up these environments (Dell APEX Cloud Platform, Hewlett Packard Enterprise (HPE) GreenLake, Nutanix, and IBM Fusion, to name a few).

Explosion of AI and Generative AI

Now we are into the explosion of AI and generative AI. These applications are built on container-native platforms. For that reason, container-native applications just got a huge tailwind for adoption. Still, this growth is driven by application adoption, not by IT operations.

Thus, what will we see in 2024 as IT operations and developers pick up the pace for container-native applications? We will see the need for:

  1. More databases and database as a service. This need is not SaaS for Oracle; rather, it is management of databases that developers can select from, with a separate IT group managing versions and software upgrades and creating an environment for easy provisioning.
  2. Better observability. This functionality enables a better understanding of how a containerized application executes and operates, and the ability to troubleshoot and triage a poorly functioning application.
  3. Availability and resilience. Although containers are ephemeral by nature, there is still a need for resilience. This need will not go away, especially as complexity grows and data continues to become more prevalent.
  4. Security. Need I say more? Open source can become an exposure, and cyber threats will continue their march.
  5. Data storage. As with database adoption, data storage comes with the need for persistence and management.
  6. Data protection. Once you establish data as an entity (not just scratch or training data), then preservation becomes important.
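To make points 1, 5, and 6 concrete: in Kubernetes, durable state for a database or similar workload is requested through a PersistentVolumeClaim. The sketch below builds a minimal PVC manifest as a plain Python dict (ready to serialize to YAML/JSON and apply with kubectl or a client library). The names, sizes, and storage class here are hypothetical examples, not recommendations.

```python
def make_pvc(name: str, size_gi: int, storage_class: str = "standard") -> dict:
    """Return a minimal PersistentVolumeClaim manifest requesting durable storage.

    A PVC is how a container-native app asks the cluster for persistent
    storage that outlives any individual pod.
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],  # mounted read/write by one node
            "storageClassName": storage_class,  # which provisioner backs it
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# Hypothetical example: a claim for a database's data volume.
pvc = make_pvc("postgres-data", 20)
```

Once data lives behind claims like this rather than in ephemeral container filesystems, the storage and protection questions in points 5 and 6 (snapshots, backup, replication of the underlying volumes) become unavoidable.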

As with any new technology, an entire ecosystem is built around it as gaps and issues appear. For containers, we are still in the early stages of determining what is needed and important. It is just slower than the previous transition to virtualized environments. Maybe AI will propel us into this next dimension.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Unlocking the Gate to Digital Transformation – Research Study

Kubecon 2023 Chicago – Kubernetes Has Crossed the Chasm

Kubecon 2023 Live from Chicago – Infrastructure Matters, Episode 20

Author Information

Camberley Bates

Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led Evaluator Group, an information technology analyst firm, as Managing Director.

Her career has spanned all elements of sales and marketing, giving her a 360-degree view of addressing challenges and delivering solutions, gained from crossing the boundary between sales and channel engagement at large enterprise vendors and from running her own 100-person IT services firm.

Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, "MAID," as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and grew it from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.

She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
