Memcon 2024: Memory Technologies Are a Key Component to Scale AI

In the second year of the Memcon conference, it was all about memory and how to feed the “beast,” the GPU. In 2023, the focus was on CXL technology and how and where it would be adopted. With the explosion of generative AI, the conversation has shifted to high-bandwidth memory (HBM) and the bandwidth needed to maximize GPU performance. There was general agreement that while CXL addresses capacity, it cannot deliver the bandwidth needed for training generative AI applications. Instead, we will see CXL memory modes applied to inference or, more likely, to in-memory database applications such as SAP HANA.

On a closely related note to generative AI and HBM, the community addressed the pressing issues of designing for the scale this new AI will require. The first area is advanced packaging for memory, for instance scaling HBM to 4, 8, 16, or more layers: how many layers can be stacked before signaling or mechanical design becomes an issue? The second area is scalable networks: PCIe or InfiniBand (IB)? These are big bets that organizations will need to make in their deployments. The trend appears to be Ethernet, but high-end systems will probably cling to IB.

Cooling and energy, discussed throughout the two days, were the next issues. The more processors, cores, and memory, the hotter these systems become. The major research labs have turned to liquid cooling; expect this to become the norm as AI systems grow, at least for self-contained liquid-cooled systems. Perhaps we will be back to water-cooled facilities in the near future.

Methods and techniques for working around memory constraints were also presented. Tejas Chopra of Netflix discussed the gyrations the company’s data scientists and programmers go through, spanning model pruning, efficient mini-batch selection, data quantization, and paging. Asked whether these go away with new memory offerings, he said no; new memory simply advances the capabilities of the current environment. Shell (the energy company) echoed some of these methods, describing how it breaks data into cubes to fit into memory and leans on checkpointing and compression.
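To make one of these techniques concrete, data quantization trades precision for memory footprint. The sketch below shows generic symmetric int8 quantization with NumPy; it is an illustration of the general approach, not the specific method Netflix or Shell described, and all function names here are hypothetical:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: map float32 values to
    int8 with a single scale factor, cutting memory use by roughly 4x."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

# A stand-in for a model weight tensor.
weights = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)

print(q.nbytes / weights.nbytes)  # int8 storage is a quarter of float32
print(float(np.max(np.abs(weights - restored))))  # error bounded by the scale step
```

The memory saving is exact (1 byte per value instead of 4), while the reconstruction error is bounded by half a quantization step, which is why the technique works well for inference but, as Chopra noted, does not remove the underlying memory constraint.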

So what about CXL? Samsung was a major sponsor of the event and used the forum to launch its CXL Memory Module – Box (CMM-B) with 2 TB of memory for memory-hungry databases, along with its 12-stack HBM3E offering. There was some question as to when we will see this in real deployments, an understandable position given the five years we have been talking about CXL. Well, we are getting there. The partnerships with VMware and Red Hat on joint development were very exciting.

VMware will be releasing support for Samsung’s CMM-H, which is CXL 2.0 pooled memory. In a release planned for later in 2024, vSphere will support tiered memory, enabling disaggregated memory to feed core capacity. The result: increased VM density per core, more memory for database-driven apps such as SAP HANA, and cluster-wide memory for large-scale environments. Given all the noise around VMware licensing, this might offer a bit of relief, depending on how the next release is priced.

Red Hat and Samsung had previously announced the qualification of Samsung’s CXL DRAM memory module (CMM-D) for memory pooling with RHEL 9.3.

We are making progress on memory constraints, but do not expect the workarounds to disappear; applications will continue to consume whatever we can feed them. So back to work, memory engineers!

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

The Six Five Talk Samsung’s Memory Tech Day

Memory Market: Call it a Comeback?

Marvell Industry Analyst Day 2023: Accelerated Computing Takes Off

Author Information

Camberley Bates

Now retired, Camberley brought over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm, as Managing Director.

Her career spanned all elements of sales and marketing, and she gained a 360-degree view of addressing challenges and delivering solutions by crossing the boundary between sales and channel engagement, working with large enterprise vendors and through her own 100-person IT services firm.

Camberley provided Global 250 startups with go-to-market strategies, created the new market category “MAID” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and grew it from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.

She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
