How Custom HBM is Shaping AI Chip Technology – Six Five On The Road at Marvell Industry Analyst Day

How are ever-increasing AI demand and workloads affecting memory? Six Five is On The Road at Marvell Industry Analyst Day to answer just that.

Hosts Patrick Moorhead and Daniel Newman are joined by executives from Marvell Technology, Samsung Semiconductor, and SK hynix America: In Dong Kim, Sunny Kang, and Will Chu. They discuss the collaboration between Marvell, Samsung, and SK hynix on custom high bandwidth memory (HBM) solutions aimed at enhancing the processors driving accelerated infrastructure. This new approach to memory is projected to increase memory capacity, optimize power usage, and reduce silicon waste, marking a significant advancement in the custom chip technology space.

Their discussion covers:

  • The collaboration between Marvell, Samsung Semiconductor, and SK hynix on developing custom HBM solutions
  • The key benefits of custom HBM, including increased memory capacity and optimization of power and performance
  • Insights into the customization process of HBM and how it delivers its advantages
  • The impact of custom HBM on AI processors and projections for its market adoption
  • The roadmap and future potential of custom HBM for various applications and industries

Learn more at Marvell Technology.

Watch the video at Six Five Media at Marvell Industry Analyst Day, and be sure to subscribe to our YouTube channel, so you never miss an episode.


Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is On The Road here at Marvell Technology Headquarters in Silicon Valley. And Dan, we are here discussing a major announcement, and that is custom HBM. It’s funny, five or six years ago there was the debate on do we even need any custom silicon? Right? And now, particularly with the hyperscalers, it’s pretty much… not all custom silicon, but a lot. And whether it’s XPU, whether it’s networking, whether it’s HSMs, I mean, custom silicon is all the rage. And now we have custom HBM, which is a subsystem inside of XPUs.

Daniel Newman: Yeah, it’s been a really exciting time, Pat, and you gave the history pretty well. There was a period of time where we thought, “Hey, can we do everything off the shelf? Move really, really quickly?” Now we’re seeing this sort of transformation going on, and I think hyperscalers are all seeing the potential to build silicon, whether that’s to optimize their own software and their own workloads. And of course, they’re seeing it as a value add-

Patrick Moorhead: That’s right.

Daniel Newman: …to the customers that are building on their infrastructure. And so behind the scenes though, there are only a few companies in the world that are really enabling this technology. And of course, Marvell being one of them. And now we’re seeing Marvell building with the ecosystem to bring more memory. And HBM has been another one of those buzzes, Pat, right alongside the XPU. And here we go.

Patrick Moorhead: That’s right. So let’s dive in here. We have executives from Marvell, SK hynix, and Samsung. Welcome, In Dong, Sunny, and Will to the show. Great to have you on.

Will Chu: Great to be here.

Sunny Kang: Thanks for having us.

In Dong Kim: Thank you.

Daniel Newman: All right, Will, I want to start with you. So you heard our little preamble. I imagine all three of you would agree about the enthusiasm and excitement for custom silicon, but we are also seeing this kind of huge spike in demand for HBM. It’s been one of the actual ways I’ve been tracking this AI demand, this AI trade: “Look, how much demand is there, how long are we sold out of HBM?” And it’s certainly been a great indicator. But why go custom? I mean, it seems like the solution’s working. Why is Marvell partnering with SK hynix and Samsung and others to bring this custom solution to market?

Will Chu: Yeah, at a high level, we had a press release today with some of our partners announcing custom HBM, this new breakthrough technology. And to your point, today all of the AI systems use standard HBM with a standard interface, but effectively, at a high level, that interface is not scaling at the pace needed to support the hyperscaler customers. The interface can be improved and customized in a way that gives more silicon back to the XPU, to enable more features and functions that can reduce power and ultimately enable more capacity and bandwidth to support all their workloads. And it requires customization from companies like Marvell to go do that. And we’re really pleased to be working with our partners to go make that happen.

Patrick Moorhead: Yeah, so let’s start with Hynix. Sunny, we heard a little bit about what custom HBM is, but what actually gets customized? I’m used to having a JEDEC controller in there, that’s kind of the interface. It’s kind of a big chip today on HBM, but what precisely is getting customized, and what are the advantages?

Sunny Kang: When we got started with HBM, the number of IOs was one thousand. And this year we are going to have HBM4, which I think is the sixth generation. We are going to double the number of IOs to 2K IO, as we call it. We extend the bus width to double, so we can double the bandwidth with that. But that means, as Will mentioned, it’s a big burden for our customers and for the controller guys, guys like Marvell. So in that perspective, how do we minimize that burden? I think the origin of the customization starts from that point. And on top of that, if we look at the HBM development milestones in this industry, JEDEC defines them: HBM from the first generation through the sixth and seventh generations. The seventh generation becomes HBM4E, and after that comes HBM5. It’s time to define it, time to have the HBM5 standard in our hands, but we haven’t even gotten started yet. So if you look at the long-term milestones on our development roadmap, we need something to fill that gap. I would say that is custom HBM.

Daniel Newman: Right. Yeah, it’s a really exciting transformation. It sounds like there’s a lot of work to be done. And In Dong, I sense the enthusiasm from all three of you, which is great. I was very excited when I saw the presentation initially about this. What do you see? How does this evolve? How quickly does this get adopted compared to the HBM that we’re familiar with today?

In Dong Kim: Well, as Will mentioned, at this point all of the ecosystem and customers are sticking to the standard solution. Timeline-wise, developing another product, combining a new level of technology in packaging and logic as well as memory, will take at least another year or two moving forward. That’s why Samsung made the head start in adopting this custom solution from HBM4. And then moving forward, with the market evolution, the size of the opportunity we are looking at is tremendous. So we strongly believe that custom HBM will be the majority portion of the market in the ’27, ’28 timeframe. And this kind of close partnership with our partners should help solve a lot of problems and challenges moving forward. Because customization, especially switching from a commodity to a custom solution, is not a trivial thing. A lot of things need to be taken care of: industry partner collaboration as well as some very core technology development in logic, packaging, and memory. Combining all those together, we believe we have a path to get there. So we’re very excited to be part of that kind of revolutionary path.

Patrick Moorhead: Yeah, the rate of change is phenomenal. Sometimes I step back and I wonder how all of this actually works at the end of the day, but it does work, and it works really great. So Will, the data center market is split between, let’s say, hyperscalers and the enterprise, and then we have the enterprise edge. Curious, with what kind of customers does custom HBM start, and with what type of devices? And on the device side, we have GPUs and we have accelerators, or XPUs as Marvell calls them.

Will Chu: Yeah, so it’s a great question, Patrick. So at Marvell, we are estimating that the TAM for us in the data center market in three or four years is about $75 billion. So that for us is a huge opportunity. I think last year it was $21 billion, and it’s going to grow to $75 billion over the next four years or so. Out of that, we estimate about $40 to $43 billion is for custom accelerators. These aren’t actually GPUs, these are custom solutions that we would provide to the market. And alongside of that, what we see is this attach rate for custom HBM, right? So as all of our customers are moving to more intensity or higher levels of customization inside their infrastructure, as you mentioned at the outset, this is what’s pulling in the need for custom HBM. And fundamentally it’s still solving the same problem.

All of our customers see bottlenecks in their solutions because they all need better and better performance over time, but they have a fixed budget, typically in terms of power and of space, and we’re trying to solve those bottlenecks. So we’re doing that with better and better silicon. And in the end, they actually have bottlenecks today on the HBM itself in terms of delivering performance for their AI workloads. Custom HBM alleviates many of those bottlenecks in the interface as well as the density, so that they can have higher performing AI solutions, which they clearly are trying to build today. So we’re at the forefront of that, and that’s what is enabling this really fast-growing and large market.

Daniel Newman: You brought together a few really important themes as well. I mean, one of them, and we’ve been tracking the accelerator market, and all the talk right now… Not all the talk because obviously some of the great successes over the last year, whether it’s been the partnership with Amazon that you’ve made. But the talk, big numbers, the big CapEx has been around the data center GPUs. But we actually have pinned the numbers for XPU to grow faster. And this is a combination of a few things. I mean, at least on our side as analysts, we believe that all the hyperscalers are going to want that benefit of vertical integration. They’re going to want to control the destiny of their workloads. And of course they’re going to want to make more money on servicing these, and building their own over time is certainly going to show higher ROI.

So you hit on that as kind of why this TAM opportunity becomes so significant. And I think that’s really an important point to make, because at some point we will see the talk rotate, and it will be more centric to the XPU and then the custom solutions that go with it. So I’d like to kind of tie this all together, In Dong, and then maybe Sunny, you can weigh in on this as well. When, and you started to kind of allude to this, but when do we see this hit the market? Is it a year? Is it two? Are you guys willing to make some prognostications? And Sunny, I heard you kind of ran through the roadmap a little bit, but from the time this hits the market, does the roadmap accelerate the way we’re seeing other generation-to-generation developments accelerate? Or are we going to try to keep the pace and manage the pace going forward?

In Dong Kim: Well, customization is clearly a need and the direction from the market and our customers. However, as I mentioned, it does need time to develop, and we’re talking about a top-notch technology that combines state-of-the-art packaging, memory, and logic together. So timeline-wise, for the proliferation of the market, one research firm projected a $38 billion market by 2029. So early adoption will start within a year or two, and then a ramp-up will follow through the ’27, ’28 timeframe. That’s how I would describe the growth path of the adoption and the market when it comes to custom HBM.

Daniel Newman: Sunny, does that sort of fit your timeline as well?

Sunny Kang: Yeah, I 100 percent agree with In Dong about it. As I mentioned, the HBM4E standard approach is finalized. It’s kind of a consensus of the industry. So considering that, we are going to have standard HBM4 by late ’26, and customization will start from ’27. Usually in the HBM industry, if we launch a new product, it takes just one or two years to become the mainstream of the entire market. So that means around the ’29 timeframe, as In Dong mentioned, it’s going to be the mainstream product in the HBM market. I’m pretty sure about it. Because on top of that, as I mentioned, we don’t have the HBM5 definition at the moment. So what else could be the solution around that timeframe? I would say custom HBM.

Daniel Newman: That’s really exciting. Pat, you think about just the last two years, how much has happened in this space? And now we’re looking out to 2029.

Patrick Moorhead: That’s right.

Daniel Newman: I mean, it’s going to be incredible to watch everything from model development to agentic AI really hitting utilization, enterprise value, TCO, and measurement. One thing, by the way, I should have mentioned, and I think you guys would all agree, and you kind of alluded to this a little bit, Will, when you were talking, is the power challenge. We’ve heard some things from Marvell, and we’ve heard some things across the industry from all of your firms. The amount of power that’s going to be required to build these out is enormous, and custom can solve some of those problems, obviously, by improving the amount of processing, lowering the amount of power being used, being more efficient. These are really important themes, because it’s not just about one XPU, it’s the stacking of these things. It’s hundreds and thousands and tens of thousands. And as that builds out, every little bit of efficiency that can be gained…

Patrick Moorhead: Yeah.

Daniel Newman: Really, really matters. So gentlemen, In Dong, Will, Sunny, thank you so much for sitting down with The Six Five here. Really great. Congratulations on the partnership. We look forward to tracking and watching and seeing over the next couple of years how this comes to fruition.

Will Chu: Thank you for having us.

Sunny Kang: Thank you so much.

In Dong Kim: Thank you, Daniel and Patrick.

Patrick Moorhead: Thanks.

Daniel Newman: And thank all of you so much for tuning into The Six Five. We’re here on the road at Marvell Technology Headquarters. We just unveiled a really exciting new partnership. The future of AI XPUs is going to be tied so closely to memory, and you heard some very interesting innovation here today. Hit that subscribe button. Join us for all of our content here on The Six Five. We appreciate you being part of our community. We’ve got to go for now. We’ll see you all later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, The Wall Street Journal, and hundreds of other sites around the world.

A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
