Driving AI Infrastructure: Innovations from Dell and Broadcom – Six Five On The Road

The future of AI hinges on the performance and efficiency of its underlying infrastructure. 👷

At Dell Tech World 2025, hosts Patrick Moorhead and Daniel Newman are joined by David Schmidt, Senior Director, Compute Systems and Software, at Dell Technologies and Jas Tremblay, GM, Data Center Solutions Group, at Broadcom. They discuss the evolution of the AI server ecosystem, emphasizing the importance of internal connectivity, platform innovation, and infrastructure benchmarking for enabling next-generation AI workloads.

Highlights include:

🔹Evolving AI Server Ecosystems: They explored how Dell is strategically planning its platform roadmaps amidst rapid industry innovations in GPU, XPU, and CPU technologies, emphasizing the critical role of open interconnect standards.

🔹Broadcom’s Connectivity Innovations: Broadcom’s advancements in PCIe switch technology are central to its AI strategy, highlighting the vital role of internal server connectivity, storage, and Ethernet in maximizing AI server performance.

🔹Simplifying AI Infrastructure Complexity: Dell and Broadcom shared their approach to addressing customer challenges: navigating complex AI infrastructure with performance benchmarking and leveraging third-party evaluations for clarity.

🔹Enhanced Storage for AI Workloads: A key highlight was Dell’s PERC13 storage solution, developed in collaboration with Broadcom, and its significant impact on boosting AI server capabilities and efficiency.

Learn more at Dell Technologies.

Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is On The Road here in Las Vegas. We’re at Dell Tech World 2025. It is AI all morning, noon and night. It is awesome. We’re talking hardware, software, and wrapping the bow with services here. Daniel, is this great?

Daniel Newman: Yeah. You know, for a couple of geeks, we really do love this stuff. And it’s been an amazing couple of days of really bringing the technology to life. And you know, Pat, all this stuff is so interconnected. It’s not a piece of hardware, it’s not a rack system. We really are building experiences, changing the world. And I think that’s a lot about what Michael, what Jeff said and all the keynotes that we’ve heard is taking it, bringing it to life, making these experiences real. And of course, the conversations we’re having up here all day have been about that, too.

Patrick Moorhead: That’s right. I mean, part of making it real is having the right compute, the right storage, the right networking and pulling it all together. And Daniel, as we’ve discussed in The Six Five a lot, having the right networking can be the difference between being able to run that training or inference run or stalling it. GPUs are left stranded, and who wants stranded GPUs or even XPUs.

Daniel Newman: Yeah, we’ve got a lot of compute, but we do need all these computers to talk to each other. We need them to scale up and scale out. And so sometimes networking is a bit of the unsung hero.

Patrick Moorhead: That’s right. One of the big partnerships we’ve been monitoring is the one across Dell and Broadcom. And here to talk about this and the whole ecosystem is Jas from Broadcom and David from Dell. Great to see you guys.

David Schmidt: Yeah, thanks for having us.

Jas Tremblay: Thanks for having us.

Patrick Moorhead: Absolutely.

Jas Tremblay: Yeah.

Daniel Newman: Thanks so much for joining us. So, David, let’s start off with you. Put you on the hot seat. Are you ready?

David Schmidt: All right. Yeah, I’m ready.

Daniel Newman: All right. You know, there’s so much innovation going on. We’re seeing the pace. It’s across all compute architectures, CPUs, XPUs, GPUs. The roadmap becomes really important because people are trying to pick, when do I get in, how do I deal with what seems to be annual cycles? Talk a little bit about how you’re thinking about your platforms, how you’re dealing with these rapid cycle times.

David Schmidt: Absolutely. I mean, Daniel, you’re absolutely right. The refresh cycles are shortening. There’s a ton of diversity out there. There’s lots of options. You know, the first thing for us is just staying close to our partners, staying close to our customers. That’s what DTW is all about, right? This is my favorite time of the year. You come out here, we engage with our customers, we get their feedback, but we do this all year round. We do it in a lot of different ways. So that’s kind of the relationship aspect of it. From a product aspect, it’s about embracing openness and helping drive openness into our product sets. A great example there is interconnectivity, both inside the server and outside, like networking, like Pat spoke about a moment ago. We’re talking about our new 9864 switch that has 64 ports of 800 gig speed. That’s something we built in partnership with Broadcom. We have our SONiC distribution for AI that brings tremendous improvements to that open networking ecosystem, which we also built with Broadcom. So we’re really excited about those things. And we think if you have that in place, if you have that open foundation, you’re able to embrace these technologies as they come on board and you need to get them deployed inside your data center.
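For context, the aggregate capacity behind the switch figures David quotes works out as follows. This is an editorial back-of-envelope check; the only inputs from the interview are the 64 ports at 800 Gb/s each:

```python
# Aggregate capacity of a 64-port, 800 Gb/s Ethernet switch,
# like the Dell 9864 mentioned above.
ports = 64
gbps_per_port = 800

aggregate_gbps = ports * gbps_per_port   # 51,200 Gb/s
aggregate_tbps = aggregate_gbps / 1000   # 51.2 Tb/s

print(aggregate_tbps)  # 51.2
```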

Patrick Moorhead: So Jas, on The Six Five we’ve talked to you and everybody up and down the chain. But can you talk about what you do at Broadcom, how it fits into Broadcom’s overall AI strategy? And then I’d love for you to talk about your 6th generation PCIe switch as well.

Jas Tremblay: Good. Let me break that down.

Patrick Moorhead: It’s like 14 questions.

Jas Tremblay: 14 questions. I got it, I got it. 1, 2, 3, 4. So everything is interconnected. We’ve got five franchises, or divisions, inside Broadcom that are interconnected for AI. The first one is custom XPUs. The second one is Ethernet switching for scale-out and scale-up. The third one is optics. The fourth one is SerDes and retimers. And then there’s my division. So my division is focused on inside the server. So if you open up an AI server, one of these big guys, a quarter million dollars, you’ll find XPUs, GPUs, CPUs, NICs, NVMe drives, and what all these elements have in common is PCIe as a protocol.

Daniel Newman: Yeah.

Jas Tremblay: So one of the first product lines that I have is PCIe switching for the internal fabric inside the AI server and then Ethernet NICs to connect to the network and storage connectivity, which we’ll talk a little bit more about. I’m glad you mentioned the 6th generation PCIe switch.

Patrick Moorhead: I think I was quoted in your press release.

Jas Tremblay: You were.

Patrick Moorhead: I think I was.

Jas Tremblay: Thank you for that. Absolutely, yes, absolutely.

Patrick Moorhead: Listen, I like to put my name on high quality stuff.

Jas Tremblay: Yes, well, we’re super happy about the quality. So we’ve been doing PCIe switches…

Daniel Newman: No victory laps, Moorhead.

Jas Tremblay: Keep going, Dan, you’re going to get the next quote. So we’ve been doing PCIe switches for 25 years, and we’ve been first to market in production with the first five generations. And on PCIe Gen 6 we repeated the same thing. We’re in production now with our PCIe switch. And there’s two reasons why this is important. The first one is, as a whole industry, system providers and chip providers, we all need to go from PCIe Gen 5 to Gen 6. These transitions are hard, and you need what we call the golden node. So it’s that chip that comes out first that you can count on and interconnect with. So a lot of our shipments right now are for some of our competitors, some of our ecosystem partners that need to bring up their solutions. And then the other part, which we’re working on with Dell and so forth, is building the next wave of AI servers. So with this new chip, PCIe Gen 6, we can have one port running at 1 terabit per second of PCIe bandwidth. So that’s a big, big jump in performance.
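Jas’s “1 terabit per second” figure lines up with the published PCIe 6.0 rate of 64 GT/s per lane. A rough sketch of the arithmetic (raw signaling rate on a full x16 port, ignoring FLIT and encoding overhead; figures are from the PCIe 6.0 spec, not from the interview):

```python
# PCIe 6.0 signals at 64 GT/s per lane using PAM4.
# A full-width x16 port therefore carries roughly 1 Tb/s per direction.
gt_per_lane = 64   # gigatransfers/s per lane, PCIe 6.0 spec rate
lanes = 16         # a full x16 port

raw_gbps = gt_per_lane * lanes   # 1024 Gb/s, i.e. ~1 Tb/s
raw_gBps = raw_gbps / 8          # ~128 GB/s per direction

print(raw_gbps, raw_gBps)  # 1024 128.0
```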

Patrick Moorhead: Yeah, for sure.

Daniel Newman: There is a lot of performance to consider right now and there’s so much optionality, it’s so complex. And you know, you and I talk a lot about this, right? Who’s making money on AI right now? It’s infrastructure, silicon, and it’s consulting. And so what’s interesting here is, you know, this panel, you sort of bring everything to the table of where there’s dollars to be made. But the dollars are to be made in both the opportunity that we know AI is going to present the 20 trillion or so that we estimate in economic value. But ultimately, someone has to help companies make this stuff work. And that’s what gives CEOs and boards indigestion, is they’re like, yeah, we want to do this stuff, but we’re spending a fortune. We’re not sure what kind of return we’re going to get. So, David, start with you, but I’d like to hear from both of you on this one. Talk a little bit about how you’re sort of breaking down complexity. How are you helping customers deal with this, guide them through. Of course you’ll make money selling them servers, selling them networking, but how are you making sure they’re also successful in their journey?

David Schmidt: Well, there’s truth in the data. What we enable our customers to consume, the type of data we want to provide, is an understanding of their performance capabilities, what you can get with each technology. We have a tremendous technical marketing organization. We work internally on performance characterization, understanding how these different subsystems operate. When we bring new technologies to the table, like some of the things we’re talking about here, we want to characterize them and then talk about the use cases where they’re going to make the most sense. In a lot of cases as well, we’re going to partner with great teams like Signal65, this company.

Daniel Newman: We know those guys.

David Schmidt: You know those guys.

Patrick Moorhead: Thank you. Yeah, there we go.

David Schmidt: There you go.

Daniel Newman: Long shot.

David Schmidt: A little air fist bump. So we’re going to partner with folks like you to go put these performance white papers out and put them in our customers’ hands, because it’s going to help them make the right decisions about these different technology waves.

Patrick Moorhead: Right. Excellent. So I want to move to AI storage.

Jas Tremblay: Yes.

Patrick Moorhead: Okay. I mean, AI compute is sexy and fun, but if you’re not connecting the compute to the storage, with the memory on the GPU or the XPU and everything together, you’re going to have a suboptimal solution here. This week you launched PERC 13, and I wonder if you could walk us through its role in AI servers, because I don’t think a lot of people have actually heard of this.

Jas Tremblay: You want me to start off on that?

David Schmidt: Absolutely, please.

Jas Tremblay: So let’s start off with PERC. What does it stand for? PowerEdge RAID Controller. So effectively it’s a little board with a storage controller on it to interconnect the CPU to drives. It does the protocol conversion depending on the type of drive that you have, and more importantly, it does the data protection. For example, if I have a server with 16 NVMe drives and one of those drives goes bad, you can reconstruct the complete data with the RAID controller. It takes care of all the data protection. The other thing is, imagine you have a server and you’re in the process of sending data from the CPU to the drives, writing it, and then you lose power. We’ve put a little supercapacitor on the PERC controller so that it saves the writes in flight. Really robust from a data protection perspective. So we’ve been working with Dell for many, many years, decades, and now we’re introducing PERC 13. So one of the questions you asked is, how is this applicable from an AI perspective? Well, the first thing is there are some AI workloads that don’t need GPGPUs; they just need a server with the fastest compute. An example is what we have on the show floor right now: dual sockets, 192 cores each, 384 cores. You need a lot of storage performance to feed that monster. And if you’re running a large database with some AI-type workload, maybe not training or inference, but just a large database, you want to have the fastest CPU, fastest networking, and fastest storage. And the fastest storage is local. And if you want to protect it, PERC 13 is the fastest weapon available out there. The other part is feeding the large AI GPGPU servers. You need to do data conditioning, taking all that data and conditioning it, and in a lot of cases, just a high-performance compute server with strong storage can do the job for that.
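The single-drive reconstruction Jas describes is classic RAID parity. A minimal sketch of the idea in Python, using byte-wise XOR parity as in RAID 5; this is an editorial illustration of the concept, not how PERC 13 is implemented internally:

```python
import functools
import os

def xor_parity(blocks):
    """Byte-wise XOR of equal-sized blocks; the result serves as the parity block."""
    return bytes(functools.reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Four data "drives", each holding one 16-byte block.
drives = [os.urandom(16) for _ in range(4)]
parity = xor_parity(drives)

# Drive 2 fails; rebuild its contents from the surviving drives plus parity.
lost = drives[2]
survivors = drives[:2] + drives[3:]
rebuilt = xor_parity(survivors + [parity])
assert rebuilt == lost  # full reconstruction from parity
```

The same XOR property is why the controller can tolerate any single drive loss: XORing the parity block with all surviving blocks cancels them out, leaving exactly the missing data.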

Patrick Moorhead: Cool. I love it.

Daniel Newman: So David, quickly, help us make this real. As you take their technology, package it up, and put it into your systems, what are the real applications where you’re seeing the capabilities of PERC 13 deliver value to customers?

David Schmidt: So, a little shout-out to our friends at Broadcom. You said 25 years earlier; we’ve been doing PERC storage controllers for 25 years now as well, right? And so we’ve really enjoyed that. The old-school principles of protecting your data at the hardware level have never been more important. As Jas just explained, the most important asset in your compute environment is your data, so you have to build that protection at the hardware level. And then there was a conversation I had this morning that I think sums it up perfectly. As customers think about protecting that data at the hardware level, and then they think about all the operations they have to run as part of an AI workload, an AI use case, whether it’s inferencing or small language model training, they have to think about the throughput, the overall performance. And there’s been some reluctance to combine the protection that you get with PERC with the overall needs for performance. I think PERC 13 just solves all of that, and that would probably be the number one takeaway: all of that is captured in this great product that we put on top of our next-generation PowerEdge servers. We’re really excited about what our customers are going to be able to do with that within their environment.

Daniel Newman: Well, David and Jas, it sounds like a very strong partnership. It’s been very impressive from our end at Signal65 as we’ve worked on a number of these different performance tests. Jas, you recently shared a PERC 13 asset that we actually created collectively here as a group, and I think, hopefully, everybody out there that’s interested in learning a little more about the technical details behind this gives it a look. Pat, we’ll have to make sure we share that out, right?

Patrick Moorhead: Let’s put it in the show notes for sure.

Daniel Newman: I mean, you know, we don’t like victory laps, except when we do. So, Jas, David, thank you both so much for joining us here on The Six Five. Thanks, guys.

Patrick Moorhead: Appreciate it.

Daniel Newman: Thank you, everybody, for being part of this Six Five On The Road. We are here at Dell Technologies World 2025 in Las Vegas. We’re going to step away for just a moment, but we will be back soon, so stick with us.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
