AI adoption is growing 📈 but challenges remain! The evolution of AI deployment is shifting from isolated experiments to integrated, scalable solutions.
At NVIDIA GTC 2025, host Patrick Moorhead is joined by Kevin Wollenweber, SVP and GM of Datacenter and Provider Connectivity at Cisco, to discuss the path to making AI accessible beyond the hyperscaler giants. They touch on Cisco’s announcement of the Cisco Secure AI Factory with NVIDIA, and the impact of their partnership on the future of enterprise AI.
Key takeaways include :
🔹Transition from AI Trials to Scalable ROI: Enterprises are now prioritizing AI deployments that deliver measurable business value, moving beyond experimental phases.
🔹Unified Infrastructure for Simplified AI: Cisco and NVIDIA are collaborating to integrate networking, computing, and security, creating a cohesive and manageable AI ecosystem.
🔹Streamlined Operations with Familiar Tools: Enterprises benefit from unified network management using existing Cisco tools, simplifying AI infrastructure operations.
🔹Enhanced Security and Future-Proof Innovation: Cisco’s AI Defense and a long-term engineering agreement with NVIDIA ensure robust security and continuous adaptation to evolving AI demands.
Learn more about the Cisco Secure AI Factory with NVIDIA.
Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Patrick Moorhead: The Six Five is On The Road here in San Jose. We’re at Nvidia GTC 2025 and as you can imagine, this show is buzzing and unsurprisingly, we are talking non stop about pretty much what we’ve talked about the last two years, which is AI. And AI is big. Right. And we saw most of the action starting off with hyperscalers. The next, natural wave of that is enabling enterprises. And to do that right, they need the right compute, server storage, networking, everything to work as a cohesive system and do it in an easier way. Because quite frankly, they don’t have the resources, they don’t have the skills that all of the hyperscalers have. To talk a little bit about this, I am really excited to introduce Kevin from Cisco.
Kevin Wollenweber: Thanks for having me, Appreciate it.
Patrick Moorhead: First time on the show. I know you and I have talked a lot before. I like to say, hey, we have a Zoom relationship, but I saw you at MWC in real life. Gosh, I have seen more Cisco AI conversation probably in the past six months than I saw in the previous 10 years. Not that Cisco wasn’t doing AI, but you’re just doing a lot more of that in different areas. So let me hit you with the first question. We’re here at the big show this week. What are you hearing? What have your conversations been focused on here?
Kevin Wollenweber: Yeah, and if you think about it, we were here last year as well. And a lot of last year was just about the desire for companies to trial things out or get involved in AI or just talk about AI in some way without a real understanding of usefulness, ROI and how they were going to drive things. And what I’m excited about is a lot of the solutions that we’ve announced this week and a lot of the solutions I see around the floor are around what people are actually doing with it and actually real applications that people can monetize and drive not only investment, but real ROI from.
Patrick Moorhead: Yeah, I mean enterprises, understandably. It started off with experiments, right. They had a top list of things they wanted to try, deployed some POCs and then you know, for some of those, they’re trying to scale those to deliver real value. Because in the end, and sometimes particularly the tech press or you know, just people looking at AI, they fall in love with the tech, but in the end you got to drive revenue, you got to reduce costs or increase stickiness with your customers that hopefully increases revenue long term. So what are you seeing? I mean, AI wasn’t invented two years ago.
Kevin Wollenweber: Right.
Patrick Moorhead: But in this New generative AI, agentic AI. What are some of the biggest challenges that your customers are experiencing?
Kevin Wollenweber: Well, so we’ve actually been involved in the AI wave, in that early wave with hyperscalers for a while now. We sell silicon, we sell systems that help build out that AI backend. But what we’re starting to see is exactly what you described. It’s, you know, most of our enterprise customers are not going to go and build their own large language models. And what we’ve seen over the last few months, even with the deep SEQ phenomenon, is the cost of leveraging these models, the cost of inference and the ability to drive ROI is actually increasing at a rapid pace. And so now we’re moving from how do I trial, prototype and play around with this stuff to okay, if I’m going to drive maturity and large scale AI applications in my network, what does that infrastructure look like and how can I evolve what I have to now start to run some of these AI workloads.
Patrick Moorhead: Yeah, and part of, part of this simple play, making things a little bit easier, are partnerships. I mean, the reason why there’s so many people in this hall right now is nobody can go at this alone. It really does take a village here. I’m curious, you made some big announcements with Nvidia. Gosh, was it a week ago? Two weeks ago, here we are at GTC. You made some more announcements. Can you walk us through maybe what you had announced previously and then what you announced this week?
Kevin Wollenweber: Yeah, definitely. Let me take you back about a year. So we were here at GTC last year and what we announced last year was a deepening of an engineering relationship with Nvidia. We have always resold their GPUs and we’ve launched some new AI servers that can kind of bring in more GPUs and connect and everything we need to go and run some of these AI workloads. But right before the Nvidia earnings announcement a couple of weeks ago, what we announced was a broadening of that partnership where we were going to start to share technology between the companies. So if you think about it, our ability to do networking, and not only networking, but controllers and telemetry and orchestration and all the things that have to sit around that AI ecosystem as enterprises are going to deploy the technology was really critical for both us and Nvidia. And then our ability to take our partner and channel ecosystem and take those solutions and push them down deep into the enterprise was also really valuable. And so that sharing of technology means I can now take my silicon, my Nexus switches that we build for traditional workloads and high end enterprise and I can put those into the Spectrum X end to end platform or architecture.
Patrick Moorhead: Yes.
Kevin Wollenweber: So now a customer that’s deploying Nexus, they use our operations and management tools, they use our controller, Nexus Dashboard, they can deploy the Spectrum X architecture, the Nvidia Enterprise reference architectures or NCP architectures, but with our technologies. And so that’s been a big, big step.
Patrick Moorhead: Right. And you know, for our audience, can you help them understand what part of the network? So we’ve got the front end network, we got the back end network and the scale up network that connects GPUs. I know you’re not part of that. Can you talk about the other elements specifically that you’re helping with?
Kevin Wollenweber: Yeah, definitely. And so even in the scale up part, we now have an 8 GPU server that’s NVLink connected in it. So the NVL8 architecture that Jensen talked about, so that’s how we build our base building block for GPU based compute. And then when you want to sort of scale out and connect these, we build high scale Ethernet networks, Ethernet fabrics that are based on our forwarding technology, our operating system and our controller. So if a customer has a front end network that’s based on Cisco today, and they’re deploying Cisco Nexus, they can actually build up a backend, connect all their GPUs together with the same switching technology, the same operations, the same management. So as you think about enterprise as consumption, they want an easier route to market and they want to be able to leverage as much of the operational knowledge and understanding they have. And so the ability to take the same tools are deploying today, but deploy them in a real system based approach, which is what you need when you’re deploying GPUs is something that we don’t have in the market today.
Patrick Moorhead: And, and was the big challenge or the, you know, before the two of you came together, the value prop, you know, Enterprises, the smaller CSPs was essentially operating two different networks.
Kevin Wollenweber: Exactly.
Patrick Moorhead: Okay.
Kevin Wollenweber: And not even just two different networks, but they were two different networks based on different technologies with different operations and management. So a lot of times you’d have to train up an entirely new staff on how to manage that backend network. And the funny thing about the backend is it’s all the same components. You’ve got network and storage and compute and all the stuff you talked about before, but they have to operate as a single entity or a single system. And it’s not the same thing as just building out a network and attaching compute. We’ve been doing things with something we call hyper fabric, where we’re running Cisco agents on the Nvidia SmartNics and using those to actually connect all these compute devices together more efficiently, guarantee performance, and understand latency and connectivity between devices. So it’s a lot more than just an OEM selling their equipment. We’re building real engineered solutions together.
Patrick Moorhead: So you’ve made huge investments as a company on the data side and obviously on the security side. How does that play into this combined Cisco Nvidia solution?
Kevin Wollenweber: Yeah, no, it’s a perfect question because you can definitely take the Nvidia enterprise reference architecture, which is what we’ve done, and you can build out and scale out. And there’s plenty of OEM partners that are doing that. One of the big gaps that we saw in enterprise deployments was this idea of securing that AI infrastructure. And they have a security paradigm in their front end and in their traditional network. But as they brought in AI workloads, how do we actually tie security in? And so we’ve added, there’ll be more over time, but we’ve added two major components to that, one being something we call AI Defense. AI Defense is a new set of technology we launched in January and think of that as the tool that can protect the LLM and also protect the LLM from the outside world. So we can do things like understand what the guardrails are and continually do things like pen tests to make sure that we can’t jailbreak the LLM and misuse it in any way.
Patrick Moorhead: Right.
Kevin Wollenweber: And then we can also give IT professionals visibility into what AI applications are being run in their network infrastructure. Are they using cloud APIs? Are they potentially leaking data and leaking information outside that are then being used to train these models, which is what a lot of enterprises don’t want to see. And most of them just lack visibility into the rogue AI applications that are being used inside of IT.
Patrick Moorhead: So Cisco is a company that rarely makes an announcement and then doesn’t deliver for like one or two years. So you make announcements that are closer to when you can actually deliver that. So you’ve got a lot of conversations with your customers and your channel partners. I’m curious, what’s the read, what’s their feedback been on this?
Kevin Wollenweber: Yeah, people want access to these technologies immediately. The good news is we have been working on these for a while. That engineering partnership that we announced a year ago was when we started developing a lot of this stuff. And so what you’re seeing is now very, very real. The AI work, or, sorry, the software work that we have to do on the switches and the networking that we build today to integrate in with Spectrum X. It’s software work that’s happening on those devices now. So the hardware we’re already shipping is capable of it. And we’re working with Nvidia on a software update on our side and their side that’ll enable these to work together. Think of that as coming in the summer timeframe and we’re showing it to customers now. We’re actually doing demos of some of the stuff here. And then we also announced that we’ll be taking some of the Spectrum hardware and building Nexus based switches based on that. So if there’s value in a feature or a functionality that sits inside Spectrum, we can bring that into the architecture for the back end, but still have that same operations and management sitting on top, giving that customer the easy button to go and deploy AI workloads without having to relearn new technologies and new operational paradigms.
Patrick Moorhead: Yeah. So I kind of want you to tie a bow on the net benefits of the relationship. I know we’ve been kind of sprinkling them throughout the conversation, but you know, lay it out. I’m a core value prop guy. I’m a recovery product guy. So the core value prop to your customers is what of this alliance?
Kevin Wollenweber: Well, good. We’re all, we’re all recovering product guys and I love, I love the idea.
Patrick Moorhead: You’re still a product guy.
Kevin Wollenweber: So I love doing this stuff. Think about it in two ways. One is value to our customers is they get simpler, easier to use architectures that they can go and deploy that have consistency with what they’re deploying today. They now have silicon choice. So when we think about all the craziness in the supply chain and having diverse supply chains, they can deploy Cisco silicon and the Nexus platform or the Spectrum silicon. When we build that switch and do it in a way that they get consistent features, functionality and capabilities and then they can deploy the Cisco technologies they’ve been deploying on the front end and operate and manage it simpler. So that’s the customer value. But in terms of the Nvidia and Cisco value, Cisco has made a business out of our ability to connect with the channel, partner and end customer ecosystem. Nvidia is a phenomenal engineering company, but they’re really built around how do I build amazing products? Hardware, but really software. If you saw Jensen’s keynote, he talked a ton about the Cuda libraries and all the different things that they built. We now have the ability to take that, take our channel and partner ecosystem and really amplify that and push that out to this new end customer base, which is service providers and enterprises.
Patrick Moorhead: Yeah. So I want to wrap this conversation up by talking a little bit about the future. Not asking about your future roadmap, but you can spill if you’d like. But no more directionally. What should we expect with either this relationship between the two companies or in general Cisco AI?
Kevin Wollenweber: Yeah, well, first of all, the agreement that we signed is a multi year engineering agreement. So it’s not just the set of products we’re launching now, it’s an agreement that will continue to evolve and develop with the 100 terabyte silicon that’s coming and 200 terabyte silicon. And so you’ll see a more natural evolution of our engineering products coming together, congestion management, all the things that are problematic in an AI network. We’ll work on and work on solutions together. But I actually really like how Jensen calls out the kind of evolution of AI. And you know, we started with the generative AI stuff we’re in the middle of now. We’ve got this move to agentic AI which is going to change the way we think about how these networks have to work. And then we’ve got this thing that’s probably a few years out, but with physical AI where AI is going to step outside of the boundaries of the data center. And once it does that, networking and security and the ability to build solutions that can be easily deployed are even more critical. And that’s something that Cisco does really well. So I think we’re super excited about the partnership and the stuff we launch now and we’re really looking forward to how we can continue to evolve this and just build great solutions that help our customers adopt AI technology.
Patrick Moorhead: Yeah, I’m super excited about the potential opportunity in the Edge. And by the way, years ago we called IT industry 4.0, the industrial IOT. And I’ve thought a lot about, well, why is it going to work this time? And I fundamentally believe that we had all this data out there, but we didn’t have the capability to do as much with it.
Kevin Wollenweber: Exactly.
Patrick Moorhead: Quite frankly, we had to ship too much of it in one direction up, to get something done as opposed to, let’s say, shorter hops out there on the edge. And I also think we lack the common tool sets to make this work. And through history, distribution to the Edge only works if you can manage it. And even the improvements in the tools have become important. So I’m an optimist about the edge. It sounds like Cisco is, and you are too, and you’re putting investments not only organically, but also with your partners to make that happen.
Kevin Wollenweber: 100%.
Patrick Moorhead: Well, it sounds good. So, Kevin, this is great. It seems like every few months you are adding, you know, to the announcements and actually getting a lot of work done at the same time. So hopefully we can check in and see how things are going.
Kevin Wollenweber: Anytime. I love doing these and just reach out and I’m happy to do this.
Patrick Moorhead: Thanks, Kevin. I appreciate that.
Kevin Wollenweber: Thank you.
Patrick Moorhead: This is Patrick Moorhead in the Cisco booth at GTC 2025. We are talking about simplifying AI for the enterprise. Listen, hyperscalers are one thing. Enterprise and smaller CSPs, it’s a totally different piece. You have to have the infrastructure in place. Sure, we’re talking about GPUs, a little bit of CPU compute, a little bit of storage, but networking is critical to figuring it out. And oh, by the way, you need to secure that entire data estate. And every time that data moves anywhere, any hop out there, you need to be instrumented to make sure that you’re reducing that threat. So hit that subscribe button. Check out all the Cisco content here on the Six Five and also on GTC. We’ve had some incredible conversations here. Take care. Hit that subscribe button.
Author Information
Six Five Media is a joint venture of two top-ranked analyst firms, The Futurum Group and Moor Insights & Strategy. Six Five provides high-quality, insightful, and credible analyses of the tech landscape in video format. Our team of analysts sit with the world’s most respected leaders and professionals to discuss all things technology with a focus on digital transformation and innovation.