In this vignette of The 5G Factor, Ron Westfall and Clint Wheelock provide their perspectives on the 5G ecosystem dimensions of the NVIDIA and SoftBank collaboration on a platform for generative AI and 5G/6G apps based on the NVIDIA GH200 Grace Hopper Superchip.
The discussion spotlighted:
NVIDIA and SoftBank Collaborate on Gen AI and 5G/6G Apps. NVIDIA and SoftBank are collaborating on a platform for generative AI and 5G/6G apps based on the NVIDIA GH200 Grace Hopper Superchip, in support of SoftBank’s plans to roll out new, distributed AI data centers across Japan. The platform will use the new NVIDIA MGX reference architecture with Arm Neoverse-based GH200 Superchips, and is expected to improve the performance, scalability, and resource use of application workloads. We evaluate the prospects of the NVIDIA MGX architecture in moving the mobile ecosystem to scale and broaden adoption of a wide array of apps such as AI, HPC, and NVIDIA Omniverse.
Watch The 5G Factor show here:
Or, you can watch the full episode here, and while you’re there, subscribe to our YouTube channel.
Listen to the full episode here:
If you’ve not yet subscribed to The 5G Factor, hit the ‘subscribe’ button while you’re there and you won’t miss an episode.
Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.
Transcript:
Ron Westfall: And with that, Clint, did you see any other key developments that demonstrate AI and 5G are coming together and making a big impact?
Clint Wheelock: Well, I mean, I tell you, it’s hard to avoid the intersection of AI and 5G, just like it’s hard to avoid the intersection of AI and anything in the market right now. So in any case, absolutely, there are a number of stories we could pick from, but I thought one of the more interesting ones was the news from Nvidia and SoftBank about their collaboration on powering SoftBank’s data center build using the Nvidia Grace Hopper Superchip, which is utilized for generative AI and 5G, and I think ultimately for 6G as well. And Ron, I know you’ve been following this story too, and I’m very interested in your thoughts, but a couple of initial impressions on my end, and then it’d be great to dive into your perspective too. I know SoftBank has been building data centers that host generative AI and wireless applications on a multi-tenant common server platform all across Japan, with the goal of reducing costs and improving fabric-wide energy efficiency. And this all includes a top priority of continuing to advance SoftBank’s infrastructure to attain greater performance using AI, including optimization of the radio access networks themselves.
And in addition, SoftBank has signaled that they’re expecting AI to help reduce energy consumption, going back to our earlier theme, and to generate a network of interconnected data centers that could be used to share resources and host a rapidly expanding range of different generative AI applications.
So I know you took a look at this one too, Ron. I mean, what are your thoughts? What do you think are some of the key implications with the Nvidia and SoftBank news here?
Ron Westfall: Yeah, right on, Clint. And I think, first of all, we can anticipate that what SoftBank is doing is going to be emulated in other parts of the world. And when you drill down here, what we’re seeing is that the platform will use the Nvidia MGX reference architecture, which leverages the Arm Neoverse-based GH200 Superchips. And this is designed specifically to improve performance as well as scalability and the use of resources for emerging application workloads.
Now, when you look specifically at Nvidia Grace Hopper, along with the Nvidia BlueField-3 data processing units, they are designed to accelerate software-defined 5G vRAN as well as generative AI applications without having to use bespoke hardware accelerators or specialized 5G CPUs. And in addition, the Nvidia Spectrum Ethernet switch with BlueField-3 is set to deliver more precise timing protocols for 5G implementations. So as a result, the solution is designed to improve 5G speed on an Nvidia-accelerated 1U MGX-based server design, which is natural, but it can also deliver 36 Gbps of downlink throughput capacity. And what this means is that Nvidia is asserting competitive differentiation. Naturally, the shot at CPUs is targeted at players like Intel and AMD. But what they’re claiming is that when you look at the available data across 5G accelerators, they can now come out with a competitive advantage at this juncture. But stay tuned, we know that this is a back-and-forth battle. So this is just a way for Nvidia to gain more attention for its 5G proposition.
However, from my view, I believe operators have struggled somewhat in delivering high-speed downlink capacity using at least today’s industry-standard servers. So now we’re seeing advances in server design to accompany advances in the design and architecture of the chips themselves, and of course the systems that are implementing them. So again, this is pointing to the 5G realm becoming more interesting here in the near future.
And I think one thing that’s also important to note is that Nvidia MGX is a modular reference architecture that is aimed at enabling system manufacturers and hyperscale customers to build over 100 different server variations to suit their needs for AI, HPC, and Nvidia Omniverse applications. So this is basically catering to reality. We know it’s complex out there, there are many different vendor-supplied solutions, but I think it’s intriguing that Nvidia designed something that’s specifically adapted to being able to run over any server implementation. So we’ll see how that plays out, naturally. That’s something I think will gain some attention.
Also, by incorporating Nvidia’s Aerial software for cloud-native 5G networks, the 5G base stations can allow operators to dynamically allocate compute resources, and what that means is that they can potentially double their power efficiency gains. And so naturally, we know that there are sustainability initiatives out there, as well as even sustainability mandates. And this is something the operators, as well as the cloud providers and everybody else in the 5G ecosystem, are obviously keeping a close eye on: improving 5G performance while also meeting sustainability goals.