In this episode of The 5G Factor, our series that focuses on all things 5G, the IoT, and the 5G ecosystem as a whole, we look at the recent key AI RAN and telco AI moves, especially alliance formations unveiled before and during Mobile World Congress 2024. The major AI RAN and Telco AI developments include the near-term and long-term ecosystem impact of the AI-RAN Alliance launch, HEAVY.AI’s progress in the key GenAI areas of accuracy and speed, the official debut of the Global Telco AI Alliance (GTAA), and the AI Alliance’s focus on responsible AI.
Our analytical review focused on:
AI-RAN Alliance Debuts: How Substantive? The AI-RAN Alliance, a new collaborative initiative aimed at integrating AI into cellular technology to further advance RAN technology and mobile networks, debuted at MWC 2024. The alliance’s founding members include AWS, Arm, DeepSig, Ericsson, Microsoft, Nokia, Northeastern University, NVIDIA, Samsung Electronics, SoftBank, and T-Mobile. The group’s mission is to enhance mobile network efficiency, reduce power consumption, and retrofit existing infrastructure, setting the stage for potentially unlocking new economic opportunities for telecommunications companies with AI, facilitated by 5G and, further out, 6G. We examine how network operators in the alliance are set to spearhead the testing and implementation of these evolving technologies developed through the collective research efforts of the member companies and universities, as well as the near-term and long-term ecosystem prospects for AI RAN.
HEAVY.AI Showing Accuracy and Speed Breakthroughs. HEAVY.AI announced HeavyIQ, designed to bring LLM capabilities to the GPU-accelerated HEAVY.AI analytics platform, with the goal of enabling organizations to interact with their data through conversational analytics. Users can explore their data with natural language questions and generate advanced visualizations of that data. This streamlined process could reduce the friction of traditional business analytics, allowing more users to swiftly uncover insights. With HeavyIQ, HEAVY.AI has taken an open-source LLM foundation model and extensively trained it to perform core analytics tasks, including analyzing massive geospatial and temporal data sets. The technology employs an LLM in conjunction with retrieval augmented generation (RAG) capabilities to take a user’s text input, automatically convert it into a SQL query, and both visualize and return natural language summaries of the results. We review why HEAVY.AI is providing benefits in areas such as accuracy. Trained with over 60,000 custom training pairs, HeavyIQ benchmarks as more accurate than GPT-4, achieving 90%+ accuracy on common text-to-SQL benchmarks, compared to 85% for GPT-4. For speed, HeavyIQ leverages optimized and fine-tuned smaller models that take advantage of the latest NVIDIA GPU hardware innovations to deliver responses up to 10x faster than GPT-4.
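To make the text-to-SQL pattern concrete, here is a minimal sketch of the general flow described above: a natural language question, schema context retrieved RAG-style, generated SQL executed against a database, and a plain-language summary of the results. This is not HEAVY.AI’s implementation; generate_sql() and summarize() are hypothetical stand-ins for calls to a fine-tuned LLM, and the table is invented for illustration.

```python
# Illustrative conversational-analytics flow: question -> retrieved schema
# context -> generated SQL -> executed results -> natural-language summary.
# NOT HEAVY.AI's implementation; the LLM calls are stubbed placeholders.
import sqlite3

SCHEMA_DOC = "towers(tower_id INTEGER, region TEXT, avg_throughput_mbps REAL)"

def retrieve_context(question: str) -> str:
    # A real RAG step would embed the question and fetch the most relevant
    # schema and column documentation; here there is only one table to return.
    return SCHEMA_DOC

def generate_sql(question: str, context: str) -> str:
    # Stand-in for the LLM call that turns the question plus context into SQL.
    return ("SELECT region, AVG(avg_throughput_mbps) AS avg_mbps "
            "FROM towers GROUP BY region ORDER BY avg_mbps DESC")

def summarize(question: str, rows: list) -> str:
    # Stand-in for the LLM-generated natural-language summary of the results.
    region, mbps = rows[0]
    return f"{region} leads with an average throughput of {mbps:.1f} Mbps."

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE towers (tower_id INTEGER, region TEXT, avg_throughput_mbps REAL)")
conn.executemany("INSERT INTO towers VALUES (?, ?, ?)",
                 [(1, "North", 212.0), (2, "North", 198.0), (3, "South", 171.5)])

question = "Which region has the best average throughput?"
sql = generate_sql(question, retrieve_context(question))
rows = conn.execute(sql).fetchall()
print(sql)
print(summarize(question, rows))
```

A production system would ground the SQL generation in richer retrieved metadata, such as column descriptions and sample values, rather than a single hard-coded schema string, which is where the RAG step earns its accuracy gains.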
Global Telco AI Alliance Officially Takes Off. Deutsche Telekom, e& Group, Singtel, SoftBank, and SK Telecom officially launched the Global Telco AI Alliance (GTAA), following its pre-announcement in July 2023. The telcos also announced plans to establish a joint venture, through which the companies plan to develop LLMs specifically tailored to the requirements of telecommunications companies. The LLMs will be designed to help telcos improve their customer interactions through digital assistants and chatbots. We explore why it is important to the mobile ecosystem for the JV to develop multilingual LLMs optimized for languages including Korean, English, German, Arabic, and Japanese, with additional languages to be agreed among the founding members, and why telco-specific LLMs aligned to the telecommunications domain can prove better at understanding user intent.
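As a rough illustration of why per-language, telco-tuned models matter for intent handling, the sketch below routes a customer query to a language-specific model and maps it to a telco intent. The GTAA joint venture has not published an API; detect_language() and telco_llm_intent() are hypothetical placeholders for a language-identification model and a telco-domain LLM.

```python
# Illustrative sketch only: route a query by language, then resolve a telco intent.
# All model calls are stubbed placeholders, not GTAA software.
SUPPORTED_LANGUAGES = {"ko", "en", "de", "ar", "ja"}

def detect_language(text: str) -> str:
    # Placeholder: a production system would use a language-identification model.
    return "de" if "Rechnung" in text else "en"

def telco_llm_intent(text: str, lang: str) -> str:
    # Placeholder for a per-language, telco-domain LLM. The value of a telco-specific
    # model is that domain terms (e.g., "Rechnung", roaming, eSIM) map cleanly to
    # intents a general-purpose model might miss.
    return "billing_dispute" if "Rechnung" in text else "plan_upgrade"

def handle(query: str) -> str:
    lang = detect_language(query)
    if lang not in SUPPORTED_LANGUAGES:
        return "escalate_to_human_agent"
    return telco_llm_intent(query, lang)

print(handle("Meine Rechnung ist diesen Monat viel zu hoch"))  # -> billing_dispute
```

The design point the GTAA partners are making sits in that middle step: a telco-domain model tuned per language should resolve customer intent more reliably than one general-purpose LLM stretched across every market.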
AI Alliance Prioritizes Responsible AI. In December 2023, IBM and Meta launched the AI Alliance in collaboration with over 50 founding members and collaborators. The AI Alliance is focused on cultivating an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigor, trust, safety, and economic competitiveness. By bringing together developers, academics, scientists, and other innovators, the alliance looks to pool resources and knowledge to address safety issues. We assess why the AI Alliance is integral to fulfilling ecosystem-wide objectives, including deploying benchmarks, evaluation tools, and standards that enable the responsible development and use of AI systems (i.e., Responsible AI), as well as supporting AI skills building and exploratory research.
Watch The 5G Factor show here:
Listen to the audio here:
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Transcript:
Ron Westfall: Hello and welcome everyone to The 5G Factor. I’m Ron Westfall, Research Director here at The Futurum Group, and I’m joined here today by my distinguished colleague, Tom Hollingsworth, the networking nerd and event lead at Tech Field Day here at The Futurum Group. And we will be focusing on the major 5G ecosystem developments that have caught our eye. And today we’ll be reviewing basically the inception of AI-RAN, specifically the AI-RAN Alliance and other important AI related alliance developments. And so with that, Tom, welcome again to our show. This is basically a sequel to your debut from just the other week. And how have you been bearing up between episodes? What is going on?
Tom Hollingsworth: It has just been a busy couple of weeks. I’ve been working on our next Mobility Field Day event, including some enterprise wireless companies and some private 5G companies. It’s kept me pretty busy working on that, but the good news is that I obviously didn’t screw up enough on the first episode, so you’re like, “Let’s get him back and see what he has to say on this.”
Ron Westfall: Exactly. And that is a thumbs up. And yeah, I appreciate that plug for Mobility Tech Field Day. I’m definitely looking forward to it. I’ll be there, I’ll be participating as a delegate. And yes, I think it’s going to be a really good one, and we’re looking to diversify somewhat the participants. And stay tuned. We will definitely be addressing this before May 15th and 16th, the two main days of that key event. And so with that taken care of, let’s dive right in. At MWC 2024, we saw the unveiling of the AI-RAN Alliance, which is a new collaborative initiative that is aimed at integrating AI into cellular technology to further advance RAN technology and mobile networks. And this is begging the question as to whether AI-RAN is the real deal, or is AI-RAN so far away, to paraphrase the ’80s new wave poets, A Flock of Seagulls.
And so with that in mind, let’s look at what’s real and what’s not with the AI-RAN Alliance and specifically AI-RAN. Now, bringing together the technology industry and academic institutions, the alliance’s founding members, I believe, are off to a good start because they include heavyweights like AWS, Arm, DeepSig, Ericsson, Microsoft, Nokia, Northeastern University, NVIDIA, Samsung Electronics, SoftBank, and T-Mobile. So what’s encouraging here is that it’s a diverse group. It’s not just one set of players, or, say, only a two-organization collaboration type of initiative. And so with that, the group’s mission is to enhance mobile network efficiency, reduce power consumption, which is very important and something I’ll touch on a bit more, and retrofit existing infrastructure to set the stage for unlocking new economic opportunities for the telecommunications companies, with AI facilitated and augmented by 5G today, and then further out 6G.
The alliance members will use their expertise and collective leadership to focus on three main areas of research and innovation. The first one is AI for RAN, and that is advancing RAN capabilities through AI to improve spectral efficiency. Second is AI and RAN, integrating AI and RAN processes to use infrastructure more effectively and generate new AI-driven revenue opportunities. And third is AI on RAN, and that is deploying AI services at the network edge, through the RAN, to increase operational efficiency and offer new services to mobile users. And I don’t think we expect people to memorize that three-pronged approach, but it’s important to know what the focus is and what is going to be moving the needle, hopefully, at least by later this year, if not a bit further out. Now, the network operators in the alliance, I see, are set to spearhead the testing and implementation of these evolving technologies, developed through, again, the collective research efforts of the member companies as well as the universities.
With that, the AI-RAN Alliance is invoking how AI, alongside digital twin optimization, can be harnessed for mobile networks and the transformation of the telecom industry. And we see digital twins definitely playing an integral role here. We certainly saw, for example, during the GLOMO judging process, a lot of the key innovation occurring on the digital twin side. And it’s not unique to the telecom industry, naturally, but we can already bet that it’s going to play a major role. And here are a couple of reasons why. First of all, it brings to mind the HEAVY.AI framework, built on NVIDIA Omniverse, that is aimed at optimizing wireless site placements to reduce the cost and complexity of network operations and, naturally, improve the customer experience. Now, advanced data analytics is playing a key role, as HEAVY.AI today offers solutions such as HeavyRF, which is aimed at delivering network planning and operations tools based on the NVIDIA Omniverse platform, which is really designed for creating digital twins.
Now, as a result, network operators, engineers, and data scientists can analyze and visualize complex telco data records at scale with HEAVY.AI. Now, this includes running millions of complex calculations aimed at reducing interference between the innumerable towers of 5G networks using existing network data sets, scenario models, and coverage data, and operators such as Canada’s Telus and Chile’s Entel are already on board with this approach. And plus, just this month they announced HeavyIQ, which is designed to bring LLM capabilities to the GPU-accelerated HEAVY.AI analytics platform, with the goal of enabling organizations to interact with their data through conversational analytics. And this is allowing users to, again, explore their data with natural language questions and generate advanced visualizations of that data, which I think is also important because we’re not just talking about large language models but also looking at objects, looking at visualization, and so forth.
And so this, I think, is going to be increasingly critical. And the streamlined process can potentially reduce the friction of traditional business analytics, allowing users to quite simply uncover insights more quickly. Also, with HeavyIQ, HEAVY.AI has taken an open source LLM foundation model and heavily trained it to excel at core analytical tasks, including analyzing massive geospatial and temporal data sets. The technology employs the power of an LLM in conjunction with retrieval augmented generation, or quite simply RAG. I think that term has really picked up a lot of momentum. And these capabilities will take a user’s text input, automatically convert it into a SQL query, and both visualize and return natural language summaries of the results.
Now, one major benefit that is being claimed is in the area of accuracy. Trained with over 60,000 custom training pairs, HeavyIQ is ready to deliver what I believe is breakthrough accuracy. And how do we know? Benchmarks are showing HeavyIQ to be more accurate than GPT-4, with 90% plus accuracy on common text-to-SQL benchmarks compared to 85% with GPT-4. And for speed, HeavyIQ is using optimized and fine-tuned smaller models that take advantage of the latest NVIDIA GPU hardware innovations, which were clearly introduced with much fanfare at the most recent NVIDIA event, AI Woodstock as it’s being dubbed. And as a result, it’s enabling responses that are up to 10 times faster than GPT-4. And so really the LLM arms race is gaining momentum. And with all that, Tom, from your perspective, what are the key takeaways from the AI-RAN initiative, and also what’s going on with the alliance?
Tom Hollingsworth: I think they’ve got the right idea here in that they’re trying to use AI capabilities to work within a finite resource. You’re not going to manufacture bandwidth or spectrum out of nowhere. And so in order to be able to use what you’ve got more effectively, what you have to do is figure out the best way to get the packets from where they start to where they end. And I think that the digital twin technology is probably the key here, if you want to think about it that way. If you’re not familiar with the digital twin, my friend Mike Lossmann over at Forward Networks has the best explanation for a digital twin ever, because you use one every day when you open up the maps application on your phone. It is a virtualized representation of a real world, the road system that we use. And what capabilities do we have now in that real-world road system?
Well, before, it would just tell me, “Go this route,” but now it can be augmented with additional analytics data to say, “Oh, there’s a traffic problem here. Do you want to reroute and go this way, which is a little bit faster?” What it does is it takes that digital representation, and it takes that data and allows you to adjust so that there’s less congestion. Because not only is sending you on that different route going to make your trip better, a better user experience, it’s also going to reduce the congestion in that area to help clear it faster. Apply that to limited spectrum. We have seen an explosion of applications that run on mobile devices now that we have access to 5G technology. That’s only going to grow as more of those handsets get deployed, as they become basically the default, and as we move into what will eventually become 6G. You have to have a way for your analytics system to be able to figure out the best way to get traffic where it needs to go.
And you can’t do that in real time on the hardware without assuming some form of risk. Anyone who’s ever worked on network devices, and is just like, “Okay, I’ll just type this command in and see what happens,” knows exactly what I’m talking about. There is the possibility that something could go sideways. With a digital twin, you have a virtual representation of a real system that you can poke and prod and do all kinds of things to, and understand, “Well what happens if we bring this backup link online? Or let’s say we’re going to steer 20% of our traffic to this other tower over here.” That is powerful. And the best part is that the users won’t even know. They don’t have to see this. There’s no popup that says, “Ooh, your cellular connection is kind of bad. Would you like to join this local WiFI network?”
No, all of that happens transparently in the background, which means customer satisfaction scores go up. That’s where that AI on RAN capability is headed, in the succinct paragraph that they put down there. They’re going to introduce new services at the edge; that’s what customers want. But those new services are going to consume additional bandwidth, and as users spend more time on their phones, that means networks are going to get more congested. So without the AI capabilities, the AI and RAN and the AI for RAN that are doing all that stuff in the background, I think that the additional services piece isn’t going to take off as much as the cellular companies would like.
Ron Westfall: And those are excellent insights, and they automatically generated two thoughts that I think, Tom, are very much related to this. As we saw at GTC, Jensen Huang made an important point of pointing out what you just illustrated. And that is, a digital twin is more than a simulation. It is truly interactive. It is just like the Google Maps application that we use on a daily basis, except it’s being applied on a vast scale, i.e., the telco network. And that requires basically the power of AI-enabled training of the LLMs and other capabilities. And so this is something that is not only about, say, spectral efficiency; one other key part is energy efficiency. Because as we’ve seen, if 5G is hypothetically deployed in the same manner as LTE, that would actually generate a threefold increase in power requirements. And clearly the operators want to avoid that. And I think they’re making progress in that area.
To your point about workloads coming on and driving up bandwidth demand and so forth, that is pretty much going to keep rising for the foreseeable future. And AI workloads are certainly playing a starring role there. And I think that ties into my second point, which is that digital twins, I believe, can make a huge business impact. In fact, we saw last November in Australia the Optus network meltdown, and what happened? Well, it was what you were using as an example: typing in a command, it went sideways, and boom, it resulted in a lot of fallout, including the ultimate resignation of the CEO. Now, if a digital twin had been in place, there is a very strong likelihood that it would have been avoided. And I think we’re seeing that in places like China, where we’re not seeing those types of meltdowns, or at least the odds are improving dramatically.
If the telcos have a digital twin in place, they can avoid severe outages and all the other issues that could come up without a digital twin. And that’s been ongoing up through at least the end of 2023. So hopefully in 2024 there will be less of that kind of thing. And since we’re talking about, again, telco AI related alliances, now let’s turn to the GTAA. And so that is asking the question, who is the GTAA? Well, at the show, Deutsche Telekom, e& Group, Singtel, SoftBank, and SK Telecom officially launched the Global Telco AI Alliance, or GTAA. Now, the GTAA was pre-announced in July of ’23. And so this is good news because it’s now a going concern. They’re really going to be executing on their blueprint, their vision, now. And so during the launch event, the telcos also announced plans to establish a joint venture through which the companies will develop LLMs specifically tailored for the requirements of telecommunications companies.
Now, the LLMs will be designed to help telcos improve their customer interactions through digital assistants and chatbots. And those clearly had a starring role at the MWC 24 gathering. That is where we saw the most demos. That is where we had a lot of the conversation. But that’s not to distract us, at least, from what’s going on with AI/ML capabilities that have been around for many years and are also being improved. But the twain, I believe, will come together more and more, and we’ll address this in our talking points right now. Now, the partners noted that the main goal of the JV is to develop multilingual LLMs that are optimized for languages including Korean, English, German, Arabic, and Japanese, with plans for additional languages to be agreed among the founding members.
Now, compared to general LLMs, telco-specific LLMs, we see, are going to be required, so that models aligned to the telecommunications domain are better at understanding user intent and also what partners are requesting and so forth. So clearly, there’s a value chain dimension here. It’s something that the ecosystem has to come up and meet the challenge on. And I will certainly be interested in what other languages will be coming down the line. But I think it’s also showing that in order for AI to reach its full potential, at least in the telco space, you have to have language-specific LLMs, or naturally you’re not going to be getting the benefits across major countries, across major regions, and so forth. And clearly the 5G ecosystem can ill afford that. And so with that, Tom, what are your thoughts on the GTAA and telcos taking LLM development and training into their own hands?
Tom Hollingsworth: I think it’s important for them to be able to increase their customer satisfaction scores. And let’s be fair, the chatbot is kind of like, if you think of the crawl, walk, run methodology of building things, crawl is most definitely the chatbot application of AI. And this tracks with something that my friend and partner, Stephen Foskett, at Gestalt Tech Field Day figured out about AI last year. Initially it was the hammer for every nail that you had, and then it turns out that’s actually not a good case for it. But one thing that it is really good at is translating; it can take an input and it can translate it into a different language. Look at who is involved in this alliance. You’ve got Deutsche Telekom. You’ve got SK… What was it? SK Telecom. You’ve got SoftBank. I promise you that SoftBank does not “sprechen Sie Deutsch,” and SK Telecom, “karera wa nihongo o rikai shite imasen,” they do not understand Japanese.
So one of the things that they’re going to have to do is they’re going to have to figure out how to open this up to each other. And that is the power of what an LLM that has been properly trained can do. A user in Japan can enter a query in their native language that can then be translated into a different language to query support docs in a different country, return a result, and retranslate that back into the native language. And you and I both know when it comes to translating anything, you lose a little bit of the nuance in it. Now imagine trying to translate support docs, that’s even worse. And so by keeping the original in the native language, you can continually rely on that without making a copy of a copy of a copy. And so that means that customer satisfaction scores and employee satisfaction scores are going to continue to go up. Because I know that in my old days as a network engineer, man, I would tear my hair out anytime I couldn’t find a really good reference for something.
And that was when I was working on American equipment in America with American customers. When you start spreading this globally, it is a huge issue that you have to deal with. And consider that the majority of the RAN technologies being developed are truly global. I mean, Ericsson, Huawei, Nokia. Those are three companies that are based in three different countries. And so you’re going to have to figure out how to make all this work. And that’s one of the ways that an LLM can tie all that together. And I think that as that chatbot mentality fills itself out, as you see the software prove that it is capable of doing that thing, that is a bedrock that you can build on to enhance more capabilities. Now, I think that the difference between the two that we talked about is that the AI-RAN Alliance seems to be very focused on the infrastructure and improving it.
It feels like the GTAA is very customer focused. We’re going to implement AI capabilities to improve our customer retention, so that we aren’t constantly losing customers to our competitors when something happens, or there’s a billing issue or something like that. So I think that it feels wildly divergent now, and I’m curious to see which one wins out. Does improving your network performance and having middling support capabilities keep your users happy, or is having the same tried and true technology that you’ve had so far, but making the customer support experience better, something that customers are really interested in having?
Ron Westfall: Yes, excellent insights. And I think it brings out another important point that has gotten some attention, but not to the same level as customer satisfaction and customer experience, and that is workforce satisfaction and workforce experience. And I think that is a very good segue into our next topic, which is that in addition to these two alliances literally debuting, or at least officially, at MWC 24, we also saw the AI Alliance, just the AI Alliance, come out in December of ’23. And this is IBM and Meta launching it in collaboration with over 50 other founding members and collaborators. And basically it’s a who’s who of major software suppliers and equipment providers and so forth, along with academic institutions, et cetera. Now, the AI Alliance is focused on cultivating an open community and enabling developers and researchers to really accelerate responsible innovation in AI while ensuring scientific rigor, trust, and economic competitiveness.
Now, by bringing together developers, academics, scientists, and other innovators, I anticipate the alliance will look to pool resources and knowledge to address safety issues. And I think this is going to be increasingly important across the board, and certainly the telecom industry is no exception. That is, it can be characterized simply as responsible AI, that is, minimizing, for example, hallucinations or drift, as well as building in ethical safeguards, so we’re not generating information that is completely inaccurate or unethical, et cetera. Now, the key objectives include deploying benchmarks and evaluation tools as well as standards to enable, again, that responsible development and use of AI systems. And this also includes AI skills building and exploratory research. And while there were no telcos listed among the founding members, this does bring to mind that the GSMA and IBM, one of the founding members, unveiled their collaboration in the lead-up to the show to support the adoption and skills of AI in the telecom industry through the launch of GSMA Advance’s AI training program and the GSMA Foundry Generative AI program.
Now, the AI training program, the first in a new series of courses by GSMA Advance, is looking to prepare telco decision makers for the AI era and to bridge the skills gaps across the telecom industry. And that is by equipping members with the skills and knowledge to help leverage GenAI technologies and AI assistants using watsonx, IBM’s AI and data platform. And I think we’re going to hear more about watsonx during the course of this year. It’s basically, I think, one of the shining stars across the entire AI firmament. Now, for background, IBM’s latest AI adoption index found that 40% of telecoms surveyed are exploring or experimenting with generative AI, we already touched on that, and 45% have accelerated the rollout of AI itself.
So this is, again, reinforcing that the momentum is there, the investment and prioritization are there, so expect even more.
Also, research from GSMA Intelligence, always an outstanding source of information, shows that 56% of operators surveyed are actively trialing generative AI solutions, a rate higher than any other priority technology, though this adoption is less prevalent among mid-size and smaller operators surveyed. And this is nothing new. We’ve seen this already, for example, with Open RAN. It’s basically the leading telecom operators, that is, the ones with the biggest footprint, that have the resources to be able to invest and drive the innovation, and thus see how Open RAN can be implemented. And then the lessons learned, the takeaways, can be implemented by others across the entire ecosystem. And it’s important that everybody is on the same level because of interoperability, because of standards support, and so on. And so with that, Tom, what is your viewpoint about the AI Alliance, what is leaping out, and how can it really help telcos specifically?
Tom Hollingsworth: I feel like the AI Alliance grew out of all that stuff that happened during the OpenAI leadership crisis, if you want to call it that. And there were all these weird rumors that had come out that they were building some kind of weird artificial general intelligence. And let’s be fair, I’ve heard everything. It’s going to become Skynet, it’s going to wipe out the human race, Roko’s basilisk, you name it. I don’t believe that that’s the case. But here’s what I do believe could happen. AI, whether it is some kind of an assistant or a copilot application that’s kind of assisting us in being able to figure out more efficient ways to run our systems, could say things like, “You have an awful lot of bandwidth dedicated to this circuit over here. If you reduce that a little bit, you could save X number of thousand dollars a year,” without realizing that the reason why the bandwidth is dedicated to that circuit is because it’s an emergency services circuit. That’s for people to dial and hit their PSAP.
You want that to be there. You need that to be there because if not, you’re violating federal law. Ask AT&T how it feels to violate federal law. I would tell you to go look up the case notes yourself, except they’ve been locked because there’s a federal investigation into that right now. So you have to understand that there are certain things that AI will probably suggest that you do that it really shouldn’t do. AI doesn’t know that widows and orphans don’t have a lot of money, and therefore they have to have lifeline services. It’s just going to say, “Oh man, these circuits over here are costing you a small fortune.” Or think about something as simple as powering off services in the middle of the night on the weekend to save money in an edge deployment. Except when do people tend to get hurt and need to look up things? Not during business hours. If you have to Google the symptoms of a heart attack at midnight on a Saturday, you don’t want the services to have to spool up for an extra 30 seconds because AI powered them off to save $300 a month on this tower’s electric bill.
So being able to put these ethical considerations into an AI algorithm from the very beginning means that you don’t have to train it later. Because these are the kinds of things that you don’t want to have to train through failure. You want them to understand this from the beginning. In fact, you might even want to have them err on the side of caution, so that you don’t have to say, “Yeah, you’re cutting this link just a little bit too short. Maybe we need to give it a little bit of headroom in order to keep this from happening.” Because unfortunately, people don’t live and die by percentages. They live and die by access to emergency services, by what I now consider to be a lifeline service in cellular technology.
It used to be that you had to have a home phone. Now most people don’t. But when you look at the fact that they schedule, bank, and do all kinds of other things through a mobile device, that means that the carriers offering those services are considered to be critical infrastructure. And having an AI making suggestions to make them more operationally efficient that impact their customers in a negative way, in a way that could lead to injury or loss of life, that’s a huge problem. So I would rather have organizations like IBM and Meta sitting out here going, “Let’s make sure that doesn’t happen from the very beginning,” as opposed to having to bolt it on later and run it through an extra series of checks.
Ron Westfall: Exactly. And I couldn’t agree more. In fact, I think it is just that, that we have to have this built in from the get-go. And to your point, Tom, this is something that is really at the beginning of the journey, but the one big difference is that the investment prioritization is already there. This is something that is going to continue to gain momentum, at least through the rest of 2024 and probably into the foreseeable future. This is not going to perhaps have the same type of gyrations as, say, Open RAN, as a near example. And so with that pragmatic example in mind, one that will certainly keep our emergency services up and running during the entire day and night, I thank you again, Tom, for joining our show. And I’m definitely looking forward to having another conversation here as we’re leading up to Mobility Tech Field Day in mid-May.
Tom Hollingsworth: Yeah, I’m really excited for that. Like you said at the top of the show, it’s May 15th and 16th. We have great presenters already lined up, folks like Juniper and Fortinet and Arista, and now Celona, one of the biggest names in private 5G, and we should be adding some more names to that list very shortly. So techfieldday.com is the place to find all of that information.
Ron Westfall: Yes, very exciting. And I know I’m definitely looking forward to it. And to our viewing audience, thank you again for joining us and listening in. And as always, please remember to bookmark us and look to our upcoming 5G Factor webcast. And with that, thank you everyone and have a wonderful 5G Day.
Other insights from The Futurum Group:
5G Factor: Telco GenAI’s Early Market Impact
5G Factor: Key MWC24 Takeaways – The Cloud and Telcos
5G Factor: Key MWC24 Takeaways – Open RAN
Author Information
Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.
He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.
Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.
Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.