DeepSeek’s Ecosystem Implications, Samsung’s Galaxy Unpacked, & More – Six Five Webcast: The 5G Factor

On this episode of the Six Five Webcast: The 5G Factor, host Ron Westfall, along with analyst Olivier Blanchard, dives into the latest developments in the mobile and AI landscapes. They discuss DeepSeek R1, Galaxy Unpacked, and the prospects of operators like AT&T and Verizon using their real estate and AI infrastructure to monetize AI services.

Their discussion covers:

  • The introduction of DeepSeek’s R1 model, its impact on AI ecosystems, and the geopolitical considerations involving NVIDIA.
  • Samsung’s unveiling of new Galaxy devices aimed at enhancing AI mobile experiences.
  • The potential for operators like AT&T and Verizon to monetize their AI capabilities and real estate assets.

Watch the video below and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: Six Five Webcast: The 5G Factor is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Ron Westfall: Good day, everyone. Welcome to The 5G Factor. I’m Ron Westfall, Research Director here at The Futurum Group. And today, it’s a good one. I’m joined by my esteemed colleague Olivier Blanchard, a fellow Research Director and Practice Lead for our AI devices practice here at The Futurum Group. In fact, Olivier just recently returned from the Samsung Unpacked event. And with that, I think we have some good insights on what’s going on in the 5G and mobile ecosystem realms. We’ll be focusing on just that, the developments of recent vintage that merit our attention. And so with that, Olivier, welcome back to The 5G Factor. How have you been bearing up since the last time you were on The 5G Factor?

Olivier Blanchard: I can’t even remember the last time I was on The 5G Factor. I’ve been dodging your invites for a while. Not because I wanted to; I’ve just been on the road, it seems, a lot more in the last six months than I ever have. I’ve been doing well. It’s hard to keep track and to keep pace with all the changes in the industry, and the hurricanes, and all of the everything that’s happening all at once. Yeah, it’s good to have a minute to be back and catch up with you.

Ron Westfall: Yes, indeed. Perfectly understandable, and right you are. It’s been a very kinetic last, well, year-plus really, overall. I think our first topic that we’re kicking off has played a major role. Yes, AI, again. However, this time, this is something that I think is really galvanizing attention. Hopefully by the time our recording is published, it will still be relevant, because matters are moving very quickly. Just on January 20th, China-based DeepSeek released R1, its reasoning model that is basically outperforming OpenAI’s latest o1 model. That has been verified in various third-party tests. What is interesting here is that, apparently, 150 researchers at a Chinese hedge fund, also known as DeepSeek, have outflanked the entire work of tens of thousands of engineers, and basically almost the entire Western scientific community. With what? Just a handful of modified NVIDIA H800 GPUs. Well, let’s see. Let’s stay tuned on that.

More likely, at least from my perspective, is that DeepSeek was trained on more than 50,000 H100 GPUs. And while that result is impressive, it doesn’t automatically mean that the AI ecosystem has reached a point where it can train on materially less infrastructure. That means we still, despite this breakthrough, will probably need to continue our massive commitment to figuring out ways to optimize AI training as well as AI inferencing. That naturally links to attaining the main objective of artificial general intelligence, or AGI. Now a little more background. There are a lot of threads here, but what I’m going to focus on initially to kick off the conversation is that, according to Alexandr Wang, the CEO of Scale AI … He’s the one who basically initiated the notion that what is going on here is that the Chinese accessed NVIDIA’s advanced GPUs on a wider scale than many people realized. The reality is that the Chinese labs have more H100s than many people think. He added that his understanding is that DeepSeek has about 50,000 H100s. They really can’t talk about it because of, again, the export restrictions that the US has put in place.

As a result, they have many more chips than folks really fully understand, but obviously that perception has now changed dramatically. Elon Musk also seconded the motion. Now NVIDIA, on the other hand, has insisted that DeepSeek is an excellent AI advancement and a perfect example of test-time scaling. This is an important technique that I think will now get a lot more attention because of what DeepSeek has achieved. In addition, “DeepSeek’s work illustrates how new models can be created using this technique, leveraging widely available models and compute that is fully export control compliant,” emphasized NVIDIA. This means that inferencing will still require significant numbers of NVIDIA GPUs and high-performance networking into the foreseeable future. This aligns with NVIDIA’s view that there are now three scaling laws: pre-training, post-training, and now test-time scaling, the newest one on the block.
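For context, test-time scaling simply means spending more compute at inference time, for example by sampling several candidate answers and keeping the best one, rather than making the model or its training run bigger. Here is a minimal best-of-N sketch of the idea; the `generate_candidate` and `score` functions are hypothetical stand-ins for a real language model and verifier, not any vendor’s API.

```python
import random

# Hypothetical stand-ins: a real system would call a language model and a
# verifier or reward model here. Random values just illustrate the structure.
def generate_candidate(prompt: str) -> str:
    return f"answer-{random.randint(0, 9999)} to '{prompt}'"

def score(candidate: str) -> float:
    # A verifier would grade each candidate; random placeholder here.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend more inference-time compute (n samples) to get a better answer."""
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Doubling n doubles inference compute but tends to raise answer quality:
# that trade-off is the "test-time scaling" axis NVIDIA is pointing to.
print(best_of_n("Why is the sky blue?", n=8))
```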

In effect, NVIDIA is suggesting that DeepSeek used the export-compliant equivalent of an AMC Gremlin or Pacer and souped it up to the top-line equivalence of a Maserati or Bugatti. I know it’s a loose analogy. But wow, it is pretty remarkable if they used only export-compliant GPUs to achieve this. Well, I think there’s a lot of counter-opinion out there, and as a result there’s this mystery: what is the GPU cluster that DeepSeek actually used? I don’t think we’re really going to know for a little while. But this is definitely, I think, fueling all the speculation. Just how replicable is this? Just how cost-efficient is this really? But if the outcome is that we now have AI training that can be done on a much more cost-effective, much more energy-efficient level, then great. And it’s also open source-based. That’s good news for the rest of the AI ecosystem.

Now officially, DeepSeek claims to have used only about 2,048 NVIDIA H800 chips to train its R1 model, alongside the 10,000 older-generation A100 GPUs it had already obtained before the US imposed export controls. Now, all of this I think goes toward NVIDIA’s credibility on US export policy. Naturally, NVIDIA has to claim that the chips that were used were export compliant, and that’s understandable. However, as we dig deeper, if we take the opinion of folks like the CEO of Scale AI, then I think that means there’s more going on here than can be officially disclosed. The bottom line is that many such actions backed by China’s government, as we’ve seen with Huawei for example, constitute what can be characterized as a giant psyop. In this case, China is attempting to create a little bit more uncertainty among the investment and tech community about AI’s prospects in terms of it being primarily driven by US or Western technology know-how.

As you can see, it did effectively shift the spotlight from the half-trillion-dollar Project Stargate, which has its own set of major questions. Even apart from the DeepSeek breakthrough, we have really been looking at how we can improve scaling laws and seek efficiency with AI. Again, we’ve been using open source, RAG, fine-tuning, and forking capabilities, all these AI techniques, to allow smaller models to be more performant. We’ve already seen companies such as IBM, Meta, Mistral, and others step up and, I think, really make an impact on how this can be better achieved. That is, right-sized large language models that perform better. With that, Olivier, I know you’ve already provided some very valuable insight on this. What is your take on what’s going on here? I know this is just one aspect, but what else do you see going on?

Olivier Blanchard: Right. Well, that was a lot. I feel like the meme with the guy who’s got the board with all the red strings connecting everything. I think everything that you said is its own point, including the theory, or the hypothesis, around a Chinese psyop. Although, I would caution that if we equate the AI race between the US and China to any other arms race or space race in the past, the prevailing tactic to harm the other side, your opponent, isn’t necessarily to undermine the credibility of their model. It’s to get them to spend a lot more resources chasing the same thing, not to help them become more efficient. By releasing DeepSeek, China, assuming that the Chinese government is involved in some way in this strategy, in the timing, in the tone and tenor of this release, the objective for China should be to just let the US overspend on infrastructure and AI resources, instead of showing off this more efficient, scrappier approach that they took to achieve the same results with fewer resources. So I take the notion that China is trying to undermine US markets and US investments in AI with a little bit of a grain of salt.

But, having said that, I think most people are actually missing the point with DeepSeek. I don’t know that we’ll still be talking about DeepSeek in six months. I think that the larger trend, which is something that we’ve been talking about for the better part of the last year, is that there’s been an evolution, a trend line, in AI training and AI inference that transcends DeepSeek and any other company like it that is very likely to pop up in the next six to 18 months, and it’s this. We’re at a confluence of an increased and systemic improvement in efficiency on the training side. On inference as well, but definitely on training. Models that had to be trained in the cloud, with massive resources behind them, massive inputs of power, of chips and compute, have become so much more efficient in the last year that what used to be trainable only in the cloud can now be trained on PCs, and in some cases on mobile. We’re seeing these models shrink, and get faster and cheaper to train, just by virtue of the fact that they’re becoming more efficient.

On the other hand, we have this other element, which is that the chips are getting higher performance as well. That’s the whole value proposition behind NVIDIA’s Blackwell. You can use fewer GPUs to generate more outputs faster and more cheaply. So there might be a slightly higher upfront cost on the GPU side, but you need fewer of them to achieve the same result. Less power, less water, all of these other resources. Less footprint data center-wide. We have these two efficiencies basically working together to make AI training a lot cheaper and a lot faster. This was happening already, and it is going to continue to happen. It’s one of the things that is fueling the proliferation of AI processors and AI models from the cloud to the edge, and creating this more hybrid ecosystem of basically cloud-to-endpoint AI training and inference, which I think is the reality that we’ll be living in as early as later this year. We’ll touch on that again when we get back to our coverage of the Samsung event last week, because there’s actually an element that plays into this.

The issue with Stargate, and the enormous investment numbers that we were talking about a week ago, and this injection of excitement into AI infrastructure specifically in the United States, seemed a little bit, I don’t know, warped last week. When the announcement was made, the impression that I had, given what we just talked about, and validated by the DeepSeek announcement a few days ago and the sell-off of US tech stocks in the last 72 hours, is this: this is already happening. We don’t need to spend trillions of dollars building massive data centers that are going to house millions of chips, and servers, and racks, and require nuclear power plants, and some kind of weird re-engineering of our water supply and our water management systems. We don’t need to completely stop everything that we’re doing and build these massive projects, because the models and the chips are becoming more efficient. This federated, distributed AI training and inference is already happening, and the costs are getting lower.

So I think that what we saw last week is … Let’s go back two weeks. Two weeks ago at CES, Jensen Huang introduced his vision for the next phase of what NVIDIA is about. We’re going to talk about NVIDIA because NVIDIA has the market power and the best position in terms of GPU infrastructure and IP when it comes to training AI at scale. His model at CES, the introduction of Blackwell, all the things that he wanted to do, what he talked about in terms of training for automotive, training for industries, doing all these virtual digital doubles and digital twins of the world and different environments, is still valid. None of that has changed. But the 2025 spend reflects the reality of where technology is today, with 2025 chips and 2025 models; the entire current layer of AI solutions and AI IP is absolutely not where we will be in 2028 and 2030. I think that Stargate misses the point in that it looks like the type of spend we would put together if we assumed that these efficiency advancements, these efficiency improvements, were to stop this year. We’re looking at builds for 2030 with budgets that reflect where we are today in 2025, not where we will be in 2028, 2030, 2035.

I think that they’re a little bit bloated. I think it’s warped expectations. The front-end investments may be accurate, it may be the right dollar number, but I think it may be over-indexed on the data center side, is what I’m saying. I think that taking a more holistic approach to the entire ecosystem of research, of chips, all up and down the AI value chain, from basically wearables all the way up to the data center, is a more realistic approach. I think that we’re going to end up spending a lot less on data centers and a lot more on production, on supply chain, and on all of those middle layers to create a more hybrid, ubiquitous AI ecosystem. DeepSeek’s announcements, and obviously the market’s reaction, which I think was an overreaction, but that’s a whole other story, are just one of several proof points along the way that the rapidly increasing efficiency of AI training, and AI inference workloads for that matter, and the diminishing curve of costs associated with them, is going to be a disruptive force in some of the calculations that we’ve had about how many chips we’re going to need to power this new AI economy.

That’s not necessarily a bad thing for NVIDIA, and for AMD, and for everybody else who’s involved with this. But we should perhaps adjust these massive numbers that we’ve been looking at and look at it more in terms of diversification of chips. If you’re NVIDIA, for instance, or if you’re looking at NVIDIA in your investments … I’m not giving investment advice, so don’t take it as that. If you’re looking only at NVIDIA for its data center GPUs, you’re missing the point. You need to look at NVIDIA all up and down the value chain, from PCs, and devices, and IoT, all the way up to the data center. If you’re looking at other companies like Qualcomm, or Intel, or AMD, or MediaTek, for instance, you also need to look at those device layers and those intermediate layers where there’s going to be a lot of scale to deliver AI organically through this hybrid model. If that makes any sense. I know that’s a lot.

Ron Westfall: Yeah. This topic warrants that. I think we’re only going to scratch the surface. But those are outstanding viewpoints that you shared, Olivier. First of all, let’s look at the big picture here. I agree. This is not an existential threat to NVIDIA by any stretch of the imagination. And yes, their portfolio has diversified over the last few years. It’s no longer only about GPUs, so to speak, or at least there’s that perception. It’s the software and the services. It’s definitely been diversified. I think one important takeaway from CES, among many, was what NVIDIA’s doing with Cosmos. That is, taking real-world video and applying it to AI training and capabilities so that you have outcomes that can be very useful in just that, real-world settings. How to optimize, say, what’s going on in a warehouse, as an example, and advancing robotics, and so forth. A lot is going on here.

I think what is going on with DeepSeek, and for that matter Project Stargate, is important to note here. It’s like the frothing of the AI hype cycle. To your point, what about the half-trillion dollars that’s been allocated toward Project Stargate? What are the implications of much more efficient AI capabilities throughout the AI ecosystem? This is a variation of Jevons Paradox: if you make something a lot more efficient, demand for it will actually increase, because it becomes more affordable and more broadly available. To your point about hybrid AI, a lot of the heavy-lifting AI training that’s done in GPU clusters in data centers throughout, say, hyperscaler networks can now be done on a more distributed basis in edge data centers, and so forth. But also, to your point about devices, yes, the AI inferencing, let alone some of the training, can now be done at the outer edge, and that includes devices powered by Qualcomm, ARM, MediaTek, and Apple, as well as across the broader edge, including Broadcom, Marvell, AMD, you name it. This doesn’t necessarily mean you have to enlist NVIDIA GPUs to do this more distributed edge training and inferencing, but it’s really just that: welcome news for the entire AI ecosystem, as well as for the other chip players out there.
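To make the Jevons Paradox point concrete, here is a toy arithmetic sketch. All of the dollar figures and the demand multiplier are made-up assumptions for illustration, not forecasts or real market numbers.

```python
# Toy illustration of Jevons Paradox for AI compute (hypothetical numbers).
# If training cost per model drops 10x and demand is elastic enough,
# total compute spend can rise rather than fall.

cost_per_model_before = 10_000_000   # $10M per training run (assumed)
models_trained_before = 40           # runs the market funds at that price (assumed)

cost_per_model_after = 1_000_000     # 10x efficiency gain (assumed)
models_trained_after = 1_000         # cheaper training unlocks 25x more projects (assumed)

spend_before = cost_per_model_before * models_trained_before   # $400,000,000
spend_after = cost_per_model_after * models_trained_after      # $1,000,000,000

print(f"Spend before efficiency gain: ${spend_before:,}")
print(f"Spend after efficiency gain:  ${spend_after:,}")
```

The paradox only holds when demand is sufficiently elastic; if cheaper training unlocked only a handful of additional projects, total spend would fall instead.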

On that note, let’s now segue to the next topic, which is really AI at the device level. As we saw at Samsung Unpacked, which you were able to join, Samsung announced the Galaxy S25 Ultra, the Galaxy S25 Plus, and the Galaxy S25, all aimed at setting a new standard for true AI companionship grounded in the context of our mobile experience. I think that’s a very poignant way to position and market this at the onset. What we saw is that Samsung introduced multimodal AI agents, and as a result the Galaxy S25 series is really a first step in Samsung’s vision to change the way users interact with their phone, and also with the real world, for that matter. It’s using a customized Snapdragon 8 Elite mobile platform to attain a lot of these goals. The Galaxy chipset is designed to deliver greater on-device processing power for Galaxy AI, and improved camera range, and so forth, let alone Galaxy’s ProVisual Engine. I’m going to stop there because you were there. You have, I know, an in-depth perspective on this. What were some of your key takeaways from what Samsung announced with the new Galaxy S25 line?

Olivier Blanchard: Right. It was a really interesting event. Of all the Samsung Unpacked events that I could have gone to, this was probably one of the more significant, and I think somehow a lot of that significance got missed in the coverage of the event. Let me give you my take on this and why I think it’s so important. As an analyst and as a practitioner, I’ve been operating under an assumption about where AI is going and how: that ultimately what we want is ubiquitous AI. Device-agnostic AI, where you walk into a room, and it doesn’t matter where you are, the device that is nearest you and most capable of delivering the agentic AI experience you’re expecting as a user, and doing it most efficiently, is going to be the one to deliver it. If I’m talking to an assistant and prompting it and saying, “I wonder what the weather is today.” Or, “What are my appointments this morning?” It doesn’t matter if it’s my watch, my headphones, my smart glasses, my PC, my smart speaker, whatever it is, that’s where it’s going to go. With agentic AI specifically, one of the really interesting user experience advantages, or value propositions, is this hyper-personalization where your agents learn from you. They learn from your habits, they learn from your needs, from your patterns. They become like your best friend. They know what you want, in what format, in what style, and at what speed before you know it, but definitely when you prompt it. They can anticipate your needs and adjust your environment, your workspace, your calendar, your shopping planning, all of it for you.

However, what that requires is a lot of device-to-cloud integration. I’m not going to get into the specifics of how that needs to work from an architectural standpoint and an orchestration standpoint, but it’s extremely complicated. It requires some data that is on-device to be cloned in the cloud and to be accessible both in the cloud and on-device, so that there’s no lag between the prompt and the response. You can have natural language conversations with your agent and your AI assistant without having to wait for a response. Or, “Hey, I’m thinking about it. Give me a second.” What Samsung did, though, is something very different from that, which is super, super interesting. At first I thought, “Okay, this is the wrong approach.” Then the more I thought about it, the more I realized, “Wait a minute, this is actually smart.” What Samsung did is they prioritized personalization, and data security, and data integrity and safety by essentially moving a lot of those processes, the training and the inference, to the device itself and blocking it from the cloud.

Essentially, they have two parallel systems. Because they’re Android devices, there’s Gemini, which is the Android device-to-cloud AI platform. That remains untouched. Anything that you do with search is going to go out through Gemini, and you have this normal integration. But all of the super personal stuff that it’s learning about you, essentially the personality graph that an agentic AI needs to be able to build and train itself on in real time based on your needs, is super personal. Samsung made the decision to keep all of that on the device, secure, with no contact with the cloud. Nobody else is going to train on your data, nobody’s collecting your data. Google is not collecting that private data. It’s on your device. Not only that, but it’s protected by their Knox security solution. And it uses post-quantum encryption, which means that, at least in theory, I haven’t validated this, but based on what Samsung is telling us, it’s not decryptable with quantum computers, which is really nice.
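Conceptually, the split described here looks something like the routing sketch below. This is a hedged illustration of the general pattern, not Samsung’s or Google’s actual implementation; every function and field name is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    touches_personal_graph: bool  # does it need the private personality graph?

# Hypothetical handlers: neither name reflects a real Samsung or Google API.
def run_on_device(req: Request) -> str:
    # Inference against the locally stored, encrypted personal context.
    return f"[on-device] {req.text}"

def run_in_cloud(req: Request) -> str:
    # General queries (search, world knowledge) go out via the cloud model.
    return f"[cloud] {req.text}"

def route(req: Request) -> str:
    # Privacy-first split: anything touching personal data never leaves the device.
    return run_on_device(req) if req.touches_personal_graph else run_in_cloud(req)

print(route(Request("What's on my calendar today?", touches_personal_graph=True)))
print(route(Request("What's the weather in Barcelona?", touches_personal_graph=False)))
```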

Pros and cons of that. The pros are obviously data security: all your data remains private, and you have, I think, this added sense of privacy that you wouldn’t normally have with a lot of AI products and solutions. It’s tied directly to Samsung. The con is that if you lose your device, you have to start all over again with all of that agentic training. But device-to-device, when you upgrade from an S25 to the S26, or whatever the next generation is, you’ll be able to carry it over. You just can’t upload that private data to the cloud. All of this is possible because the chip that they’re using, the SoC, the system on chip, is capable of doing it. In this case, it happens to be Qualcomm’s Snapdragon 8 Elite, which is the flagship platform that Qualcomm introduced at its summit back in October of last year, so it’s the latest and greatest. But it is a custom chip made specifically for Samsung. It’s actually the Snapdragon 8 Elite for Galaxy, which has a few additional bells and whistles for some camera improvements, and also for this enhanced agentic AI on-device capability.

But it illustrates, I think, the new paradigm of this distributed agentic AI, where sure, you can train a lot of models and do a lot of things in the cloud, and there are things that work best in the cloud and should operate that way, just as a cloud service. But there’s also a really huge leap forward in the capabilities of these ARM-based chips that are in mobile phones, in PCs, that increasingly are showing up in smartwatches as well, in smart glasses, in essentially every digital product that we touch. They’re more and more capable of doing this. The size of the models that you can train and do inference workloads with on a PC, versus a mobile device, versus smart glasses depends a little bit, first of all, on the system on chip itself, but also on the size and the form factor. You can put a lot more processing power in a PC than you can in a phone, and you can put a lot more processing in a phone than you can in smart glasses or in a smartwatch. But you also have to think about how all these devices work together, and how they can pool their processing resources so that, instead of processing something on the watch, you’re processing some of it on the watch, some of it on the phone, some of it on the PC.

A platform like Snapdragon, which is in all of these devices, might be able to work all together more efficiently than cross-platform combinations to give a user an enhanced AI experience that is primarily on-device, or that can separate out the workloads that should be pushed to cloud services from the ones that should stay private. For a variety of reasons: for speed and efficiency, for power efficiency, for cost efficiency, but also for privacy. It has implications for consumers, you and me, just wanting to keep it private, and cheap, and not having to pay for all of this cloud inference stuff. But also on the commercial side for businesses, because the more data they can keep in-house, the more secure their data might be. And the more processing they can do in-house, the lower the costs. They don’t necessarily have to pay for as many instances of training workloads that they’re pushing out through a cloud service; they can keep a lot of that stuff in-house.
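As a sketch of what that kind of cross-device workload placement might look like, the snippet below picks the cheapest local device that can fit a model and falls back to the cloud only for non-private work. The capability numbers are made-up assumptions, and nothing here reflects a real Snapdragon scheduling API.

```python
# Hypothetical device "pool" (watch, phone, PC) with assumed model-size
# ceilings in billions of parameters and relative running costs.
DEVICES = {
    "watch": {"max_model_b": 0.5, "cost": 1},
    "phone": {"max_model_b": 3.0, "cost": 2},
    "pc":    {"max_model_b": 13.0, "cost": 3},
}

def place_workload(model_size_b: float, private: bool) -> str:
    """Pick the cheapest local device that can fit the model;
    fall back to the cloud only for non-private workloads."""
    candidates = [
        name for name, spec in DEVICES.items()
        if spec["max_model_b"] >= model_size_b
    ]
    if candidates:
        return min(candidates, key=lambda n: DEVICES[n]["cost"])
    if not private:
        return "cloud"
    raise RuntimeError("Private workload too large for any local device")

print(place_workload(0.3, private=True))    # watch
print(place_workload(7.0, private=True))    # pc
print(place_workload(70.0, private=False))  # cloud
```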

Ron Westfall: Yeah. I think it’s good news for, say the health monitoring use case.

Olivier Blanchard: Yes.

Ron Westfall: It aligns. That’s, I think, one constant theme. We saw it with the launch of Project Stargate, when Larry Ellison stepped up and said, “Hey, this AI investment is warranted because of advances that could be made in medical research, or tracking medical records, and so forth,” with that privacy respected and built in. Likewise, it’s about having a device that somebody can have confidence in: “It’s going to maintain my privacy,” but it can also do just that, detect issues before they get out of control. Just across the board, any health benefits out there. That, I think, is certainly the good news.

Olivier Blanchard: That was one of the main use cases. I think it might have been under-indexed a little bit at the announcement itself, but in the pre-briefings we spent a lot of time on that particular use case. Having all of that data and that agentic AI processing, analyzing, and recommendation engine on the devices as opposed to in the cloud, for Samsung specifically but I think also as a concept, is first of all more immediate. But also, especially with medical issues, I think people want their data to be protected. We’ve all been burned so many times with hacks and our private data, especially medical data, getting out into the wrong hands or just getting out in public. This takes care of that. You have a personal assistant that is able to, through the use of different devices and sensors, whether it’s a Galaxy Watch or a ring, or other sensors, essentially guide you and give you feedback on how well you’re sleeping, for instance, your eating cycles, what your caloric intake is. We were also talking about a more granular approach to your diet, which you can enter manually. But also, the AI knows what you’re eating, so it can extrapolate the types of nutrients that you’re getting and not getting. It’s creating and painting a much more, again, granular, and detailed, and complex picture of your overall health. It is intelligent enough to understand your cycles, understand your patterns, understand cause and effect of good and bad behaviors, make recommendations, monitor your health. It’s amazing because it’s all on the device and it’s all secured.

I love this. It moves some of, I think, the real, true, immediate benefits of agentic AI out of the cloud in areas where it doesn’t need to be in the cloud. Some things need to be in the cloud, some things shouldn’t be. We now have the capability of parsing that according to our needs and according to our preferences. That changes the equation a little bit for how we think about AI investments. Again, it speaks to that diversification: focusing on cloud AI training and inference, but also focusing on these edge use cases that are becoming much more prevalent and have huge potential for adoption. Look at the footprint of the mobile industry and the PC industry: even if those numbers, essentially the install base, don’t really grow that much, there’s a refresh cycle here that’s going to bring a lot of these AI chips to this broad install base. The market doesn’t actually have to grow to show results. It’s the refresh cycle that’s going to show those results and push a lot of these AI chip numbers up. That’s what I’m hopeful about.

Ron Westfall: Yeah. I think that’s a great segue to who else can benefit from the AI ecosystem’s overall growth. This ties directly to, well, 5G service providers, certainly the folks who provide mobile services to consumers and businesses, and specifically here in the US. What I think is interesting is that we’re seeing major US operators, such as AT&T and Verizon, looking into, “Okay, how can we play a more integral role and monetize AI in terms of being closely linked to the services we provide?” It’s not exclusively mobile, but it certainly includes, for example, their fiber services and business services. I think what’s interesting here is that they have the real estate that you touched on, Olivier. How do we push more AI capabilities, training and inferencing, closer to where the customer is? Bringing the AI to where the data is is certainly the mantra that we have heard a great deal about.

What’s interesting is that Verizon Business has now revealed a bundle of products designed for enterprises, and also cloud providers as well as hyperscalers, to deploy those AI workloads at scale by offering a single platform designed to do just that. It’s called Verizon AI Connect. It offers a blend of the operator’s fiber infrastructure along with its power, space, and cooling resources, and it’s backed by its virtualized, programmable 5G network to make the AI workloads more, I guess you’d say, customized to customer needs out there. What’s interesting is that Google Cloud and Meta, Meta Platforms specifically, have already onboarded. I think that’s important, because these are logically going to be the early adopters of AI infrastructure platforms such as this that are offered by a major service provider. Not to mention, Verizon and Google are also looking at ways to advance AI services for things like network maintenance, as well as anomaly detection.

This is in play. This is something that I think will help the service providers: “Okay, finally, we have a way to use our real estate to take advantage of AI.” This is not necessarily going to be a repeat of things like mobile edge computing and so forth, but there’s still that risk. The operators still may not be able to figure out, “How can we be integral to this?” But I think this demonstrates that they’re off to a decent start. Not to be outdone, AT&T recently secured $850 million from the sale and leaseback of its under-used central office (CO) facilities with the property company Reign Capital, as part of its copper retirement plan. The deal closed in January and involves the transfer of 74 properties across the US, spanning more than 13 million square feet.

The bottom line here is that the operators are becoming smarter about how they can take advantage of the real estate assets they have today, that is, retiring CO assets as well as other edge infrastructure, and working with the major AI players out there, i.e., the hyperscalers, to come up with mutually beneficial ways to monetize AI in the near future, and certainly longer term. Olivier, what are your thoughts on this? Do you see the service providers really being able to step up and actually play a meaningful role in this? Or is this another missed opportunity, where the service providers will be reduced to commodity-like providers of the infrastructure for the AI services running across the clouds out there?

Olivier Blanchard: Maybe a little bit of both? Yeah. I think it’s smart for them to do it. The situation that we’re in is this: obviously, if we’re looking to spend half a trillion dollars on building data center infrastructure, it means that we’re looking for a lot more compute to come from somewhere. A lot of these builds are years down the road; we can’t really start yet. The expectation is that we’re going to need a lot more compute power very quickly. Where can we find it? If you have data centers already out there that are under-utilized, or a lot of that processing power can be re-tasked for higher-priced or more premium services, it makes sense for a business to go for that ROI, for that opportunity. One, if a lot of those data center resources are under-utilized or not used at all, suddenly they can be assigned to this. Two, if we can charge a little extra for it because AI is more valuable than being on a 5G network, maybe there’s value in that as well. I think, though, that you’re right. Ultimately, whether it’s successful or not is going to depend on them, and how they package it, and there are a lot of variables there.

I think at some point, those data centers and those resources age out. They’re no longer the most efficient, the most cost-efficient, and maybe they just get retired. Or the Verizons and AT&Ts of the world upgrade their systems specifically for those AI workloads, and now we start seeing a transition of spend where they also become … They’re buying the Blackwell-powered racks and becoming more AI-focused, and they become part of this ecosystem of AI workload training and inference services, and it’s all interwoven. Maybe that’s possible. But for right now at least, for the next few years, while we wait for all of these massive data center builds, they’re there with capacity. Absolutely they should go after that market and see what they can make of it. Worst case scenario, it doesn’t work at all. Middle case scenario, they make some money, it becomes commoditized, and eventually it just dies out because they’re outperformed by other outfits. Best case scenario, they build a whole new business model that’s going to be really lucrative for them. So it’s worth the shot.

Ron Westfall: Yes. I think this is, again, AI and its open-ended possibilities.

Olivier Blanchard: Yeah.

Ron Westfall: There’s just a myriad of variables here. On the one hand, that’s good news for those service providers. Certainly, test-time scaling has introduced the possibility that, okay, the AI we do can definitely become more commoditized, as you pointed out. Thus, that’s good news for the service providers: it’ll just become a lot less expensive to do the AI infrastructure hosting and so forth. The however is, will this result in monetized outcomes for them? Again, the three doors that you presented are all possibilities, and it’s hard to bet right now on the operators, because the entire AI ecosystem is basically going through a lot of flux as we speak. We’ll come back to this.

Olivier Blanchard: Yeah.

Ron Westfall: We’ll talk more about it.

Olivier Blanchard: Yeah. We should come back to this six months from now and see where we are.

Ron Westfall: Definitely. Or, let alone, say, after Mobile World Congress. A lot could change just from that. This has been great. Thank you so much, Olivier, for coming on board. Again, I appreciate you taking the opportunity to share your thoughts.

Olivier Blanchard: Yeah. Thanks for having me on. I am not going to Mobile World Congress as of now, this year. I’m skipping it. I’ll probably go next year. But I’ll be looking forward to the announcements surrounding MWC because I’m sure this very topic is going to be one of the major themes of the trade show.

Ron Westfall: That we can bet on.

Olivier Blanchard: Yeah.

Ron Westfall: That’s something I think we can all agree on. They might as well call it AI World Congress for the time being. Well, great. I know we’ll certainly be sharing thoughts on Mobile World Congress. Beyond that, thank you, everyone, for joining The 5G Factor. Again, you can bookmark us on The Futurum Group website, and we can also be viewed on Techstrong TV. We certainly appreciate you taking the time to listen to our thoughts. With that, everybody have a great AI, and test-time scaling, let alone 5G, day. Again, thank you all.

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.

Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.

Research Director Olivier Blanchard covers edge semiconductors and intelligent AI-capable devices for Futurum. In addition to having co-authored several books about digital transformation and AI with Futurum Group CEO Daniel Newman, Blanchard brings considerable experience demystifying new and emerging technologies, advising clients on how best to future-proof their organizations, and helping maximize the positive impacts of technology disruption while mitigating their potentially negative effects. Follow his extended analysis on X and LinkedIn.
