
5G Factor: Making AI Open, Responsible, and Transparent

In this episode of The 5G Factor, our series that focuses on all things 5G, the IoT, and the 5G ecosystem as a whole, we review the major mobile ecosystem moves to make AI open, responsible, and transparent, including Intel’s Enterprise AI proposition emphasizing open industry software for developer productivity, Qualcomm’s Responsible AI vision focused on privacy and security, and Nokia’s use of transparent AI across its audio portfolio development.

Our analytical review spotlighted:

Intel Enterprise AI Strategy Emphasizes Openness. At Intel Vision 2024, Intel sharpened its Enterprise AI proposition across critical sectors such as finance, manufacturing, and healthcare, which are rapidly seeking to broaden accessibility to AI and transition generative AI (GenAI) projects from experimental phases to full-scale implementation. We delve into why the Gaudi 3 AI accelerator launch underpins Intel’s vision by bringing together an AI-dedicated compute engine, a memory boost for LLM capacity requirements, efficient scaling for enterprise GenAI, open industry software for developer productivity, and Gaudi 3 PCIe capabilities. We also examine Bharti Airtel’s selection of Intel’s AI technology to enhance its telecom data and improve customer experience, and Infosys’s collaboration to bring Intel technologies to Infosys Topaz, an AI-first set of services, solutions, and platforms that seek to accelerate business value using GenAI technologies.

Qualcomm Emphasizes Privacy and Security in Responsible AI Vision. Qualcomm, in a recent blog by Durga Malladi, SVP & GM, Technology Planning & Edge Solutions, detailed its vision for shaping the future of AI responsibly, spotlighting privacy and security as one of the core principles of responsible AI. This is especially pertinent because, as AI systems collect and analyze vast amounts of data, it is essential to protect individuals’ privacy rights and ensure the security of sensitive information. We assess why Qualcomm, by promoting transparency, is effectively targeting the fostering of trust that can enable users to make informed decisions about AI technologies, ensuring that they align with their values and expectations.

Nokia Attains Audio Portfolio Gains Through Transparent AI. Nokia’s application of transparent AI in audio product development, such as OZO Audio and Immersive Voice, plays an integral role in providing mobile device owners with immersive, clear, and focused sound. This level of quality requires extensive exploration, analysis, and testing at every step of the development process. Many of the smart features of these audio solutions, such as noise reduction, are developed by training algorithms. We delve into how Nokia’s research and engineering team, by running audio data through machine learning (ML) models, can analyze the output and tweak the algorithms until the team gets the desired result for its product. This includes using the Simple Linux Utility for Resource Management (SLURM), a cluster management, resource allocation, and job scheduling system that shows which user ran a specific job and with what resources. Together with the Apptainer container system, SLURM allows Nokia to transfer its code transparently between the various allocated resources in the company’s data center.

Watch The 5G Factor show here:

Listen to the audio here:

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Transcript:

Ron Westfall: Hello and welcome, everyone, to The 5G Factor. I’m Ron Westfall, Research Director here at The Futurum Group. I’m joined here today by my distinguished colleague, Olivier Blanchard, and he’s our Research Director focused on very important areas such as devices and semiconductors, including 5G naturally. Today, we will be focusing on major 5G ecosystem developments that have caught our eye and that will actually include, you guessed it, AI. We’re actually looking at a distinct aspect of it. It really is going to be a conversation about what is being characterized and being promoted as responsible AI, as well as transparent AI and the variations very close to that. And so, with that, Olivier, thank you so much for joining. How have things been coming along since our last episode together?

Olivier Blanchard: We’ve had eclipses, we’ve had monsoons, we’ve had earthquakes, but I’m good.

Ron Westfall: Yeah. We’ve lived to tell the tale. And so yes, welcome to the post-eclipse era. Hopefully, AI will play a major role here in making it just a more responsible and, say, environmentally friendly era as a result. And so, with that in mind, we recently participated in the Intel Vision 2024 event. Really, at that event, it was Intel coming out and presenting its AI proposition on a portfolio-wide basis. In fact, on an ecosystem-wide basis, that pretty much underlies its bringing-AI-everywhere vision, extending to basically all the players out there, including partners, customers, anybody who’s involved with the technical aspects of making AI work. And so, what I think was very interesting is that it focused on critical sectors that included finance, manufacturing, healthcare. But also, there was plenty of material related to the mobile ecosystem, especially when it came to generative AI capabilities, that is, gen AI, and taking it from experimental phases to full-scale implementation.

That really is the bottom line. It’s about going from proof of concept and bringing it to a product capable… Well, an offering or a productization that is having an impact on the organization’s, say, finances and monetization. In fact, Intel shared, I think, a very important takeaway from their own research. Only 10% of organizations have taken their gen AI, basically, tire-kicking proof of concepts and so forth and have them in a production environment today. So we’re very much at the front end of this. This is something that is clearly going to be evolving dramatically in 2024 and beyond. But also, I think it’s important to note that we want to see more than 10%. In fact, some folks even thought, “Wow, 10% is a pretty progressive number.” But that aside, I think what is important is what Intel is offering. To spearhead it, Intel introduced the Gaudi 3 accelerator that is designed really to enable open community-based software and open industry-standard Ethernet to really enable AI systems to take off, to get past that 10% threshold and make gen AI capabilities more production ready and friendly.

And so, how is Intel doing that? Well, they’re basically offering an architecture that delivers improved gen AI performance and efficiency, especially in relation to existing implementations. As we know, that means NVIDIA’s AI capabilities, for example, the H100 GPUs. But also, H200 and on the horizon, the Blackwell offering. And so, this is going to be a very interesting contest. Because in addition to the fact that we know that there’s a supply shortage really when it comes to NVIDIA GPUs, we’re seeing Intel coming up and saying, “Hey, look, not only can we address this fundamental issue but we can come in with something that can be actually more rewarding from price performance and other aspects.” That is, again, the openness. Being able to leverage capabilities that align with ethernet networking. And so, with that in mind, it’s designed to allow activation of all the engines that are being used out there in parallel. That includes the Matrix Multiplication Engine or MME, Tensor Processor Cores, TPCs, as well as Network Interface Cards or quite simply, NICs.

We’re all familiar with these engines and how they fundamentally need to come together to allow gen AI to do its thing in an optimized fashion. And so, the key features include, first of all, an AI-dedicated compute engine. The Intel Gaudi 3 accelerator is purpose-built just for that high-performance, high-efficiency gen AI compute. Second and next is a memory boost for LLM capacity requirements, specifically 128 gigabytes of HBM2e memory capacity in combination with 3.7 terabytes per second of memory bandwidth and 96 megabytes of onboard static random access memory, or SRAM. And why this is important is because we’re seeing today that it’s really the memory supply that is, in many key ways, wagging the prices of GPUs and how they are built and put together. And so, what Intel is proposing is a more efficient way to use memory and also quite simply a more cost-effective way to use memory. And so, this could be, I think, one of the key differentiators for Intel when it comes to adopting Gaudi 3 AI accelerators in combination with Xeon CPUs.

Now, also, in addition, they’re looking at quite simply efficient scaling for specifically enterprise environments, which also includes, and we’ll touch on this, operators as well as other mobile ecosystem players. And so, that’s an offering that has 200-gigabit Ethernet ports that are basically integrated into every Gaudi 3 accelerator. Next up is the fact that, again, it’s the openness; it’s an alternative to NVIDIA’s CUDA approach, and that is something, I think, that is very keen out there. That’s not to say that support for NVIDIA’s InfiniBand capabilities is going to go away overnight. But I think everyone, including NVIDIA itself, is seeing the writing on the wall: this is something that’s going to increasingly become an Ethernet fabric that’s going to be required to enable this heavy lifting of, for example, gen AI and AI training as well as overall inferencing. So this is something that Intel, I believe, from the inception can take advantage of. It doesn’t have any InfiniBand legacy issues to really address in terms of how to move forward in an optimal way. And so, this naturally includes integrating the PyTorch framework as well as providing optimized Hugging Face community-based models. And we find that this is really the most common AI framework out there for gen AI developers.
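To make that developer on-ramp concrete, here is a minimal sketch of the PyTorch-plus-Hugging Face workflow described above. The habana_frameworks import and the "hpu" device name follow Intel's published Gaudi software documentation; the model choice and prompt are purely illustrative, and the sketch falls back to CPU on non-Gaudi hardware so it stays runnable anywhere.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# On a Gaudi machine, importing habana_frameworks registers the "hpu" device
# with PyTorch (per Intel's public Gaudi documentation); elsewhere we fall
# back to CPU so the sketch still runs.
try:
    import habana_frameworks.torch.core  # noqa: F401  (Gaudi-specific import)
    device = "hpu"
except ImportError:
    device = "cpu"

# Any Hugging Face causal LM works the same way; gpt2 keeps the download small.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

prompt = "Generative AI in mobile networks can"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The design point is that the model and tokenizer code is identical on Gaudi and CPU; only the device string changes, which is what the open-software pitch amounts to for developers.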

That’s going to be simply a key aspect here, is getting the developers on and making sure that they can have a good experience in terms of enabling gen AI capabilities using the Intel platform. Also, the Gaudi 3 PCIe, basically it’s the Peripheral Component Interconnect Express card that is really designed for lowering power. Obviously, that’s going to be another major aspect as we’ve seen current GPU cluster designs can take quite a lot of power to really operate. Yes, there are some improvements on the horizon. But here is a way to really, I think, deliver even more power efficiency within a reasonable timeframe that is within calendar 2024. Now, the direct mobile ecosystem impacts include the fact that Bharti Airtel is on board using Intel’s AI technology across the portfolio. And so really, it’s aimed at naturally leveraging their existing telecom data to enhance its overall AI capabilities to improve the experiences of its customers and partners. So this is, I think, a direct example of how a major operator is basically operating as a large enterprise and is already looking at ways to leverage Intel’s new capabilities.

In addition, Infosys was also prioritized as a new customer on board. As we know, they provide digital services and consulting, including the targeting of the mobile ecosystem as well as operators. They’re basically using Intel technologies that span the 4th and 5th Gen Intel Xeon processors as well as the Gaudi 2 AI accelerator today, as well as Core Ultra, to support Infosys Topaz, which is an AI-first set of services, solutions, and platforms that seek to accelerate business value using gen AI technologies. So yeah, that’s a lot of information and that’s actually a summarization of some of the key takeaways from one of the major announcements. And so, I’ll stop there because I know there’s a lot more to talk about here. Olivier, from your view, what Intel’s doing in terms of bringing AI everywhere and how it’s approaching it, its impact in the 5G ecosystem, what are some of your key takeaways?

Olivier Blanchard: Well, that was quite the thorough summary, I have to say. If that was the summary, that’s terrifying. No, no. It’s pretty great. Intel… Well, okay, two things. One, I just want to throw in there that we talk about NVIDIA and the H100, H200, and other devices or other products on the horizon. But I do want to give a shoutout to AMD because they have the MI300X solution which is also a pretty good equivalent somewhere between the H100 and the H200, depending on what you want to do. That’s also part of the ecosystem and that can and I think will fill some of the demand gaps that NVIDIA can’t fill. So I know that we just glance over AMD quite a bit but… They have a good product out there as well that’s getting traction. So that’s a little footnote that’s always in the back of my mind when we talk about NVIDIA like this… I know they have 90+% of the market right now but not necessarily forever.

The second thing though, more importantly, is I am… I do like the not-necessarily-use-case-specific design and thinking from pretty much every other silicon maker, like, “How could we compete against NVIDIA or how can we compete alongside NVIDIA?” And so, this focus on specific verticals, on specific types of applications, on the range of performance, and finding products that are going to be not necessarily overkill and not be underwhelming, but fit in that framework of, “This is the type of application. This is the kind of performance and bandwidth that we’re looking at. And so, we’re going to create products specifically to fit in that tier.” And just create products and solutions, like combined solutions, that aren’t necessarily premium to commodity but that are extremely well tuned to fit in these boxes and these blocks, depending on what you’re trying to do. So it could be the operators. It could be 5G network stuff. It could be inference work. It could be training on LLMs, but maybe not the super huge LLMs. It could be even the LMMs, the mixed models, not just the language models. So what Intel is doing, I think, is smart.

I’d like their… I don’t know that I’d call it a diversification strategy but definitely a more organic, more use case-focused approach to generative AI, to fabs even. Just what Intel is doing to get out of its rut of the last few years and sort of… I wouldn’t say jumpstart its revenue engine, its business engine, but definitely create more on-ramps to opportunities is really smart. This is a really good example of Intel doing that and its ability to execute. I think what people sometimes forget about Intel, even if they don’t necessarily have the number one, the best solution that everybody wants, is they’re extremely good at executing. They’re extremely good at finding those niches and those on-ramps to get back in the game when they’ve taken their eye off the ball for a couple of years which happens to pretty much every company. And so, this was exciting news in my view and it’s exciting news for the carriers as well.

Ron Westfall: Yeah. In fact, I think those were all very key topics that were at play at the event. It’s like we have to look at Intel overall. And we know, “Okay. When it comes to the financials, it’s a 90-day scorecard,” but that’s different from the long view, the strategic implications of building foundries here in the US. And so, that makes Intel unique in terms of how they are competing against the other players. That, I think, will prove a very fundamental difference further down the line to the point where these competitors will be turning to Intel for their foundry work. And I think that… It was an excellent point, Olivier, about Intel executing. Because we know that a few months back, Intel basically unveiled its “Bringing AI Everywhere Campaign.” Here at Intel Vision, it was, “Okay. This is how we’re executing. These are the details in terms of our portfolio, our partnerships, our overall strategy to really make enterprise AI not only a production-ready proposition but to fundamentally ensure positive business outcomes or certainly improving them.”

And so, it’s pretty much the first inning in terms of what is going on here, in terms of the competitive mix, and answering the question, “Why Intel?” And that certainly played into their emphasis on the openness aspect. I think that is going to resonate more and more. Now, clearly, for example, OpenVINO will play an integral role in enabling not just the developers but also other parties to support these capabilities. And that aligns with what we kicked off with responsible AI, transparent AI. You definitely need that openness built in as part of making sure that those types of initiatives and strategic objectives actually are going to be built in, organic as you put it. Or quite simply, make sure that these outcomes are going to minimize things like drifting and hallucinations. But also, making sure that the bad actors don’t flip this on the ecosystem. Something that the rest of the good guys, which I would count as us, would certainly not want to see.

And so with that in mind, let’s turn to another important, I guess you can say, coming out that was aligned with this. And that is, in a recent blog, Durga Malladi, the Qualcomm SVP and GM for Technology Planning and Edge Solutions, detailed Qualcomm’s vision for shaping the future of AI responsibly. Now, they’ve certainly been emphasizing their commitment to AI for years now, and this is definitely part of their DNA and something that they’ve certainly been paying attention to. But I think it’s also helpful that this blog is a sharpening of how Qualcomm is going to make sure responsible AI is going to be primary in terms of how AI will evolve, including, naturally, gen AI. And so, one of the core principles of responsible AI is privacy and security. I think that’s understood across the board. But it’s like, “Okay. How can this be quite simply implemented?” As AI systems collect and analyze these vast amounts of data, it’s going to be essential to protect individuals’ privacy rights and also ensure the security of sensitive information.

And so, from our view, by promoting transparency, Qualcomm is targeting the fostering of trust and also enabling users to make informed decisions about AI technologies, ensuring that they align with their values and expectations. So we definitely need to basically dial back some of the wild west aspects here. It’s inevitable when we have these innovation bursts. But I think this is going to be quite simply essential in terms of making sure that AI delivers on an optimal basis, but also as promised by many of the key players out there. So what Qualcomm is doing in partnership with Truepic and with the Coalition for Content Provenance and Authenticity, or C2PA… No, it’s not a Star Wars robot, but C2PA, is that they’re advancing the way they verify the authenticity of, for example, photos. And so, we know that’s going to be very important when it comes to minimizing fakes, and that’s certainly going to be a factor, for example, in the 2024 elections here in the US and in other parts of the world. So this is something that’s vital.

By using Snapdragon platforms, we see that technology can create a cryptographic seal around photos taken from a smartphone. And so, what this does is that the seal not only includes essential metadata like the date, the time, and location, but it can also verify if AI was used and the specific type that was employed. I think many people would fully appreciate that, and thus prevent some of the games that are going on out there with deepfakes and so forth. Now, for example, if generative AI was used to manipulate the image, the digital seal can accurately detect it. Even during the transfer of the image to another device, the certificate remains intact, ensuring the preservation of that integrity. And so, from my view, this collaboration exemplifies Qualcomm’s commitment to transparency and upholding the integrity of content in the era of gen AI. And so, with that, I know, Olivier, you’ve certainly been looking at this in detail. What’s your view on Qualcomm coming out here and saying, “Here are some key ways to make sure that AI is going to be just that: ethical, responsible, and transparent”?
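For readers who want to see the underlying idea, here is an illustrative sketch of how a cryptographic seal can bind an image's pixels to its metadata so that any later manipulation is detectable. It mirrors the C2PA concept discussed above but is not Qualcomm's or Truepic's actual implementation; the key handling, metadata fields, and the Ed25519 signature scheme are all assumptions chosen for illustration.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def make_seal(image_bytes: bytes, metadata: dict,
              key: ed25519.Ed25519PrivateKey) -> bytes:
    # Bind the metadata (time, location, AI-edit flags) to the exact pixels
    # by signing a digest of both together.
    payload = hashlib.sha256(image_bytes).digest()
    payload += json.dumps(metadata, sort_keys=True).encode()
    return key.sign(payload)


def verify_seal(image_bytes: bytes, metadata: dict, seal: bytes,
                public_key: ed25519.Ed25519PublicKey) -> bool:
    payload = hashlib.sha256(image_bytes).digest()
    payload += json.dumps(metadata, sort_keys=True).encode()
    try:
        public_key.verify(seal, payload)
        return True
    except InvalidSignature:
        return False  # pixels or metadata changed after capture


key = ed25519.Ed25519PrivateKey.generate()  # in practice, a device-bound key
image = b"raw image bytes captured by the camera pipeline"
meta = {"time": "2024-04-09T12:00:00Z", "gps": "36.17,-115.14", "gen_ai_used": False}

seal = make_seal(image, meta, key)
print(verify_seal(image, meta, seal, key.public_key()))                # True
print(verify_seal(image + b"tampered", meta, seal, key.public_key()))  # False
```

The second check fails the moment a single byte of the image or metadata changes, which is the property that lets a recipient detect manipulation after transfer.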

Olivier Blanchard: Right. Well, I think it’s the type of leadership that we just really need in the industry. And so, it works on two levels. One, it works on a level of Qualcomm being a responsible technology company. Not only providing the generative AI capabilities on their devices which we’ve talked about many times on this podcast. Where essentially, the whole on-device AI capabilities that we’re seeing. So the transition from generative AI just being on the cloud where you’re on your device and you’re doing your work through an app that actually works in the cloud and sends the information back. A lot of what Qualcomm has been working on is having generative AI work directly on the device. It doesn’t leave the device. It doesn’t go to the cloud. It’s all there for security reasons, for speed reasons. There’s almost no lag. So we’re going to see the arrival of devices, not just mobile phones, but also PCs and tablets that can run a lot of these generative AI apps on the device without any connection to the internet.

So it’s important for Qualcomm to take a stance there and to create these certificates, these modalities, these processes that capture, first of all, what type of manipulation has been done to an image or to a video. And also, translate it for users to be able to say, “Okay. Like this…” Whether you’re the New York Times or anybody else, being able to look at an image or a video and see exactly how it’s been manipulated. If it’s just been edited for contrast and lighting, it’s just a little bit of a color adjustment, or if it’s actually been manipulated to change its intent or its content in a way that’s substantial as opposed to just aesthetic. The other way that it works, I think, is establishing Qualcomm as a leader in the generative AI space. So it goes beyond the ethics. It also goes to brand positioning and company positioning with regard to perhaps investors but also industry partners and the public at large, which is something that I think is clever. It’s something that more companies should do. But we don’t necessarily think of Qualcomm as a leader in AI and generative AI capabilities or, at least, enablements. We just think of them as the chip maker. We think of them as an IP company. We think of them as a component maker.

We don’t always think of them as the enablements company for generative AI on devices. Every single file that’s out there with a policy like this and with its application, every single day, for the next few years, is a constant reminder that Qualcomm is that company, that it plays in the space, and that it plays such an important role in it. But the last thing I would say is just the caveat in all this: yes, if you’re transferring the image, the video, whatever the file is from one device to another, the certificate follows it. We still need a solution for screenshots because it’s still very easy to just take a screenshot of an image that will not have the certificate, that will not have all this metadata following it around, and still post it on a social network or somewhere else. And so, it’s still not a completely foolproof model. We still have to find other ways of verifying images and verifying their authenticity or at least their context.

If they’re not fully authentic, but they don’t necessarily detract from the story that they’re telling. But that’s something that’s beyond Qualcomm or any other company’s ability or purview. I think that might fall more on publications, on social networks, and social platforms. And perhaps, just the law in general. Oh, one last thing. There’s a difference between art and news and news commentary. And I think it’s important not to censor images that are manipulated but it’s important to create the right context. If I want to manipulate an image for my own artistic expression and use it for that purpose, I should be allowed to. There shouldn’t be a prohibition against that. But the moment that image is used to lie to people, to commit some kind of fraud, intellectual, political, financial or in any other way, then that’s when we get into some issues where the public, at least, needs to be notified that, “Hey, this… Exclamation mark, the image that you’re looking at has been manipulated and here’s how.”

Ron Westfall: No, those are all excellent points. And yes, it’s definitely an ecosystem dimension here. It’s not just Qualcomm. That’s why I think Durga’s blog is so valuable because it’s pointing out like, “Yeah. We’re not going to necessarily account for every single instance but it can be very important for peace of mind when it comes to, for example, a national security application or a mission critical or a public safety application. Because there are bad players out there. And so yes, we’re going to have parodies, we’re going to have manipulation.” It’s one thing if somebody’s doing it from their college dorm room and it’s on a topic that is comic. But when it comes to these other contexts, when it comes to these use cases, this is something that can quite simply provide that difference. And again, provide what can be characterized as peace of mind because there is that risk that if AI is falling into or being manipulated to such a degree, that it can end up producing Hindenburg-like scenarios. And obviously, that needs to be avoided. We don’t want this to end up like the Hindenburg, a dramatic crash of an entire mobile network because AI was being used the wrong way in the wrong hands.

And here’s something that can help transparency, that can help for very specific instances where it will prove vital. So that was looking at images and that aspect of it. But there’s also another important aspect here that’s being impacted, and that is how transparent AI can actually play a role in audio product development. So now, we’re talking about the audio dimension here. What caught our attention was that Nokia recently presented its approach to this and how they’re building their products, basically using transparent AI. So products such as their OZO Audio and Immersive Voice capabilities are ensuring that mobile devices come with the sounds of immersion, clarity, and focus. Again, cutting down the potential confusion and manipulation. Now, this, of course, requires extensive exploration and also analysis and testing at every step of the development process. And so, as such, Nokia is using machine learning models to analyze the output and tweak the algorithms so that the Nokia team can get the desired results that it’s aiming for. Also, Nokia is using software tools, practices, and MLOps, which I think has been a bit unsung but is certainly going to be integral to how AI itself evolves.

But it’s also understood as AI/ML when it comes to product development operations. So many other key aspects that aren’t necessarily gen AI only. Now, with that in mind, what is going to come out of that is just that there’s more trustworthiness that can come from the data that’s used for developing not only their products, but also how their audio capabilities are integrated and optimized. And so, as a result, Nokia is basically using a configuration management tool, Hydra, that I think will gain more recognition here over the course of this year and beyond. But what it’s actually enabling is the use of PyTorch Lightning, which is a framework that helps to speed up development work and basically unify the tools that are being used. As such, it can ensure that Nokia and whoever else uses these capabilities can reproduce its results every time (a minimal sketch of that setup follows below). So we definitely need that consistency and reliability built in. And to basically tie off this introduction by me, there’s a new acronym I want to share with folks out there: it’s SLURM. Now, I know many folks have heard of it, but not necessarily everybody, and it stands for Simple Linux Utility for Resource Management.
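Here is that minimal sketch of the Hydra-plus-PyTorch Lightning combination, assuming a conf/config.yaml that defines seed, lr, epochs, and batch_size. The AudioDenoiser model and the random tensors standing in for audio data are hypothetical illustrations, not Nokia's code; the point is that a fixed seed, saved hyperparameters, and a composed config file make every run reproducible.

```python
import hydra
import pytorch_lightning as pl
import torch
from omegaconf import DictConfig
from torch.utils.data import DataLoader, TensorDataset


class AudioDenoiser(pl.LightningModule):
    """Hypothetical stand-in for an audio model; real models differ."""

    def __init__(self, lr: float):
        super().__init__()
        self.save_hyperparameters()  # hyperparameters logged for reproducibility
        self.net = torch.nn.Linear(256, 256)

    def training_step(self, batch, batch_idx):
        noisy, clean = batch
        return torch.nn.functional.mse_loss(self.net(noisy), clean)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)


# Hydra composes conf/config.yaml (assumed to define seed, lr, epochs, and
# batch_size) and records the exact configuration used for every run.
@hydra.main(config_path="conf", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    pl.seed_everything(cfg.seed)  # fixed seed makes results repeatable
    data = TensorDataset(torch.randn(64, 256), torch.randn(64, 256))
    model = AudioDenoiser(lr=cfg.lr)
    trainer = pl.Trainer(max_epochs=cfg.epochs)
    trainer.fit(model, DataLoader(data, batch_size=cfg.batch_size))


if __name__ == "__main__":
    main()
```

Because the config, the seed, and the hyperparameters are all captured per run, anyone can re-launch the same job later and expect the same result, which is the transparency property discussed here.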

Now, why is SLURM important here? Well, it’s a tool that Nokia is using to show which user ran a specific job, which you touched on, Olivier, and with the resources that were required. As a result, Nokia is having SLURM work with its container system, Apptainer, to allow the transfer of its code in a transparent way between allocated resources across the company’s data centers. Now, this way, Nokia can transfer and manage its jobs without the challenges that have typically arisen when different versions of the same software are used. So we know there’s going to be a lot of software updating going on here, and thus version alignment is not going to be possible every single time, but that shouldn’t matter. Whichever version is being used, you have to be able to tap into something like SLURM to ensure that you get these better outputs, these improved business outcomes, and so forth. With that, Olivier, what are you seeing in terms of not only Nokia’s initiative here but what’s going on in terms of enabling audio optimization, smart audio, and so forth?
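For readers who have not met SLURM, here is an illustrative sketch of how a containerized training job might be submitted to it. The partition name, container image, and script paths are assumptions for illustration; what matters is that the #SBATCH directives declare the resources SLURM records per user and per job, while the Apptainer image pins the software versions so the same code runs identically on whichever node is allocated.

```python
import subprocess
from pathlib import Path

# A SLURM batch script: the #SBATCH directives declare the resources, and the
# scheduler's accounting then records which user ran the job and with what
# allocation. Running the workload inside an Apptainer image pins the software
# stack, so moving the job between nodes cannot introduce version drift.
job_script = """#!/bin/bash
#SBATCH --job-name=audio-ml-train
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --time=02:00:00

apptainer exec --nv train_env.sif python train.py
"""

Path("train_job.sh").write_text(job_script)

# Requires an actual SLURM cluster; afterwards `sacct -j <jobid>` reports the
# user, resources, and state that SLURM recorded for the job.
subprocess.run(["sbatch", "train_job.sh"], check=True)
```

That accounting trail, combined with the container image, is the transparency point: anyone can see who ran what, with which resources, and against which software versions.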

Olivier Blanchard: Yeah. So for starters, SLURM is just a wonderful acronym. I love it so much. So beyond that… No, it’s brilliant. Beyond that, no. I think the connection between AI, whether it’s generative AI or just assistants, and audio, that is, the application of more advanced AI and generative AI in our daily lives, is, I think, still understated and underappreciated. I feel that as we move forward to a world that’s much more AI-forward, where AI becomes much more ubiquitous, we can be basically in any room and just talk and prompt an AI assistant to help us with a query, with a search, with a process, with whatever it is. Voice is still the most natural interface. It’s not so much a keyboard. It’s not tapping a screen. It’s not gestures and pinching. All these things help and they’re going to continue to be part of that ecosystem. But I think that the natural voice interface back and forth with an AI that can respond to us in our natural language and that understands our natural language is critical moving forward. We’re starting to see it with smart glasses, which are not technically… I mean, they’re still an XR product. They’re not augmented reality. They’re not mixed reality. They’re just smart glasses.

But the smart glasses that were put out by Meta in collaboration with Ray-Ban last year, which by the way use Qualcomm’s own XR platform, are a really good example of what’s to come. Which is basically telling the camera, “Hey, take a picture. Start filming,” doing things like that. “Make a call. Call Ron,” and then automatically it will do that. And then, the relationship between the user and the device becomes even more important with regard to voice. Because now, you have to have smart speakers that, if you’re wearing glasses, direct the sound into your ears and optimize the sound to be able to cancel out noises outside of your listening pleasure but focus the sound directly to your ears. So we’re seeing that technology move that way. We also need noise cancellation to enable microphones, whether they’re microphones around a room or microphones in a device, whether it’s handheld or worn on your face, to be able to cancel out noise, whether it’s a crowd or wind if you’re in a car, for instance, or any kind of ambient sound, to focus on the voice and be able to understand it better, to process that language in order to be able to query the AI. And so, that relationship between voice interfaces, voice capabilities, or sounds and AI is really, really critical.

What’s interesting on the device level, especially on the software and silicon level, is that it’s a performance feedback loop where AI needs voice and sound in order to create more utility for users. But also, AI can help fine-tune that audio and sound capability by using noise cancellation, by using voice enhancement and sound enhancement, choosing the right channels, and doing all of this very complex, intelligent fine-tuning to get the best voice capture and the best sound feedback. So it’s pretty great. I think that up until now, I’ve noticed that sound has been the last thing on a lot of tech companies’ platform introductions. I’ve seen this with Qualcomm, I’ve seen it with others, where it’s like, “Oh yeah, by the way, let’s talk about sound as well because we have all this cool stuff.” But I suspect that connection to AI is going to bring sound quality, sound intelligence, and the type of project and endeavor that you were talking about with Nokia to the forefront of those experiences, because it’s not just an afterthought anymore. It’s not just a, “Oh, and we have sound as well.” It’s critical to the future of AI, or AI interfaces rather.

Ron Westfall: Yeah. I think those points are just spot on, Olivier, in terms of why audio is going to play such a major role in how gen AI as well as all the related applications can be successful, because of natural language processing. Yes, you can type it and so forth, but there are definitely going to be contexts and scenarios where the audio aspect is going to be vital. That is, talking into a prompt, being able to have prompts that are clearly understood, and also the explainability. If it has to be done on an audio basis, it’s just doing that. It’s allowing the user or the groups of people involved to know what’s going on.

We already touched on context like mission-critical applications but also public safety. There’s just so many ways why this is actually going to be just a major piece of the puzzle and we anticipate that will become increasingly emphasized in terms of portfolio development. So it’s not like this, “Oh, by the way,” aspect. And so, that’s why that Nokia blog definitely leaps out at us. And so with that, my concluding thought is, “Long live SLURM.” But also, thank you, Olivier, for joining us today. I hope you have a great weekend and week ahead of you.

Olivier Blanchard: I hope so too. Thank you.

Ron Westfall: Naturally.

Olivier Blanchard: Same to you and same to everyone watching.

Ron Westfall: Naturally-

Olivier Blanchard: Or next weekend if you’re catching us next week. There’s still a weekend coming so you’re good.

Ron Westfall: Thumbs up.

Olivier Blanchard: Hang in there.

Ron Westfall: And yes, so to our viewing audience and listening audience, please keep in mind, The 5G Factor can be bookmarked and also subscribed to, giving you the ability to get the heads-up when our next recording and our next interaction is being broadcast out there. And so, with that, everybody, have a wonderful 5G and GenAI day. Thank you.

Other Insights from The Futurum Group:

5G Factor: Key MWC24 Takeaways – Semis and Devices

5G Factor: Key MWC24 Takeaways – The Cloud and Telcos

5G Factor: AI RAN and Telco AI Rising

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.

Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, B2G sectors, and the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies.

