
AI Systems and Data Protection Market Development – Infrastructure Matters, Episode 21

In this episode of Infrastructure Matters, hosts Krista Macomber, Steven Dickens, and Camberley Bates discuss AI systems and data protection market development, announcements, and insights from a number of recent vendor and industry events, including:

  • Microsoft’s two new custom silicon chips designed for AI and general-purpose workloads, announced at Microsoft Ignite.
  • Developments in the AI space, including considerations for workload placement and how AI-backed chatbots can streamline daily tasks for IT Operations.
  • Perspectives from the Supercomputing ’23 show in Denver, CO, including storage vendor portfolio developments for HPC.
  • Updates on Veeam’s competitive position and strategic development coming out of its Analyst Summit in Seattle, WA.
  • A preview of AWS re:Invent.

You can watch the video of our conversation below, and be sure to visit our YouTube Channel and subscribe so you don’t miss an episode.

Listen to the audio here:

Or grab the audio on your streaming platform of choice here:

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this webcast. The author does not hold any equity positions with any company mentioned in this webcast.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Transcript:

Krista Macomber: Hello everyone and welcome to episode 21 of Infrastructure Matters. I’m Krista Macomber and I am joined by both of my co-hosts this week, Camberley Bates and Steve Dickens. Camberley and Steve, thanks so much for joining today. It’s great to catch up with you guys.

Camberley Bates: Mutual.

Steven Dickens: 21 episodes. As I said off camera, I can’t believe we’ve been doing this for 21 episodes now. It’s crazy.

Krista Macomber: It feels like we just launched it a couple of weeks ago. And we’re here on Thanksgiving week already here in the US, so I really have no idea where the second half of this year has gone. I don’t know about both of you.

Steven Dickens: Yeah, agreed.

Krista Macomber: Any good plans for the holiday?

Steven Dickens: Both of my kids are back from college, so that’s cool. So we’re hanging out with them. My birthday celebrations are going to carry on into this week.

Krista Macomber: Perfect, perfect.

Camberley Bates: Birthdays are supposed to last for the month.

Krista Macomber: Yep, yep. All right, so I guess we can kind of jump into it. So Camberley and I were both on the road last week. We’ll get to that in a minute. But before we do, Steven, I know that while we were traveling last week, you were covering some pretty exciting announcements coming out of the Microsoft Ignite event, particularly involving some of their silicon developments.

Steven Dickens: Yeah, so Microsoft had been the only hyperscale vendor that didn’t have their own custom silicon. So that was a bit of an obvious kind of miss. AWS has been there with Trainium and Inferentia. Google made some big announcements at Google Cloud Next, and it’s been in the Tensor Processing Unit, TPU, space for a while. So Microsoft came out with Maia, I think that’s how it’s pronounced, and Azure Cobalt. I think not surprising. I mean, it’s going to be really interesting to see some of this, and Patrick Moorhead at one of our sister firms did a really good initial take. Pat’s one of the people I follow quite closely from a CPU perspective. There were some very generic statements in the announcements around relative performance to other vendors. We’re going to have to see that. We’re going to have to look at that in some detail and dig under where this is from a relative-performance point of view. But it makes perfect sense for Microsoft to be doing this given how invested they are with Copilots across the portfolio, and then also some of their other investments. I don’t want to mention OpenAI, but certainly where they are with those guys. So I think it’s really interesting, and it kind of makes perfect sense that you would see Microsoft getting into custom silicon.

Camberley Bates: So let’s talk about that. You said it makes perfect sense for them to be doing this, and they’re the last one of the bunch to get to this point. And Microsoft has been doing hardware things, the Surface and all that kind of stuff, for quite some time. Why do you think they took so long to get here?

Steven Dickens: I mean, that’s the interesting piece for me. As I say, AWS has been there with Inferentia and Trainium. I think we’re on, is it the second or third generation of that technology now? Google made a whole bunch of announcements. Obviously fabbing chips takes a while. This is not something you kind of hit in five minutes and go, ah, let’s do some product management. So they’ve been doing this with TSMC on the Arm architecture. But I am surprised that they’re last to market given how invested they are. Whether this was just, they thought they could get there with GPUs and get there with NVIDIA first and fit into Azure. Obviously they’re all sort of positioning forward. There was no kind of acknowledgement of why they were last to market. I mean, I don’t think that’s going to be significant for how well they are considered going forward. And I think Microsoft’s such a juggernaut in the AI space. Azure is arguably number one or number two for AI workloads, depending on who you believe and which stats you see. I think they’re going to be picking up a disproportionate share. So I don’t think it’s going to hold them back that they were sort of later to market, but it’s going to be interesting to watch. As I say, I’m most interested in the relative performance. Once we start to see benchmarks and some of that analysis from our team and from our peer group, that’s what I’m going to be most interested in. Because I think for this space, everybody’s going after H100s and A100s from NVIDIA. That’s kind of your premium tier. But there’s a whole raft of space below that where I see this playing a role. And then it’s going to be interesting to see how Microsoft plays this from a Copilot perspective. Does Copilot run on this infrastructure going forward? That would make sense.

Krista Macomber: Yeah, it would. And I think coming out of Ignite last week, it was pretty clear Microsoft is the Copilot company now. I know I had a research note I wrote up regarding the Security Copilot and the integration that it’s doing with its SIEM and XDR products for the incident response and recovery side of the house. And I know there was a whole swath of announcements. So to that point, Steven, I think it really could make sense to piggyback the two.

Steven Dickens: Yeah, yeah. I know Mark Beccue from our team did a whole bunch of research notes. There were a lot of Copilot announcements, so maybe check on the website and look for those. But I think we’re going to hear more from Microsoft on this going forward. This was kind of the first of what I expect to be a number of updates from the team over there.

Camberley Bates: So given the effort on Copilot, what should the IT infrastructure audience, the Infrastructure Matters people, really be thinking about? How does this impact them, or does it impact them at all?

Steven Dickens: I mean, there’s a workload placement discussion. You’ve now got another choice, another architecture choice, another platform choice to think about. So maybe you had less choice a week ago and now you’ve got more choice. I think as line-of-business teams come to you and say, “Where should we put this AI workload?” you’ve got on-prem. You’ve got a bunch of the on-prem vendors who are working with NVIDIA and with Intel, with Gaudi and various others. There’s that on-prem landscape, and that’s going to be the right answer for certain workloads. There are going to be NVIDIA GPUs in the cloud, which might be the right answer for some of the workloads. And then there are going to be the hyperscale options from the various vendors that we’ve just talked about. So I think choice is good. If you are looking at Copilot deployments, maybe Microsoft is thinking that you’re going to naturally go with their offerings, and that would make a lot of sense, I think. So from an infrastructure team’s perspective, I’d expect Cobalt and Maia to become the default for Microsoft-based AI projects going forward.

Camberley Bates: Got it.

Krista Macomber: Yep. I think the other angle is leveraging AI, thinking about Copilot and how it’s being positioned. Another takeaway to me seems to be, kind of from an IT perspective, being able to do their job more efficiently and more effectively and harness those tools. In the data protection space that I cover, it tends to be a lot about having more intelligent and effective backup schedules, and maybe using AI to uncover threats more quickly and respond more quickly as well. So I think that’s the other angle that I’m seeing for IT.

Camberley Bates: One of the things, and maybe we’ll eventually get Russ on here to talk about all the tools, or even Mark: there’s so much coming out and being blasted at the IT guys on this AI. And it’s very difficult to keep up with all the pieces that are happening unless you’re living in it day by day. We kind of blow past it because we’re watching stuff go by; that’s our job, watching the things go by. And so it helps to say, this is why you should really be paying attention to this, or here is where this announcement is going to impact you. And to that end, I will mention that we’re doing some work. We just released the first of probably several papers that we’ll be doing with Dell and Broadcom on AI capabilities. This one, on distributed learning, we just put out. We’re working with another company called Scalers AI on the testing work. It’s very, very complex stuff that’s going on. But the work that our Futurum Labs team is doing is on the infrastructure pieces of it, how that benefits or improves learning processes, and how to most efficiently put the pieces together. And this environment in particular was showing a heterogeneous environment being able to do distributed learning. So we’ll put a link in there, and if you’re interested, download it and take a look at the paper.

Steven Dickens: I think that speaks to the workload placement point. I think people are now going to be thinking, “I’ve got this AI project, do I put it on-prem with a platform like Dell or HPE, and what does my private AI deployment look like from a performance point of view?” So the work you’re doing there is fascinating to see from a benchmarking and performance-testing perspective. But then they’re also going to be thinking, “Hey, maybe I’ve got a Microsoft 365 Copilot deployment. Where should I be putting that workload?” They’ve got various choices, so I don’t think it’s going to be everything looks like a nail and I’ve got a hammer, here’s one solution. I think, to pick up on your point, as you look at it from an infrastructure team perspective, you’re going to have a lot of different requirements and all platforms are going to be valid.

Camberley Bates: Yeah, absolutely.

Krista Macomber: So that might actually be a good segue to touch a little bit on the Supercomputing conference that was last week in Boulder. Camberley, I know you had an opportunity to pop up there for about a day. I know we had a couple of other folks that were there as well. So did you want to touch on that?

Camberley Bates: Well, I’ll touch on it. I actually didn’t make it to the show floor. I was meeting with people outside of the show floor. I think the show floor was closing down at that point in time on Thursday. But it was in Denver. And I would say one of the biggest upticks in my mind is the number of people that were there, which is a tracker to see where we are. It was well over 12,000 people from what I understand, which I think is almost double what we’ve seen before.

Steven Dickens: Is that show becoming the Supercomputing and AI show? Is that really why it’s doubled in size? Is it because it’s not just big high-performance computing, big meteorological or Monte Carlo-based financial modeling? Is it really because it’s now the AI show? Is that what it is?

Camberley Bates: It’s still Supercomputing. So, whatever you want to call it, Supercomputing and AI, it spreads both sides. Traditionally it has been, when I started attending it seven years ago, and it’s been going on for a long, long time, it’s always been your large research centers. Lawrence Livermore, JPL, Fermi, and then all the universities, everything from Switzerland to Italy to Purdue to Texas, showcasing all their work. And the reason the universities are there is because they are getting grants. They’re getting money from the private companies to do research on their big, huge supercomputers. So they come from all over the world. And there are two big shows. There’s one here, and there’s another one over in Europe in June that looks at high-performance computing. So yes, traditionally this has been all high-performance computing, the Crays of the world, et cetera. Now we’re going into a world where this is becoming AI. The ML and AI craze that started maybe five years ago, that started picking up. But that was still very, very complex, and neural networks are still very complex. Yet frankly, that’s how Uber created themselves, it’s how Google created themselves. Facebook, all the stuff that they’re doing is big, big supercomputing. Now with generative AI, some might say we’re democratizing this technology in certain ways. And so yes, there’s that uptick, and you find everything from water-cooled or chemically cooled servers, they’ve got copper running, all that kind of stuff, to cooling systems that are cooling these very large supercomputing areas, to all the data storage guys, the security guys, networking guys, et cetera, that are there, plus all the students that show up.

Steven Dickens: I’m a water cooled server nerd. I’ve got to admit, I do like a bit of copper.

Camberley Bates: Said that just for you.

Steven Dickens: I got a chance to go around Lenovo’s Neptune lab when I was at their site. And if you haven’t had a chance to take apart-

Camberley Bates: Very fun.

Steven Dickens: … a water cooled server, they are particularly cool. I’m a nerd. I’m a nerd.

Camberley Bates: There was a whole lot of announcements that came out of there. The one that I sat through beforehand was DDN’s Infinia; DDN is Data Direct Networks. They’ve been around for quite some time. They’ve been well known in the HPC industry. They also bought a bunch of other companies recently, expanding their total market. But Infinia is a QLC box. They’ve also got EXAScaler, I think it is, that’s really super high end on speed; this is the next level down. But it’s also got a price-performance number that’s going to compete. We’ve seen a bunch of folks with QLC offerings. VAST would be another one. Pure has got one, NetApp has got one. That’s the trend of where that’s going because of the cost and the capacity that they have. So that was one that we saw go out there. Then WDC, Western Digital’s OpenFlex Data24: it’s an all-NVMe box, RDMA, all the latest and greatest technologies that we see to speed the process. And for that market, these open boxes, like what would come out of Western Digital or those folks, very often those are the boxes they’re buying to put their parallel file system of choice on, whichever one that is. Whether it’s Panasas or IBM’s Storage Scale, I think is what it’s named, also known as GPFS. Which I think I’ll probably call it for the rest of its life as opposed to anything else.

Steven Dickens: Somebody in IBM storage marketing just started-

Camberley Bates: Well, I’m sorry. So, my friends over at IBM who named the thing Storage Scale, it’s not very descriptive. I’m going to get it confused with-

Steven Dickens: You’re just going to go with GPFS just to stick with it?

Camberley Bates: Yeah. That’s a callout to my friend O’Flaherty. There you go. Would you tell the guys to rename it GPFS? Because that’s what I’d like it named. Anyway.

Steven Dickens: General Parallel File System, it does what it says on the tin. I mean, why would anybody in naming want to get hold of that?

Camberley Bates: Anyway. So it’s all big stuff. We had a couple of our guys there. Unfortunately I didn’t make it this year even though it was in my backyard, but there we go.

Steven Dickens: Well, I think Daniel recorded a whole bunch of videos. I think Keith Hanton was there. I was about to say Keith Winton, but Keith wouldn’t have done a great job of covering high performance.

Camberley Bates: Mr. Townsend.

Steven Dickens: Yeah, Keith Townsend did a whole bunch of videos. So we’ll maybe put some links to those in the show notes so you can check out the rest of our team who were there.

Krista Macomber: For sure, for sure. So that might be a great opportunity to talk about the reason why Camberley wasn’t at most of Supercomputing, which was that we were also at the other event in the Seattle area last week: Veeam’s annual Analyst Summit, which they host every fall. It’s always a great event in terms of just, they make sure all their executives are there. It’s always a jam-packed agenda talking about the portfolio strategy. Excuse me, Danny Allan, their CTO, his session is always the one that runs about half an hour over, because us analysts can’t let him get off the stage or take his seat again, because we always have a million questions. We also got some great insight into the state of the business and also where Veeam is thinking about taking their go-to-market strategy and their messaging. And I’ll kind of just chime in with some overall takeaways. And Camberley, I’m sure you’ll have plenty to add as well. But my post on LinkedIn was that I’ve been saying it for a few years now, and if it wasn’t true before, it certainly is now: this is not your niche VM backup company anymore. And really they haven’t been for a while. But it’s been really interesting to see the business evolve. So they talked about, I believe it was $1.5 billion in annual recurring revenue for the business, with 50% net growth. They’re at 450,000 customers. But really, beyond the financials, when we think about the workloads that Veeam is covering, not only are they covering, again, kind of the VM side of things, but they’ve added certainly some of the key enterprise workloads, and also a lot of the more modern workloads, which is a good way to describe them. So they acquired Kasten a couple of years ago for the Kubernetes container front. They’ve been very active in terms of expanding their coverage of SaaS and kind of infrastructure-as-a-service workloads. So, covering the workload spectrum. But also the newer development is actually in terms of how Veeam is deployed and procured by customers. So what I mean by that is they acquired a platform called Cirrus that was built by a company called CT4 out of Australia, I believe it was about a couple of months ago. So Veeam is officially moving into the first-party backup-as-a-service space. They’ve had, through their cloud service providers, they’ve had the SaaS delivery option for a while now. But that was kind of a key highlight there. Again-

Steven Dickens: Krista, just-

Krista Macomber: Yeah? Go ahead.

Steven Dickens: Is that similar to the way that people should be thinking about something like Commvault’s Metallic?

Krista Macomber: Yes.

Steven Dickens: Because I know we’ve seen a lot of that sort of first-party space evolve. I know you’ve been really close to Commvault and you went out and saw some of their stuff, and they made some big announcements.

Krista Macomber: Yes.

Steven Dickens: Was it on the 9th I want to say?

Krista Macomber: Yeah, yeah, November the 9th. That sounds right.

Steven Dickens: November’s a blur so far. But I mean is that where you see the overall backup as a service space going?

Krista Macomber: Yeah, so what I would say is that we are seeing demand for backup as a service in general. So we fielded some research that we published earlier this year, and about 30% of the respondents in that study indicated that they’re using some sort of backup-as-a-service option. So there’s definitely interest. And really what this is, is hosting the backup software in the public cloud, as opposed to either the customer hosting the backup software on their own on-premises, or going through a managed service, or even one of these cloud service provider partners that we mentioned. And we’ve seen a lot of the demand popping up around particular workloads, Microsoft 365 being a big one. And really that’s because, from a customer perspective, it just tends to make a lot of sense. If I’m already subscribing, in this case, to the application in the public cloud, why don’t I just subscribe to my data protection, my backup software, in the public cloud as well, and streamline that procurement and ideally some of the operations as well?

Camberley Bates: The other reason that, as Krista said, this is also picking up is because Microsoft and Salesforce have gone public saying, “This is your responsibility, not ours.”

Krista Macomber: Yep.

Camberley Bates: You have been warned. And that will generate some business that’s going on.

Krista Macomber: Yes.

Steven Dickens: So I have to back up my Google Gmail account? That’s on me?

Krista Macomber: Shocker.

Steven Dickens: Just the batteries don’t come and do it for me?

Camberley Bates: Google hasn’t come out and publicly said that yet. It’s only Microsoft and Salesforce. But you are being told that, yeah, you are responsible for this and need to take care of it yourself. We should expect to see some upgrades from the big cloud providers on their backup software offerings. And so this will continue to heat up this next year, because it is a new space and it’s growing. Both 365 and Salesforce are growing, but the backup-as-a-service space is also growing. So there’s more awareness that I need to take care of my precious little data points.

Krista Macomber: Yep. It’s taken time. I mean, I made this comment in conversation with Veeam last week, and I’ve said it with Commvault and some other players in this space as well: I feel like I’ve been having the conversation for two or three years now that your recycle bin in Microsoft 365 is not your backup strategy. And so it’s definitely taken a little time to take hold. But to Camberley’s point, I think getting that nudge from these vendors helps. And the other parallel that I’d draw as well is that I was out at AWS re:Inforce earlier this year, back in the June timeframe. It’s AWS’s security conference. And a big theme on the keynote stage was the fact that it is still the customer’s responsibility to protect their data, even though it’s being hosted in AWS. So they were very clear to talk about, from an AWS perspective, what they’re doing to bring resiliency and security to their infrastructure, but also the fact that they’re trying to bring some tools to empower the customers to protect their data. So definitely. So Camberley, any other takeaways from you from the Veeam show? I know I kind of laid out some of what I saw, but I’d love to hear what you saw at the event last week.

Camberley Bates: Well, I’d echo that Veeam is well beyond the VMware backup company. And they’ve been building a new executive team there over the last two years, pretty much across most of the board. And Danny’s still there. Danny is just an awesome person, as you said. But we have Anand and John Jester and Mr. Jackson, who’s the CMO; they’re all relatively new into the company, leading the company and taking it to that next level. As they go and grow more into the enterprise, they do have the ability, because they’ve covered so many of the functionalities that you need to have in any kind of enterprise data protection. I think that they’re filling in the gaps, along with what seems to be some change in willingness to look at other options, which shows that the enterprise gives them an opportunity to really grow in that space. So I’m expecting them to do so in the future. So kind of big stuff there.
But next week, Mr. Steve and I.

Krista Macomber: Our travels are not done.

Camberley Bates: Mr. Steve and I are in Vegas at the wonderful re:Invent, crazy.

Steven Dickens: I’m expecting maybe one or two announcements from AWS. I think they might have some things to launch. Maybe we’ll get to three announcements. We’re joking here, because AWS seems to launch 200 products at re:Invent. So as analysts it’s drinking from the fire hose as you get hit with announcement after announcement. So it’s going to be a busy week.

Camberley Bates: It’ll be an extremely busy week, starting with us getting in there on Sunday. The focus clearly, it’s like, is there any other topic, people? Yeah, there are a couple of other topics, but the big focus coming out of there is AI. They are hammering down on that, and we can see that from some of the stuff that they’ve put on our schedules to introduce us to.

Steven Dickens: Maybe I should run the same competition that I ran for Google Cloud Next? The over/under with one of the analysts was: how long does it take them to say AI? How many times do they say AI?

Camberley Bates: No, you’re going to run out of numbers.

Steven Dickens: Well, Google managed to say it 50 times in 22 minutes, so maybe I should track to see whether AWS does more or less? But yeah, it’s going to be AI all the way next week I’m pretty sure.

Camberley Bates: The other thing I’ll be interested to hear is any kind of stories around hybrid multi-cloud environments and where that is all going. I think there’s been some noise about that, and the noise this last year has been on the cost of cloud. So we’re going to continue to see that needing to be tuned and taken care of. That’s what I’m anticipating. So I cannot believe we have gone for almost 30 minutes, and I’m not sure we’ve talked about everything. Oh, my.

Steven Dickens: There’s a lot to talk about at this time of the year.

Camberley Bates: I know. There is too much.

Krista Macomber: There’s been a couple of things to catch up on for sure. Yes.

Camberley Bates: Definitely. Yeah.

Krista Macomber: All right, well, Camberley and Steven, safe travels to Las Vegas. We look forward, on our next one, to hearing all about everything from the show, all these announcements from AWS. I know there are a ton of partners that are there as well. It’s a very, very active show floor, and I know we’ve got a ton of meetings set up. So we are looking forward to catching up on that on our next one. And until then, we want to thank everyone so much for joining us. Please do like and subscribe and all that good stuff so you don’t miss a future episode. And again, we look forward to catching up with you next time. Thanks so much.

Camberley Bates: Great. Thank you.

Krista Macomber: Thanks everyone.

Camberley Bates: Take care.

Author Information

With a focus on data security, protection, and management, Krista has a particular focus on how these strategies play out in multi-cloud environments. She brings approximately 15 years of experience providing research and advisory services and creating thought leadership content. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget and The Register.

Prior to joining The Futurum Group, Krista led the data protection practice for Evaluator Group and the data center practice of analyst firm Technology Business Research. She also created articles, product analyses, and blogs on all things storage and data protection and management for analyst firm Storage Switzerland and led market intelligence initiatives for media company TechTarget.

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.

Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led Evaluator Group, an information technology analyst firm, as Managing Director.

Her career has spanned all elements of sales and marketing, including a 360-degree view of addressing challenges and delivering solutions gained from crossing the boundary of sales and channel engagement with large enterprise vendors and from running her own 100-person IT services firm.

Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, “MAID,” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and grew it from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.

She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
