On this episode of the Six Five Webcast – Infrastructure Matters, hosts Camberley Bates, Keith Townsend, and Dion Hinchcliffe share a conversation on the significant impacts of recent earnings reports, advancements in AI, and the challenges of data management in corporate infrastructure.
Their discussion covers:
- Lenovo’s Earnings: A deep dive into Lenovo’s strong performance in the PC and server/storage markets, highlighting a 20% revenue increase in their Infrastructure Solutions Group (ISG) and focusing on growth within the CSP and SMB markets.
- AI Advancements: The release of Evo2 by NVIDIA and Arc Institute, a groundbreaking AI model in biology and genomics capable of generating genomes, accelerating the pace of scientific discoveries.
- HPE’s Gen12 Servers: The launch of HPE’s 12th generation ProLiant servers designed for liquid cooling and energy efficient GPU-intensive workloads, addressing the needs of AI and high-performance computing environments.
- Cisco’s Growth: Exploration of Cisco’s revenue growth spurred by AI, including the demand for web-scale systems and the launch of their new AI Defense product.
- Data Management Challenges: A discussion on the complexities of managing data in an AI-driven world, touching on organization, access, and the quest for understanding in the midst of abundant data.
Watch the video below and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: Six Five Webcast – Infrastructure Matters is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Dion Hinchcliffe: Hello and welcome everyone. This is Infrastructure Matters, episode 72. And I’m joined by my partners in crime at Futurum, with Keith Townsend and Camberley Bates. And it’s been kind of an offbeat week for news. And so let’s dive right into it. Who wants to kick off with Lenovo earnings?
Camberley Bates: Sure, I will. I just went through their earnings, which came through yesterday. They’re doing well. Both the PC business and the ISG, which is the server and storage business, et cetera, have done well. I’ll focus on the ISG. Overall, they’re up 20% in their revenue, so they’re hitting all their numbers and expecting things to continue to go very well, which I think is positive for everybody. The ISG business is the area that has the server and the storage. They highlighted the growth with the CSP and the SMB market. They are up, year over year, 60% in that group, which is significant. It seems that most of that is coming through their OEM business, where they’re customizing the server business for the CSPs.
They didn’t go much into the details about how that is going. They have a new SVP or EVP running that division, Ashley. And I am not going to try his name because it’s like this long. Ashley G, I should know how to pronounce it by now, but a long time ago he was with Dell, and then most recently he was with WDC, Western Digital, on their HDD side. And he joined them this last fall. So I think that they’re being quiet about what exactly is going on in their server business, et cetera, other than it is growing. It is breakeven, though. So they need to get to a profitable business. And so, the strategy is being developed, but they’re going to continue to develop the profitability on the OEM business. So all good news for our industry, continuing to grow and healthy. So that’s what I was looking for, is how healthy is this business?
Dion Hinchcliffe: Yeah, it’s interesting to see. So on my end, I saw NVIDIA and Arc Institute released a new AI model, called Evo2, and it’s a biology model. So we’re seeing a lot more directed AI models focused on areas like STEM. And AI models are really good at certain things like coding, which was unexpected. And the Evo model is trained on 9.3 trillion DNA base pairs. And this allows you to ask almost any question that you could possibly imagine having to do with genetics or biology. But what’s interesting is that it can generate genomes. So it’s not just that you can ask it how this organism works or how that organism works; it can technically create the entire sequence for new organisms, or how to repair a certain cell, or fix a certain disease. It’s incredibly powerful.
So it’s got a lot of attention because it’s the largest biology model yet created, by a large margin. It’s effectively the ChatGPT of biology. And this is highlighting something we saw with AlphaFold. AlphaFold is Google’s model for coming up with new molecules, specifically new drug molecules. And that model has generated more novel treatments and novel drugs than we can currently test. It has, I believe, a 60% accuracy rate in creating a new molecule that will address a specific situation. And this highlights something that I think a lot of us didn’t expect with AI, which is that these models they’re creating are actually involved in new knowledge discovery, scientific discovery. They’re creating new things, discovering new substances and new treatments faster than we can actually process them, by 100X. So Evo2 got a lot of attention in the science industry. It’s taken very seriously by people in biology and genomics. And it should be a breakthrough. But this is something we’re going to see more and more: the companies that have access to and control of these models are the ones that will be creating the innovations. And now, our biggest challenge is how do we organize to take advantage of all of the scientific discoveries that will come out of these models. A pretty exciting time.
Camberley Bates: So this is a model that is not only human biology, but all organisms?
Dion Hinchcliffe: Yes. It’s like ChatGPT. It’s got every scrap of text in every book and every magazine, every scientific research paper, and every web page ever created. This has been fed every genome that is known.
Keith Townsend: Yeah, it’s going to be interesting to see how this helps downstream with drug formulation. And more importantly, you hinted at this, Dion, how do we catch up to the technology from a regulation perspective? How do we test this stuff in the real world? There’s the theoretical in AI and these formulations. And then downstream, how do we literally make sure that these things do what they say and get back that closed-loop feedback to make the models even better?
Camberley Bates: That also gets into an interesting question: how do you do clinical trials then? Do clinical trials end up changing? Are there areas where we can take this and do the predictive modeling about what that looks like and what the side effects are? I think about the commercials that I see. You’ve got the half of the commercial about what it’s going to do for you, and then the other half of the commercial is what it’s going to do to you. And so that makes it a very interesting-
Dion Hinchcliffe: Well, we’re not structured for it. We’re structured for an environment where we occasionally discover a novel new substance and we want to test it. And now we’re going to be overwhelmed by these types of things. We need to find a new way to curate and manage these types of… It’s amazing. The potential is incredible. It’s going to revolutionize human health and all sorts of other things. But we don’t know how to manage it and really take advantage of it. And what was interesting is that they announced three important new drugs that were developed with it, including a cure for a new type of leukemia, a cure that was created by the model itself. So this isn’t theoretical, it’s actually happening. And so it’s great. It’s wonderful to watch. We just don’t know how to manage technology that is so powerful.
Camberley Bates: And that gets into that discussion that we had at one time, talking about prompts, learning how to do prompts, which kind of sounds weird. But often it’s the questions that we ask, or sometimes the questions we forget to ask, that are really super important to exploring ideas in these areas. And that goes back to some really deep critical thinking on our part as well.
Keith Townsend: And relating this back to the enterprise and enterprise IT, one of the conversations I’ve been having with practitioners, and I’m looking forward to a podcast interview I’m doing with Brian Lau of AWS next week, is how do you adopt things like AI application development? If you’ve ever done AI code or coding with AI, it looks very different than human code. So how do you adopt the governance around using AI code, and how does that affect things downstream? Yes, you’re coding faster, but the output doesn’t look anything like the original code, and you have to deal with that in code management. And it also impacts the human side of the equation: how do you get people to give consistent input, that prompting you’re talking about, Camberley? How do we get that consistent input, because the LLMs are not consistent in themselves, so that we get a deterministic output? So there’s a lot to learn from this breakthrough on how we use some of the same discipline and apply it to enterprise IT.
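To make that consistency point concrete, here is a minimal sketch of the kind of guardrails teams put around LLM-assisted coding: a fixed prompt template, temperature pinned to zero, and a schema check on the output. The `call_llm` function and the JSON schema are placeholders for illustration, not any particular vendor’s API.

```python
import json

# Placeholder for whatever LLM client a team actually uses; not a real vendor API.
def call_llm(prompt: str, temperature: float = 0.0) -> str:
    raise NotImplementedError("wire up your provider's client here")

PROMPT_TEMPLATE = (
    "You are generating a code review summary.\n"
    "Return ONLY JSON with keys 'risk' (low|medium|high) and 'notes' (list of strings).\n"
    "Diff:\n{diff}"
)

REQUIRED_KEYS = {"risk", "notes"}

def review_diff(diff: str) -> dict:
    """Ask the model for a structured review and reject anything off-schema."""
    raw = call_llm(PROMPT_TEMPLATE.format(diff=diff), temperature=0.0)
    data = json.loads(raw)  # fails loudly on non-JSON output
    if set(data) != REQUIRED_KEYS:
        raise ValueError(f"unexpected keys: {set(data)}")
    if data["risk"] not in {"low", "medium", "high"}:
        raise ValueError(f"unexpected risk value: {data['risk']}")
    return data
```

The point is not the specific schema but the pattern: pin the knobs you can pin, and validate the model’s output before it flows into downstream code management.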
Camberley Bates: Yeah, absolutely.
Dion Hinchcliffe: Exactly. And so getting back to more traditional IT, HPE had some server news recently. Who wants to tackle that one?
Keith Townsend: Yeah, so if you follow server generation nomenclature, HPE is now in the 12th generation of their ProLiant server platform. We were stuck on Gen10 for quite some time. The tick-tock was Gen10, Gen10 Plus, Gen11, and now Gen12. And what those previous models had in common was the reliance on Intel. So this is kind of a proxy for where the industry is going. We’re still waiting on the sixth generation of Intel Xeon, and the server manufacturers, since the previous generation of servers, are no longer waiting on Intel. This has all been driven by NVIDIA and the ability to liquid cool these systems. Gen12 is much more focused on liquid cooling than Gen11 was, the support to cool these GPU resources, and on getting ahead of not just competitors, but Intel itself.
Camberley Bates: So is this AMD as well?
Keith Townsend: AMD as well. They’re no longer the tick-tock. X86 is no longer driving the product schedules of the two major server manufacturers, Dell and Intel. I mean, I’m sorry. Dell and HPE.
Camberley Bates: How do we tell the difference here? So we used to think it’s like 9, 10, 11. It was just the next generation of the Intel chips, and then some sort of design that’s around there. So what’s so different about when we go from 11 to 12? Are we going to-
Keith Townsend: So it’s all about the things we care about around GPUs, right? Power efficiency, how much power are the CPU components, the non-GPU components, using so that we can get the energy efficiency? HPE is touting up to 41% better energy performance per watt. So when you’re talking about the stinginess of an enterprise data center rack at around 15 kilowatts per rack, and these systems themselves can push up to 10 kilowatts, it matters.
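As a rough back-of-the-envelope illustration of why those watts matter (the rack budget and server draw below are the round numbers from the conversation, used purely as assumptions for the sketch, not HPE specifications):

```python
def servers_per_rack(rack_budget_kw: float, server_draw_kw: float) -> int:
    """How many servers fit under a fixed rack power budget."""
    return int(rack_budget_kw // server_draw_kw)

# Illustrative numbers only: a ~15 kW enterprise rack budget vs. a dense GPU node
# that can push up to ~10 kW.
rack_budget_kw = 15.0
gpu_server_kw = 10.0

print(servers_per_rack(rack_budget_kw, gpu_server_kw))  # -> 1 dense GPU node per rack
print(rack_budget_kw - gpu_server_kw)                   # -> 5.0 kW of headroom left over

# Any percentage the non-GPU components give back in performance per watt either
# becomes usable headroom or lets the same work fit into fewer racks.
```

The arithmetic is trivial, but it is the kind of calculation data center managers run when deciding whether a rack can take one of these systems at all.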
Camberley Bates: So then usually when we’d go in, I mean in my past life many, many years ago when I was working at IBM or wherever, your pitch as a salesperson was going in and saying you could get 30% more for the same amount of money, or whatever, 30% more power for the same amount of money. What is the pitch for the Gen 12?
Keith Townsend: We’re going to go down to a special tech field day event in a few weeks to get a deep dive on it. But I would imagine the story is about the things that enterprise data center managers care about, which is how am I going to power and how am I going to cool these systems? The “I’m not retrofitted for liquid cooling” problem: how do I get liquid cooling into these systems so that I can manage the heat that’s coming out of them? I imagine that much of the sales pitch is going to be that you can’t do AI, at least not big AI, with previous generation systems, at least not as efficiently as the Gen 12 next generation systems.
Camberley Bates: Well, I will be looking forward to that tech field day.
Dion Hinchcliffe: Yeah. Well, I’ve been working with ProLiant since, man, since the ’90s. Is that how long they’ve been out?
Keith Townsend: Well, ironically, I have a friend that’s doing a network assessment for a piping company not too far from you, Camberley. And he called me this week, he said, “Keith, you’ll never believe this. I just saw a Compaq ProLiant server in production. And this is the irony, running Windows 95.” So that was hilarious. I’m like, “This can’t be true. You have to send me a picture, otherwise it never happened.”
Dion Hinchcliffe: The number of workloads still running on Windows 95 and XP, it just fell under 10%, but it’s still surprisingly high. Something has to break for IT to move a workload off of an existing environment. So it’s interesting. So Cisco had some earnings. Can someone take us away on that one?
Keith Townsend: Yeah, so the interesting thing, revenue up to $14 billion. That’s a 9% increase. And when you think about a business the size of Cisco, this is unusual. So diving into the numbers, surprise, surprise, it’s driven by AI. Cisco is seeing orders for web-scale systems, from the web scalers, go from $350 million to $700 million. That’s a proxy for AI. They’re seeing more demand for AI. One of those companies you wouldn’t expect to benefit as much from AI, Cisco has lagged in their server design and refresh for their UCS platform. But I talked to a Cisco engineer a few months ago, and he was telling me that they’re able to justify an entire network refresh by the savings and efficiency for GPU-to-GPU communication alone. So Cisco is absolutely benefiting from the carry-on impact of AI and AI training.
Dion Hinchcliffe: Well, and they just released a major new product called AI Defense, which is designed specifically to address how you systematically defend against all the challenges of AI that you might have across your entire environment. And so, it’s yet to be seen how much lift they’ll get from it, but they’re doing major product releases around AI that I think a lot of people were not necessarily expecting. So data management was another topic, with this move toward unified data platform systems. Camberley, want to take us through that?
Camberley Bates: I started thinking about this yesterday with the VAST announcement this week. VAST Data announced that they added the block protocol to their platform. And VAST has been highly focused on the AI or data analysis kind of market, which has traditionally been file, and somewhat in the same area, object. There’s also been this noise about platform data systems, meaning your strategy should be one platform for all. And I think many years ago we’d get briefings from vendors. They’d come in with what we would call the God box, the God box that gave you file, block, and object. And we’d kind of go, “Thank you very much, but I’m going to set up this system for transaction processing. I’m going to set up this system to manage shared file systems.” And your selection of what you did with your data management system was based upon the application and not necessarily on operational efficiency, which is what a unified platform more or less gives you. Because you’ve got some common kind of technology, you don’t move the data around. So in reflecting on that, there’s been quite a bit of data coming out of our CIO and CEO discussions in and around AI that one of the biggest problems they have, and we’ve talked about this ad infinitum, is data management. That is, the data problem is preventing them from getting to AI.
Dion Hinchcliffe: Correct. Yes.
Camberley Bates: Putting block, file, and object on the same platform is not going to fix the other problem. It’s a different problem. Data management has to do with understanding what is in that bit, or whatever. What is in that data?
Dion Hinchcliffe: What data do you have and where is it? And that’s an amazingly hard problem in IT even today.
Keith Townsend: Yeah. So once we think about the context of this problem, Elon Musk was quoted as saying that we’ve run out of data to train AI. Well, you’ve run out of data that’s on the internet. I just looked up some rough numbers, and these are the numbers that I’ve seen consistently: about 60% of the world’s data is stored in a public cloud, 40% in the private data center. What that tells us is that systems like VAST, and this is their story, right, Camberley? VAST doesn’t care if your data’s in the public cloud, doesn’t care if it’s in a private data center. They want it behind a VAST system. They want to be the front end to you accessing that data. And you need to be able to access that data in various ways: block, file, and object. But I think, and we talked about this a little bit before the show, the problem isn’t necessarily access to the data. The problem is the organization of the data. And I don’t know if putting the data on a single system solves that problem of organizing your data. As we’re advising end customers on how to adopt AI, it’s all about organizing the data versus making sure you access it in the same way.
Camberley Bates: And that’s knowing what’s in the data. It’s having a risk profile for your data. It’s knowing what data you need to mask in order to move it into your data lake, or whatever that one storage place is that you want to be able to access to pull it through. So there are pieces of it, some things that having it in one platform does solve. But there are many, many things it does not solve. And there’s this discipline of what we used to call the master data management people that needs to be-
Dion Hinchcliffe: Well, and they’re still around. They haven’t gone anywhere. In fact, you could say their pay grade’s gone up a little bit with AI. But unifying block, file, and object is evidently useful because arguably it takes three silos and combines them. And the problem is that cloud and SaaS sprawl is the biggest enemy of any of this. All your data used to be in the data center, and the CIO used to know where all of the data was, every scrap of it. And now it’s in data centers all over the world through all of these SaaS applications and cloud platforms. Does either of you have a sense of how VAST is going to help address those sorts of things? Because without that, you still don’t really solve the problem.
Camberley Bates: Well, we’re mentioning VAST here, but it’s only because they made the announcement. We have many others that are marching in the same direction. And I guess what I want to do as an analyst in this environment is talk about what unified gives you. What does mixing these things together get you? Here’s where the benefits are, but it’s not the be-all and end-all that the CEO is looking for in terms of solving the data management problem. It’s just a piece of it, and it only potentially solves that piece. There are other methods to get at any of this too.
Keith Townsend: And I think one of the bookends of the announcement from VAST, and we’re picking on VAST because they made the announcement, but one of the bookends is their event broker service. So Dion, coming back to your question of how does this help? The event broker is one of those things that hints at what enterprise data architects can do. If you adopt this idea that you want to receive an event, that there’s some type of message bus, so that when data is accessed, when data is written, when a type of data is accessed, an event gets created. And if you’re able to have a consistent architecture, a data architecture or event architecture, where when data is accessed in SaaS, when it’s accessed in the cloud, when it’s accessed on block or in object, you get some type of event notification, then that helps with the problem. It doesn’t solve the problem, but at least it gives you an architecture to help mitigate some of the challenges associated with knowing where your data is, how it’s accessed, and how it’s used.
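A minimal sketch of the event pattern Keith is describing, with hypothetical names throughout: every data access, wherever it happens, emits a uniform event onto a message bus so a catalog or governance tool can track where data lives and how it is used. The in-memory bus below is a stand-in for a real broker, not VAST’s actual API.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DataAccessEvent:
    """Uniform record emitted whenever data is read or written, wherever it lives."""
    source: str       # e.g. "saas:crm", "cloud:object-store", "on-prem:block"
    path: str         # object key, LUN id, file path, etc.
    operation: str    # "read" or "write"
    principal: str    # the user or service that touched the data
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class InMemoryBus:
    """Stand-in for a real message bus; it just fans events out to subscribers."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, handler):
        self.subscribers.append(handler)
    def publish(self, event: DataAccessEvent):
        for handler in self.subscribers:
            handler(asdict(event))

# A data catalog or governance tool subscribes once and sees every access,
# no matter which tier or protocol produced the event.
bus = InMemoryBus()
bus.subscribe(lambda e: print("catalog update:", e["source"], e["path"], e["operation"]))
bus.publish(DataAccessEvent("cloud:object-store", "finance/q3.parquet", "read", "etl-service"))
bus.publish(DataAccessEvent("on-prem:block", "lun-042", "write", "erp-db"))
```

The value is the consistent event shape: the consumer does not need to know whether the access came from SaaS, cloud, block, or object, which is the architectural point being made.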
Dion Hinchcliffe: We already mentioned AI. It seems like it’s obligatory for every episode of the show so far this year. It’s interesting, we’re starting to get data from 2024. So I do a lot of data-based analysis. The numbers often tell the story about what’s really happening on the ground in technology and IT. And a new study came out from an analyst firm reporting that 47% of CIOs say they’re seeing positive ROI on their generative AI efforts, which is a good number, because a lot of the initial technology pilots and prototypes don’t actually work out that well, especially with high-difficulty or high-complexity technologies.
So 47% of CIOs reporting positive ROI is a good thing. And that’s going to pretty much ensure that we’re going to see sustained investment on the IT side for gen AI in the industry for the rest of the year. So that’s great news. But on the other side of the coin, with inflation factored in, our Futurum Group CIO Insights survey data says that IT budgets are going to increase 5.5% on average this year. That’s all going to be eaten up by AI and inflation. That’s basically where we are. In fact, it’s not even going to cover AI and inflation. And so, in real terms, most IT departments are going to see a net cut in their IT budgets because of that.
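A quick worked example of that “net cut in real terms” point. Only the 5.5% budget growth figure comes from the survey cited above; the inflation rate and incremental AI spend below are assumptions made purely for the illustration.

```python
# Illustrative only: the 5.5% growth is the survey figure; inflation and AI spend are assumed.
budget_last_year = 100.0   # index the prior-year IT budget to 100
budget_growth = 0.055      # +5.5% nominal increase (survey figure)
inflation = 0.03           # assumed inflation rate
new_ai_spend = 4.0         # assumed incremental gen AI spend, in index points

nominal_budget = budget_last_year * (1 + budget_growth)   # 105.5
real_budget = nominal_budget / (1 + inflation)            # ~102.4 in last year's dollars
left_for_everything_else = real_budget - new_ai_spend     # ~98.4

print(round(left_for_everything_else, 1))  # below 100: a real-terms cut for non-AI IT
```

Swap in your own inflation and AI-spend numbers; with anything above modest values, the non-AI remainder dips below last year’s baseline, which is the squeeze being described.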
Camberley Bates: Which is why, in my conversations, we’re already seeing the issue of how do I get more efficient in what I’m doing? How do I consolidate, look at operational efficiencies, streamline whatever I’m doing? Which is where I think we’ll see more investment into AI being used for coding, which takes out staffing.
Keith Townsend: That’s right. Exactly right.
Camberley Bates: Right. You’ll see more investment in what operational efficiencies can I gain from doing some different types of technology. So I think we’re going back to that good, strong total cost of ownership analysis as purchases happen, because companies are going to continue to cycle through upgrading their systems. They need to. They can’t just continue to pour the money onto the AI side and ignore the core infrastructure areas.
Keith Townsend: And this is why, again, bringing the whole conversation back together, it’s worth understanding from adjacent industries, genomics, manufacturing, how they are using deterministic outcomes to drive their AI initiatives. IT has actually led in this, pre-AI, for a long time with the Security Operations Center: finding false positives, and filtering out those false positives to get to an outcome in which we’re using less human capital to disposition those false positives and find true security events. AI has improved that. Now that we’ve taken these LLMs and moved that human-assisted filtering down to the input level and come out with more deterministic outcomes, we’re using fewer humans to detect and respond to security events. I see, ironically, as you two seem to as well, a doubling down on AI and automation to help alleviate many of these budget concerns. It’s also telling how the reaction to the Broadcom VMware effective price increase has gone. The SKU may not have gone up, but customers are spending more on basically undifferentiated-
Dion Hinchcliffe: I mean it’s a price skyrocket. I mean, that’s very heavy.
Keith Townsend: It’s quite a bit more. So again, if you’re all in on VMware, you’re going to learn how to use the entire portfolio to reduce your costs. How does this private AI solution help you reduce costs, adopt AI in a more cost-efficient manner, get more benefit out of your VMware investment, or do what companies like GEICO are doing in moving away from VMware into open source?
Dion Hinchcliffe: I did a case study of that in the CIO Pulse report, GEICO’s move away. So, very impressive to watch. But this brings up the whole AIOps discussion, which is that, increasingly, IT is going to be operated so that first-tier operations are done by AI and not by humans. The unblinking gaze of the robots can manage things 24/7. It’s only when exceptions happen that you need to bring humans in. So that does seem to be a major focus in terms of reducing fixed costs and reducing overhead for IT.
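A minimal sketch of that first-tier pattern, with hypothetical alert names and runbooks: automation continuously handles the routine alerts it recognizes, and only exceptions land in a human queue. The classifier here is a simple keyword match standing in for whatever AI/ML model an AIOps product would actually use.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    summary: str
    severity: str  # "info", "warning", or "critical"

# Assumed examples of alert patterns that have a known, safe automated runbook.
KNOWN_RUNBOOKS = {"disk-nearly-full", "service-restart-needed"}

def classify(alert: Alert) -> str:
    """Stand-in for an AI/ML classifier mapping an alert to a runbook, or 'unknown'."""
    for runbook in KNOWN_RUNBOOKS:
        if runbook in alert.summary:
            return runbook
    return "unknown"

def handle(alert: Alert) -> str:
    """First tier: auto-remediate recognized, non-critical alerts; escalate the rest."""
    runbook = classify(alert)
    if runbook != "unknown" and alert.severity != "critical":
        return f"auto-remediated via {runbook}"
    return "escalated to on-call engineer"

print(handle(Alert("a1", "disk-nearly-full on node-7", "warning")))   # auto-remediated
print(handle(Alert("a2", "unexplained latency spike", "critical")))   # escalated
```

The cost argument rests on the split: the automated path runs around the clock, and human time is spent only on the exceptions.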
We’re almost at time, so I want to do my last piece of news before we wrap the show. The Avalanche protocol, which is a blockchain that gets used as a poster child for enterprise digital ledgers, enterprise Web3, which is not a topic that I find is particularly popular with IT and CIOs. But it’s important because things are actually happening. As an example, and one of the reasons I talk about Avalanche, the California Department of Motor Vehicles last year moved all 42 million vehicle titles onto Avalanche. That’s a big deal. We’re actually seeing people moving important records into blockchain-based technology. And the advantage is that the data can’t be tampered with, because you can’t modify data in a blockchain. You can only add data to it.
So records are considered safe and will last a long time. Since the data is replicated across thousands of different nodes, you can’t lose the information. There’s no single point of failure. The whole point is that there’s massive multiple redundancy. So Avalanche has got about 9 million addresses, half a billion tokens, and a market cap of about $10 billion. And they just announced the Halliday protocol. The whole Web3 subculture really likes pop culture references, so this is a Ready Player One reference to the creator of the metaverse in that story.
But the Halliday protocol is the very first agentic AI announcement in the blockchain space. If you want agents to work with anything in the blockchain world, you have to create a smart contract normally. And even though that’s relatively easy to do, that’s still a higher burden than most people want to take on. You want agents to be able to work with something that they’ve essentially never seen before. You don’t want to write code for every type of transaction or every type of thing you want to do in a given blockchain. So the Halliday protocol allows you to use agentic AI to manipulate or interact with the Avalanche blockchain.
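To illustrate the gap being described, rather than the Halliday protocol itself, whose actual interfaces aren’t covered here, this is a hypothetical sketch: instead of writing bespoke smart contract code for each transaction type, an agent framework exposes one generic, allow-listed “tool” that validates an intent and forwards it to the chain. Every name below, including the mock chain, is made up for illustration.

```python
from dataclasses import dataclass

# A small allow-list of intents the agent is permitted to invoke.
ALLOWED_ACTIONS = {"query_balance", "transfer"}

@dataclass
class MockChain:
    """In-memory stand-in for a chain endpoint; real chains expose their own RPC APIs."""
    balances: dict

    def call(self, method: str, params: dict) -> dict:
        if method == "query_balance":
            return {"balance": self.balances.get(params["address"], 0)}
        if method == "transfer":
            self.balances[params["from"]] -= params["amount"]
            self.balances[params["to"]] = self.balances.get(params["to"], 0) + params["amount"]
            return {"status": "ok"}
        raise ValueError(f"unknown method {method}")

def chain_tool(chain: MockChain, action: str, params: dict) -> dict:
    """One generic, allow-listed tool an agent can call, instead of bespoke code per transaction."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action '{action}' is not permitted for agents")
    return chain.call(action, params)

chain = MockChain(balances={"alice": 100, "bob": 0})
print(chain_tool(chain, "transfer", {"from": "alice", "to": "bob", "amount": 25}))  # {'status': 'ok'}
print(chain_tool(chain, "query_balance", {"address": "bob"}))                       # {'balance': 25}
```

The design choice is the single generic entry point plus an allow-list, so agents can act on transactions they have never seen before without new contract code for each one.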
And I think it’ll be the beginning of a lot of announcements like that. I see most blockchains doing that so that you can use agents without writing smart contract code every time. And again, this is still early days, but with the way these blockchains are now designed, the total market cap of all blockchain technology is around $3 trillion, making it about the ninth-largest economy in the world. And I’ve talked to retirement network CIOs from federal and state retirement programs. They’re not tracking this stuff at all, other than they know it’s coming. They’re like, “We don’t care about this. We don’t even like it, but we know that it’s coming, and so we’re having to watch it.” So it’s worth watching. It’s ultimately going to be a big part of some new record-keeping infrastructure. We just don’t know how big yet.
Camberley Bates: So Dion’s going to make you listen to it even though you don’t want to hear it, guys, on Infrastructure Matters. We’re going to bring it up. Dang it.
Dion Hinchcliffe: And that brings us to the end of another great Infrastructure Matters. And we hope you got a bunch of useful knowledge out of this. And I think all three of us will see you next week.
Author Information
Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm, as Managing Director.
Her career has spanned all elements of sales and marketing; her 360-degree view of addressing challenges and delivering solutions was achieved by crossing the boundary of sales and channel engagement with large enterprise vendors and her own 100-person IT services firm.
Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, “MAID,” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division, growing the business from $14 million to $500 million, and built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.
She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
Keith Townsend is a technology management consultant with more than 20 years of related experience in designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University. He is the President of the CTO Advisor, part of The Futurum Group.
Dion Hinchcliffe is a distinguished thought leader, IT expert, and enterprise architect, celebrated for his strategic advisory with Fortune 500 and Global 2000 companies. With over 25 years of experience, Dion works with the leadership teams of top enterprises, as well as leading tech companies, in bridging the gap between business and technology, focusing on enterprise AI, IT management, cloud computing, and digital business. He is a sought-after keynote speaker, industry analyst, and author, known for his insightful and in-depth contributions to digital strategy, IT topics, and digital transformation. Dion’s influence is particularly notable in the CIO community, where he engages actively with CIO roundtables and has been ranked numerous times as one of the top global influencers of Chief Information Officers. He also serves as an executive fellow at the SDA Bocconi Center for Digital Strategies.