The rise of AI is revolutionizing industries, but can we harness the power of AI while minimizing its environmental impact?
Six Five Media host Daniel Newman met with two industry leaders, Monica Batchelder, Chief Sustainability Officer, and Kirk Bresniker, Chief Architect of Hewlett Packard Labs and a Hewlett Packard Enterprise Fellow, both from Hewlett Packard Enterprise, to discuss this critical topic.
Tune in as they discuss ⤵️
- The sheer scale of AI’s energy consumption and the urgent need to mitigate its impact on the climate crisis
- Holistic strategies for reducing the carbon footprint of AI deployments, including innovative data center designs and optimized operations
- The vital role of future-proof initiatives that promote collaboration among governments, businesses, and society in pursuing sustainability
- Cutting-edge cooling technologies that are dramatically boosting energy efficiency in data centers
- The sustainability implications of edge computing, including both the challenges and opportunities related to energy usage
Jumpstart your IT sustainability strategy – get this interactive workbook to guide you through the six key steps of your journey: https://www.hpe.com/psnow/doc/a00115687enw?jumpid=in_pdfviewer-psnow
Watch the video below at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: Six Five Media is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Daniel Newman: Hey, everyone. Welcome to this Six Five Media webcast production. I’m Daniel Newman, host today. I’m really excited to have everybody here. We are excited to be partnering with HPE to talk about powering AI responsibly and improving data center efficiency in an age of rising energy demand. This is a really important topic. Those of you who are part of our Six Five community or follow the research and analysis that we do have probably heard us speak endlessly about the impact that AI is having on sustainability and energy. We know companies over the last few years have been pivoting their messaging to try to understand how to address these diametrically opposed forces, or are they? AI is asking for so much energy to achieve the opportunities we see to create greater economic efficiency and productivity, but at the same time, we have set goals for our shareholders, stakeholders, and employees: that we are going to prioritize the planet and sustainability, find more efficient ways to bring compute, and of course deliver more returns to shareholders. We’ve got to do both things at once. So important. So, very excited to have HPE here. I’ve got Monica and Kirk, two leaders of the organization, both of whom were with me just recently in Davos, even if I didn’t see them there. I’m really excited to have both of you join us today for this conversation. Monica, welcome.
Monica Batchelder: Great to be here.
Daniel Newman: Kirk, thanks so much. Good to see you. Great hair.
Kirk Bresniker: Happy to be here. Yeah, great, thanks.
Daniel Newman: So let’s start there. I teased a little bit that we were in Davos, and as you both know at the World Economic Forum over the last couple of years, one of the real key focuses besides just AI has been sustainability. Companies have been trying to juggle. Even before AI went parabolic and became the key conversation in many of our businesses, we were talking about how to create sustainability to lower carbon footprint, build more efficient products, build more efficient data centers. So, love to hear from you both. Monica, I’ll start with you. What were some of the key conversations and themes that you heard about during your time at the World Economic Forum in Davos?
Monica Batchelder: Yeah, so unsurprisingly, as you said, AI was the hot topic and so of course the question around sustainability came up. One thing I did notice though was that the conversation always started with energy. We need more energy, what’s the supply, the availability. While those conversations can grab headlines and they can really provoke interesting discussion with a lot of the stats and projections we’re seeing, I do think we’re missing the plot if we don’t first start the conversation with, “How do we optimize the infrastructure and the systems to be efficient?” and then tackle the energy equation. The other takeaway I think is at least in the rooms that I was in when it comes to the conversation around weighing environmental costs versus the potential benefits of AI, the overwhelming consensus of the people I spoke to was that it will result in a net positive in the medium term, say the next 5 to 10 years. So, I did leave with a bit of optimism that overall, the advancement that we’re making in AI and the underlying architecture and the potential thereof is going to be a net positive for society.
Daniel Newman: Absolutely. I had some time with your CEO, Antonio Neri. We actually had him on the show. So, you are following and leading the conversation that we had with him there. It was interesting because when I was speaking to CEOs, there was an incredible optimism. Of course, the Davos week was a big transition week here in the US. Independent of how people feel about the political landscape, you could sense from talking to business leaders between maybe some of the opportunities that were going to come from some of the potential business deregulation, there was a lot of enthusiasm there. I think there was a lot of excitement about, I call it AI ROI, meaning that going from this philosophical, we’re going to build AI and do everything with bots and agents to like, “Hey, this is how we’re going to measure it and make money from it.” Kirk, I know you also spent some time there at the forum. Curious, in addition to what Monica had to say, any real key takeaways or things that you left thinking about?
Kirk Bresniker: Yeah, so I think you hit on such an important point here, which is we’re now past the irrational exuberance. As guardedly optimistic as Monica is, I don’t want to balance that off by being pessimistic. I’m an engineer. I want to be realistic. I think so much about the conversation, and the difference between last year at the AI House and this year at the AI House, is people were really talking about the nitty-gritty. How do I reduce this to practice? It’s a fantastic demonstration, but I’m a Fortune 100 global company with team members, partners, communities all over the world who I need to be responsible for. How do I understand how to take this technology and apply it to the business process of a publicly traded company, where I need to convince, I need to bring along a shareholder, a board member, a team member, a partner and a community and my regulators? They don’t want just hope, they want proof. So, how do we really analyze all of these potential benefits and be fully clear-eyed, understanding how we account for the costs? I think one of the things that Monica said that was so important for us, and in many ways, it’s the constant conversation I had all last week: let’s talk about energy as that ability to do work, the ability to have people meet their aspirations and goals in life, whether it’s individuals or entire regions.
Energy is the way in which we measure our ability to succeed and understand how we are not only efficient, but also, the old joke is, if something’s worth doing, it’s worth doing well. But we also have to ask ourselves the inverse, which is, if something isn’t worth doing, is it worth doing well? You can have the best triple LEED Platinum-certified piece of IT infrastructure, but if we’re not doing something that actually is advancing our goals as a society, as a community, or as a company, then no matter how fantastic the technology is, it’s not fit for purpose. So much of the conversation we’re having is, how do we determine that industry by industry, region by region, vertical by vertical? Fantastic technology, and yeah, now I want to have the next piece of the conversation, which is what is it worth and what should I be investing? So I know that that benefit is really going to pay off, and pay off not just a little bit, but really be what I’m banking on. What my company or my country is banking on is that really important return. All these technologies are always going to be a balance point, and we want to do the work out loud, show our math, show that ability for us to predict what that benefit is really going to be.
Daniel Newman: Yeah, Kirk, I like that. I’ve been saying a lot over the past couple of years that we’ve entered this era where everything needs to be measurable; we want to be purposefully sustainable. There is always an exuberance when a new trend emerges and everyone’s like, “We’re going to go net negative in the next 10 minutes.” It’s like, whoa. Then we actually looked into how it’s going to happen. This is actually quite hard, to your point, the engineering problem, the reality of it all. So, we have to figure out how we can do both. Monica, you’ve heard me now say it. There are these two forces between data center growth and the investments in GPUs and the amount of power that they’re going to consume. We’ve heard about certain parts of grids being overloaded, potentially even in the near future, months and years out. We’re hearing about SMRs, small modular nuclear reactors, being built. Fusion was something we heard a lot about in Davos. It’s a technology that could help but has a long way to go. How are you talking to your customers, and even within HPE, about how to deal with these forces of standing up AI and the emerging technologies that are very power hungry, and at the same time staying within your goals for managing energy consumption?
Monica Batchelder: Yeah. So, I think in our conversations with customers, we’re seeing one of two mistakes. So, either AI is being applied for everything everywhere rather than really discerning where it’s the right tool. So, there’s an analogy that using AI for simple applications is as efficient as taking a race car to the grocery store, and that seems to resonate with people. Then the second thing is people are optimizing at the device level rather than looking at the whole solutions and trade-offs that might fall within a system. So, for a long time, I think we’ve been designing technology with this understanding that power was unlimited. As long as you had money, you could buy more of it, you could use more of it. What AI is really teaching us or making us think about for a variety of reasons is actually like let’s design and deploy solutions that don’t assume those two things on the front end.
So, yeah, we really try to look at the whole system. We talk to our customers around resource efficiency, around equipment efficiency in the data center, obviously energy efficiency, which has long been a metric we look at. But then also, with AI, new things like software efficiency and data efficiency. So, what I mean by that is, when can pre-built and pre-trained models be used? How do we ensure that code is as efficient and simple as possible, or when do we decide where small language models are more appropriate, or maybe highly efficient large language models? So we’re really looking at the trade-offs, back to our conversation on measurement, and understanding what those true costs are across the entire system to be able to optimize for it.
Daniel Newman: Yeah, I really like that. Over the last week, we’ve had some fairly disruptive news in the marketplace. We heard about a very small research group in China putting out DeepSeek. This effort has shown high double-digit levels of efficiency gains. Yes, I have to say it, there’s some debate about the exact details, the hardware, the software. We know that it’s not always full disclosure, and we know that it’s a spectrum of open source, meaning not everything was necessarily open source, but we did hear a little bit about the power of open source. By the way, we’ve been working towards more efficient mechanisms for scaling laws. We’ve been doing it through software. We’ve been doing it through data science and algorithms. We’ve been doing it through hardware, through networking. There are lots of different ways that we’re trying to address scaling laws. Obviously, test-time scaling is another one that is very exciting. Kirk, we know that AI can be a positive for optimizing data centers. As you’re thinking about all this, everything I said and everything else, how can companies that are really training and deploying and scaling systems prioritize energy efficiency? What are you seeing out there? Because at some point, AI has to help us lower the power consumption. It can’t just be an endless grab of additional energy resources.
Kirk Bresniker: Part of that challenge is we would love to be comfortable that every joule we’re placing into these systems will net out positive. Otherwise, we run the risk of finding in some number of years, and maybe it won’t be our generation, maybe it will be the next one, who will be thinking, “What were they thinking? Why did they embark on this when it turns out it’s using more than it saved?” That’s certainly a risk. Although again, I think we’re cautiously optimistic when we think about complex problems, whether it is running a power grid or a national economy, about that ability for these tools to pull together and gain that insight in a time that matters, and to turn those insights into the superhuman ability to optimize and run these complex systems in a way that will seem like we are able to predict the future. Because what we’ll really be doing is analysis fast enough that we can do a dozen, a hundred, a million variations on the theme and still be able to turn that insight into action in a time that matters. So, I think that underlying enthusiasm and optimism is well-founded, but we still need to reduce it to practice.
There’s so much, when we think about organization, and I’m thinking about our own engineering organization, where in many ways it takes work to retrain ourselves. Even in this conversation, we’ve been interchangeably using the words power and energy, and they are actually very different things. Power is energy delivered over time, and so much of what we measure at data centers today, we think of as power usage effectiveness (PUE). How much of the energy per second actually ends up where we want it to be, as opposed to talking about energy efficiency? How much energy did it take to accomplish a task? Can it be lowered? Can I do more with this precious energy instead of doing less? The innovations we’re seeing, and really it’ll be interesting to see by the time this comes out, will there have been another breakthrough? Certainly, it’s possible, given how new this field is, that we will be subject to ultra short-term nostalgia. Remember last week? We’ll either look fondly or we’ll chuckle to ourselves. What were we thinking, putting all that energy into solving the problem that way, when now this week suddenly there’s a brand new approach?
So just be prepared. I guess that’s the underlying message for your question about what organizations should do. I think what they need to do is embrace it. They need to do exactly as Monica described: don’t use a large language model to do arithmetic. Your solar-powered desk calculator will do a much better job at that. But for everything else that you’re doing, and there isn’t an exception here, if it’s knowledge work, you should be thinking about how to utilize these technologies. If it is optimization work, you should be thinking about using this technology. So, begin the experiment, but also, we are cursed to live in interesting times. We will continue to have amazing breakthroughs, and unfortunately they are just that. They’re breakthroughs. We don’t have what we’ve had before, which is that really nice Moore’s law predictability. Oh, 18 months, I can tell you exactly how much better things are going to be in the world. We’re not like that. We are in tectonic shift time. I grew up in the Bay Area. It’s earthquake country.
You’re in Northern California. There was every chance that any day was going to be very different than the day before because of a tectonic shift. That’s where we are. So, just be prepared. We learned how to engineer, to live and lead productive, healthy lives inside of earthquake country. We can learn how to live in this time of amazing AI breakthroughs, but it means that we need to think about how these technologies can be affecting us and be prepared. You need to have your go box. So, when this technology suddenly is available to you and your competition is driving on it, you need to be prepared. You need to be ready to go. That means you don’t think about these things afterwards. You think of them ahead of time.
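Kirk’s distinction between power (a rate) and energy (a total) is easy to make concrete with arithmetic. The sketch below uses made-up numbers, not figures from the conversation, just to show why a low-power system is not automatically the low-energy one, and why PUE alone says nothing about the energy a task consumes:

```python
# Illustrative sketch only; all numbers are hypothetical.

def energy_joules(power_watts: float, seconds: float) -> float:
    """Energy = power integrated over time (constant power assumed)."""
    return power_watts * seconds

# Two hypothetical systems completing the same task:
# system A draws more power but finishes sooner.
a = energy_joules(power_watts=400.0, seconds=60.0)   # 24,000 J
b = energy_joules(power_watts=250.0, seconds=120.0)  # 30,000 J

# The higher-power system is the more energy-efficient one here.
assert a < b

# PUE (power usage effectiveness) = total facility power / IT power.
# It measures facility overhead, not energy-per-task.
pue = 1500.0 / 1200.0  # 1.25 for a hypothetical facility
```

The point of the exercise: optimizing the rate (power, PUE) and optimizing the total energy a task requires are different goals, which is exactly the retraining Kirk describes.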
Daniel Newman: Yeah, I think that’s the right mindset. I would say the US, for instance, has been underprepared for the amount of energy that would be required. Of course, the architectures we’re building are quite power hungry. I’m just playing on your energy and power descriptor comment. They are definitely not the same thing, but there is a very, very significant amount of interdependence. Then of course, architectures can demand and draw power differently. Of course, silicon can make a big difference there too. Lots going on. By the way, cooling can be a very important enabler of scale and data center energy efficiency. Of course, if you all out there have ever been in a large data center, the noise, the fans, the sounds, and the heat are impressive. Monica, cooling is something that has been its own innovation path for companies like Hewlett Packard Enterprise, companies that want to continue to scale and need to address cooling. Tell us a little bit about how you and HPE are thinking about cooling technologies and innovations that can impact energy efficiency and help companies be more sustainable.
Monica Batchelder: Yeah, I mean, building off what you said, we hear a lot about the potential of AI to transform other industries, but there’s also an opportunity to transform the underlying architecture and infrastructure itself. So, that infrastructure has been the same for decades. Historically, AI and compute ran just fine in air-cooled environments, but fewer and fewer GPUs can run on that today. So, HPE has adopted direct liquid cooling for its high performance computing systems. One of the benefits of doing that was really the energy efficiency. So, if I can just give you one example, one of the largest supercomputers ever built, and it may be the largest supercomputer at this time, is at the Lawrence Livermore National Lab. That’s a really efficient system that uses direct liquid cooling. That system has 27 times the performance of the previous system with 40% less energy consumption. Then of course, that supercomputer is also being applied by the national lab to research a range of the breakthrough technologies that you two were just talking about, clean energy technologies like fusion and all the big buzzwords and breakthroughs that we’re waiting for right now.
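Monica’s two figures, 27 times the performance and 40% less energy, can be folded into a single performance-per-energy ratio. The inputs below are the numbers as quoted; the combination is just arithmetic, not an HPE-published metric:

```python
# Combining the quoted figures into work-per-joule improvement.
perf_gain = 27.0     # new system does 27x the work
energy_ratio = 0.60  # ...while using 60% of the energy (40% less)

# Work per joule improves by (performance gain) / (energy ratio).
perf_per_joule_gain = perf_gain / energy_ratio
print(round(perf_per_joule_gain, 1))  # -> 45.0, i.e. ~45x more work per joule
```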
Daniel Newman: Yeah, I’m waiting for that quantum supremacy to entangle itself with the AI clusters, Kirk, so that we can have LLMs running on one one-millionth of the power or whatever it is. I’ve read some papers. This stuff is fascinating. By the way, for everybody, I’m having a little fun there. That was not precise, nor was it accurate. Something that HPE has been really ahead of the curve on has been the edge. The intelligent edge has been something that the company’s focused on for a long time. Some of the things we’re hearing more and more when it comes to sustainability, for instance, is running models locally on devices. Of course, there’s also data transport, because you look at what the edge can be, and it could be a car. It could be a building. It could be a smaller data center that talks to a bigger data center. It can be a handset. There are just lots of things. But you’ve been very focused on both the industrial edge and of course providing the infrastructure for network edges. How do you see this as an enabler? Because I would imagine that if we can stop the constant demand on the network transport moving back and forth from the cloud, it could be really significant. Kirk, are you calculating what’s going on here?
Kirk Bresniker: Absolutely. As a matter of fact, to geek out a little bit here, when we talk about data transport in the data center, the unit we use is picojoules, picojoules per bit. How many picojoules to get data from one side of the chip to the other, from one side of the data center to the other? We’re measuring tens of picojoules. When we talk about moving data over the wide area network, over that internet backbone, we don’t talk about picojoules. We talk about millijoules. For those of you who don’t remember your SI prefixes, what we’re talking about is 100 million times more energy to go from edge to cloud as when I’m inside the data center moving the data around. So, that first step is a doozy. When we think about everything we’ve been thinking about with AI right now, it’s all about training. It’s about, “How do I get the data, that confluence of information, energy and infrastructure, to create this model?” But to your point, Daniel, when does the enterprise get a return? You don’t get a return by training a model. You get a return on investment by utilizing the model, by inferencing over the model.
Now we talk about the energy consequence of data movement. Is it better for me to have a billion dumb cameras streaming back to a data center, or is it better for me to push a billion copies of one model out to each one of those cameras and only send back conclusions, only send back, “Aha, hey, did you notice, I think you should take care of this”? That’s that really complex set of calculations of the energy consequence of data movement, data transformation, and data storage. For us, that intelligent edge is hopefully exactly that. Either because of the physics, the economics, the law, or the environmental conditions, I would love to put the insight of an entire team of scientists onto the abyssal plain at the mid-ocean ridge where humans can’t go, or up in deep space, or in the hot zone of a viral outbreak.
Now we’re talking about this ability to host models at the edge, super low latency so they can make decisions in nanoseconds, super low energy so we can actually afford to project their insight and have them be in environmentally hazardous conditions. That’s what the intelligent edge promises us. When we move from this first period of investing in infrastructure to train to what we hope will be billions of people, hundreds of thousands of times per hour inferencing on their behalf, that again, we can structure the intelligent edge to partner with those data centers and those dense energy systems to net out that positive. So, we’ll have the societal outcomes that could only be achieved with AI and actually afford the energy to make it available to everyone.
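Kirk’s picojoules-versus-millijoules comparison, and the dumb-camera-versus-smart-camera trade-off, can be sketched as a back-of-the-envelope calculation. The per-bit energies below are the orders of magnitude he cites; the camera workload numbers are invented purely for illustration:

```python
# Back-of-the-envelope sketch; camera numbers are hypothetical.
PICOJOULE = 1e-12  # joules
MILLIJOULE = 1e-3  # joules

intra_dc_j_per_bit = 10 * PICOJOULE  # "tens of picojoules" inside a data center
wan_j_per_bit = 1 * MILLIJOULE       # "millijoules" over the WAN backbone

ratio = wan_j_per_bit / intra_dc_j_per_bit
print(f"{ratio:.0e}")  # -> 1e+08: ~100 million times more energy per bit

# Hypothetical camera: stream raw video to the cloud all day,
# versus running a model locally and sending back only conclusions.
stream_bits_per_day = 5e6 * 86_400       # 5 Mbit/s continuous stream
conclusion_bits_per_day = 1e3 * 1_000    # 1 kbit per event, 1,000 events/day

stream_j = stream_bits_per_day * wan_j_per_bit
edge_j = conclusion_bits_per_day * wan_j_per_bit
print(stream_j / edge_j)  # streaming moves ~432,000x more bits, hence WAN energy
```

Under these assumed numbers, the transport savings alone leave an enormous energy budget for inferencing at the edge, which is the net-positive structure Kirk describes.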
Daniel Newman: Yeah, that’s a great way to pull that all together. I also think one of the things that’s very important too is there are a lot of models that aren’t just language. I think as we’ve hyper-focused on language, as we advance into this agentic era, an autonomous era, a physical AI era, we heard and saw, and I know HPE is a very close partner to NVIDIA, the advancements they’re making in physical AI. When we start to train robots to be able to walk among us, the dexterity and the mobility and the interaction are very different in terms of how they need to be trained and function, and with what data, versus, say, scraping the open internet to do a language model or a derivative of a model. We’ve seen a bit of the commoditization of language models, and now it’s all about how do you fork them, RAG them, tune them, and of course implement them with your own data to give them a higher dexterity and value that’s based on not just the same data that everyone else has. We’ve got about a couple minutes left here, and I’d like to give you both a rapid-fire opportunity to answer a final question, and that’s: let’s go out into the future. Kirk, I’ll start with you and I’ll let Monica take us home. What are some of the developments in sustainable IT that you think should be on everyone’s radar?
Kirk Bresniker: Certainly, I think something you briefly touched on there: understanding how novel compute, whether it’s quantum, neuromorphic, or any of the advanced physics-based accelerators, is going to allow us to do what today we have to be doing in the data center. Maybe it’ll happen in the head of a drone, maybe it’ll happen at a deep edge IoT device, but it’s our ability to project our human thought processes, our human expertise into inhospitable and remote environments, and then again, take that reasoning capability and have it happen in a time that matters. Because if I can actually have that living digital twin, I can make fantastic decisions in the moment out in the world. For me, that’s the most encouraging thing. Systems that are not just working off of historical data can not only have instantaneous situational awareness, but can project forward, make the best decisions on our behalf, and then turn those decisions into action at scale. I think that’s the really exciting thing for me looking forward just a few years.
Monica Batchelder: Yeah, I think I’ll ground us back in reality, to how Kirk opened this, which is that we often get entranced by these big shiny breakthroughs that keep coming up and grabbing headlines, and there’s a lot of work being done today to drive that research and development, but there are also a lot of things that AI can do today to create new levels of efficiency. We’ve seen it accelerate battery technologies. We’ve seen it optimize AI itself. It can help with supply and demand and load shifting. So, I think we need to stop talking about the potential of AI as if it’s a far-off technology that only has hypothetical capabilities and start harnessing it, obviously where it is appropriate and where the costs meet the ROI, but recognizing that we’ve only just begun to unleash some of the capabilities we already have today.
Daniel Newman: Monica and Kirk, I mean, look, I appreciate that after more than four decades of algorithms being in existence, machine learning having a multi-decade history, and big data, which brought advanced analytics, which brought all kinds of practical AI that has long been available, I love that the headlines, media marketing, and breakthroughs finally bring those technologies… Kirk, I’m sure you really appreciate this little rant. … into the light so that people can appreciate things that we’ve actually been able to do for quite a long time. But in all seriousness, I think everyone on this webcast really knows that scale brings a lot of new challenges to any business. I think the role of leading sustainability, or being a Fellow at HPE, for both of you, is about the practicality of making sure that the commitments the company’s made to its shareholders and to its communities and to its markets are able to be upheld, while at the same time living your ethos of continuing to innovate and disrupt and bring innovation to the world.
So, I want to thank both of you so much for being part of this Six Five Media webcast. Thank you, HPE, for the partnership with Six Five and the community. I hope all of you out there enjoy this conversation, subscribe, be part of the Six Five community. But for this episode, for this webcast, I have to say goodbye. We’ll see you all later.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.
A 7x best-selling author, his most recent book being “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.