
Enabling Companies to Build AI with HP’s Advanced Computing Solutions – Six Five On The Road

On this episode of our Six Five On The Road series at HP Imagine 2024, HP’s Jim Nottingham, Senior Vice President and Division President, Advanced Compute Solutions, joins Daniel Newman and Patrick Moorhead to talk about empowering AI developers and data scientists through HP’s innovative computing solutions.

Their conversation dives into:

  • The strategic role of the Z by HP creation center in meeting the growing AI development demands.
  • Key advantages of Z by HP Boost for data scientists and AI developers in harnessing enterprise GPUs.
  • How Z by HP Boost promotes GPU resource efficiency and offers an alternative to costly cloud resources.
  • Enhancements introduced by the Z by HP Gen AI Lab to elevate AI creation and improve development impact.

Learn more at HP.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: Six Five On The Road is for information and entertainment purposes only. We may talk about publicly traded companies and their share prices, but please do not take this as investment advice. We are not investment advisors.

Transcript:

Patrick Moorhead: The Six Five is On the Road here in the HP Garage in Palo Alto. This is where Silicon Valley started. You hear about garage innovation starting in the HP Garage, and it is also HP Imagine time. Dan, we’ve seen announcements on PCs, we’ve seen announcements on workstations, consumer, commercial, print, services, and a lot of AI wrapping it up together.

Daniel Newman: Yeah, we knew that AI would be a big trend. You can’t really attend a tech conference, really any conference these days, and not talk a little bit about that, but I just have to say there was something inspirational just walking up to this place. I mean, we know it’s a historic place here in Palo Alto, and we’re sitting in the middle of greatness right here. We’re sitting in the middle of a company with almost a hundred years of innovation and disruption, and here we are again. Another moment of disruption is right in front of us.

Patrick Moorhead: Yeah, one of the areas that I think gets discussed a lot, but to me it’s never enough, is the role of AI developers, whether that’s front-end developers, back-end AI developers, and the tools and the services they use to crank out literally this amazing stuff. None of this would happen without developers, and there were some announcements here today. In fact, enhanced solutions, upgraded with even more capabilities from APC here, and it is our pleasure to welcome Jim back to The Six Five. Jim, you are a busy guy. Big announcements at APC, big announcements at Imagine. Thanks for coming on the show.

Jim Nottingham: Thanks for having me. I love talking with you guys.

Daniel Newman: You’re a well-dressed guy in a blue blazer. Love it. Are we too well-dressed for tech these days?

Patrick Moorhead: I don’t know.

Daniel Newman: I don’t want to be underdressed ever though. No worries. Well, Jim, you heard the kind of preamble. I mean, look, AI is taking over the world, and I know in your world, in the workstation space with Z by HP, you’ve got the Creation Center, and you’re really pushing and enabling the developer community. Talk about how Z by HP is enabling that particular group. Talk about how you’re thinking about making things easier for AI development.

Jim Nottingham: Yeah, great question. So we’ve actually been working with data scientists for several years now. Of course, it’s been accelerating and growing as the capabilities come online. And one thing we’ve heard fairly consistently, in the early days and even now, directly from customers and through a lot of the studies we’ve commissioned with some of our partners, is that whether they’re general practitioners or the best-of-the-best specialists, they’re not entirely happy with the tools that they have for creating AI models.

And it goes beyond just generative AI. All the model development, machine learning, there’s lots of opportunities to address pain points and make it easier. At the end of the day, as much as we love our workstations and we do, and we think our customers do too, it’s more than just delivering a product with the best specs. They really need solutions that are going to help streamline their workflows and make it easier for them to deliver models that they trust.

Patrick Moorhead: For sure. And I was just talking about the developers. I’m glad you added in data scientists because at least what we’re seeing in our research, one of the biggest challenges, particularly for enterprises to get generative AI going is the data itself. And when you have the data scientist matched with the front-end developer and the back-end developer, this is where all this magic is happening.

One of the challenges that is loud and clear, it’s funny, we’ve heard in the cloud there’s just not enough GPUs in the cloud or they’re hard to share, but guess what? If you are a hardcore developer, you need to have some serious GPU power on premises on your workstation, and at least up to now, until Z by HP Boost, you weren’t able to share those GPU resources. Tell us about Boost and tell us a little bit about the real problems you’re solving.

Jim Nottingham: Yeah, great question. So this came from just working with developers. Number one, they’re buying workstations. They start a lot of their model development, even the stuff that they’re going to scale to the super cloud, they start a lot of the development on a workstation. They do it because they get the one-on-one, they get the good GPUs, whichever flavor they want to use for what they’re doing.

And what we learned, really, in working with people that are doing stuff in the cloud was the wait times for GPUs when you want to use multiple GPUs, the wait times and the cost. We didn’t see exactly that, because with their workstations, they just used their workstation fleets. But what we realized was, “Hey, when the majority of the team is working, it’s not always at the same time, 24/7.” There’s a lot of idle GPUs sitting there.

And it was less about, “Hey, we want to make these more available.” It was more like, “Hey, we could really streamline your workflow, accelerate your workflow, give you many more iterations if you could just take advantage of those.” And the idea at the time was, “Oh, wouldn’t it be great if you could just use all those GPUs sitting there?” It was like, “Yeah, you can’t do that.” We found a way that you can. And so with Z by HP Boost, it’s really a way to significantly accelerate the number of iterations they can get out of their workstation fleets.

Patrick Moorhead: So just a clarifying question here: I might have a developer in the United States and a developer in the UK, and when the GPUs aren’t being used in the United States, people from other places can use them?

Jim Nottingham: It will work that way. In fact, it’ll work remotely. I mean, it really depends on how you want to use it. The ideal use case is really where you have fleets of workstations. And those fleets can be typically, that’s behind the firewall. It doesn’t have to be, but it can be, and it can be in different locations behind the firewall. It just makes it very easy to basically share and utilize those GPUs that are sitting idle.

But so yes, you can get the 24/7 geographic advantage that we get with remote computing, for example. This is different than remote computing. This is more about, “Hey, I’ve got a fleet of workstations right here, and if I’ve got GPUs available, I can significantly accelerate my AI workloads just by using those that are there.”

Daniel Newman: But for a typical enterprise that’s not doing massive LLM training, but doing some fundamental day-to-day AI, this becomes a little on-prem or private cluster. Am I equating it right?

Jim Nottingham: It does. It becomes, yeah, it’s basically decentralizing what you get in the data center.

Daniel Newman: But I mean, there’s some efficiencies, right?

Jim Nottingham: Absolutely.

Daniel Newman: So for a lot of people that are spending big dollars doing cloud, you have these GPUs and these resources available and you can turn them on and use them for some enterprise training and inference needs.

Jim Nottingham: Yeah, absolutely. And we’re looking at supporting the ability to, “Yeah, I want to share the GPUs.” And for those companies that have workstations and they have access to the hyperscalers in the future, that’s something we also want to support is you can borrow from here or there.

Patrick Moorhead: It’s funny, one of those solutions that you knew had to come, and I think it’s pretty cool that you brought that to the table. So essentially, if I have a certain site here in the United States, even in one building, and, sure, you’re doing some fine-tuning, you’re working on a small language model, but you just need, literally, you need a boost, okay, you’re able to tap into this GPU, this GPU, this GPU, and this GPU in an orchestrated manner, not a free-for-all. I mean, there’s value in the cloud as we see people finding better ways to do it, and there’s absolute value on premises with the developer as well. Super exciting. There’s also the Z by HP Gen AI Lab as well. Can you talk about how those incremental features help improve day-to-day productivity, efficiency, or the quality of insight in the workflow?

Jim Nottingham: Excellent question. Yeah. So again, working with customers, and this has been true for a while, it’s certainly true today. And again, not just from talking with customers, but in the broad surveys we’ve done like top of mind for every customer is the concern about, “Can I trust the output of my model and then the output of my application?”

And so we recognized this was a big deal, and we started looking at solutions. We ended up partnering with Galileo to get a solution that works great with AI Studio in the AI creation center, and it lets customers detect and correct for hallucinations, bias, and drift, to really give them confidence in the models that they’re using. And that’s an exciting area that we think has a really long runway, because I think trust is going to continue to grow in importance. As the complexity grows, and as the sophistication, if you will, of the tasks that it can do grows, it’s going to be more and more important.

Daniel Newman: Yeah, it’s really interesting. As we sort of wrap up, I’d love to get a little bit of the futurist’s view from you of how does this play out? Today, we’ve talked a little bit about how you’re making AI more accessible to developers, to data scientists. You’re making GPUs more available. You’re maybe even helping enterprises augment, maybe save some costs when they don’t need to send something up to the cloud. And of course now you’re using gen AI to enable and empower all of these people to be even more efficient. So what does the business do next, Jim?

Jim Nottingham: Well, we have-

Daniel Newman: Spill.

Patrick Moorhead: Or just give us-

Daniel Newman: Or just whatever, whatever’s interesting.

Jim Nottingham: Directionally, for sure. I would say that along the lines of what you see with Boost, we see big opportunities to really make it easier for our customers to take advantage of the hardware that they have. More importantly, we see big opportunities to enable them with their workstations to significantly increase the performance, if you will, by increasing the number of iterations you can get out of your workstation fleets.

And I would say, looking at the future through the lens of our innovation pipeline, the future looks bright, that I can tell you. But I think you’ll see more and more that the world is converging on this hybrid compute model where you’re going to take advantage of all the benefits of the cloud, and there are many: scalability, flexibility. There are tons of advantages of local, and there are tons of advantages everywhere in between. The more you can make your solutions streamlined and seamless across that hybrid compute, to take advantage of local or cloud, the more value you can unlock for customers. And that’s our mission.

Patrick Moorhead: Yeah, I mean, listen, we’re seeing that today with the hybrid cloud. Hybrid AI is coming along with that. We’re seeing full cloud stacks that take advantage of this. Of course, it would make sense that this would happen on the client as well, right? Not just the local server, but the local workstation as well. Makes sense.

Daniel Newman: Well, Jim, I want to thank you so much for spending a little time with Patrick and I here. It’s HP Imagine week and lots of news, lots of exciting innovations going on, and of course we’re sitting where the innovation began, but certainly not where it ends. Let’s have you back soon.

Jim Nottingham: Yeah, thank you. I really appreciate you guys having me, and it’s actually extra cool being in this garage.

Patrick Moorhead: You’re working on cool stuff, very innovative.

Jim Nottingham: Great, great backdrop.

Daniel Newman: Keep it up and we’ll see you soon, Jim. And all of you out there, please keep watching all of The Six Five coverage here at HP Imagine. We appreciate you tuning in. Check out the other videos, check out all of the coverage from The Six Five across all the technology areas. Pat and I like to riff and talk about all things technology, but for this episode, it’s time to say goodbye. Appreciate you tuning in. We’ll see you next time.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
