Democratizing AI Infrastructure – Futurum Tech Webcast

On this episode of the Futurum Tech Webcast, we are joined by Dell Technologies’ Seamus Jones and Scalers AI’s Steen Graham. Host David Nicholson, Chief Research Officer at The Futurum Group, delves into a conversation on how these leaders are leveraging their expertise and organizational capabilities to democratize AI infrastructure.

Their discussion covers:

  • The current state and challenges of AI infrastructure
  • Strategies for making AI accessible to a broader audience
  • Innovative solutions from Dell Technologies and Scalers AI to support AI democratization
  • The impact of democratized AI on various industries
  • Future trends in AI infrastructure and democratization

Learn more at Dell Technologies and Scalers AI.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Transcript:

Dave Nicholson: I’m Dave Nicholson, Chief Research Officer at The Futurum Group, and I’m joined by two distinguished individuals today, Mr. Seamus Jones, Director of Tech Marketing, Dell.

Seamus Jones: Afternoon.

Dave Nicholson: This is where you live?

Seamus Jones: Yep, this is our home. Welcome.

Dave Nicholson: And CEO of Scalers AI, Steen Graham. Good to see you, Steen.

Steen Graham: Great to see you, Dave, as well as always.

Dave Nicholson: You’ve put together some reference implementations, and we’re going to talk about what that means specifically with Scalers AI. But I want to start out with you, Seamus. Why did you engage Scalers AI to do what we did here?

Seamus Jones: Absolutely.

Dave Nicholson: We’re going to get into the details, but what was the thinking behind this?

Seamus Jones: We started in on this journey, must’ve been two years ago, something like that. The thought process for our team was that, look, we have the best-of-breed infrastructure and framework to be able to build out these machine learning and artificial intelligence frameworks. We really wanted to find a partner who could understand the fast-moving software stack and make sure that we could get the most out of the hardware and infrastructure that we were bringing to market. The thought process behind this whole framework that we’re putting together is the fact that, look, customers are being challenged every day trying to implement it: “How do I make AI real? How do I make it real for my business? How do I make it a repeatable process that can be implemented and make it a tangible deployment?”

And we wanted to try and establish not just the fact that, look, you can do it and it’s best of breed, but the fact that it’s replicable, I’d say, in a customer’s environment by using GitHub code that we’ve posted out on the Dell GitHub and using white papers to try and articulate exactly what’s happening in the space. And really, we’ve done a lot of things that have been first in the market and best of breed.

Dave Nicholson: Tell us about Scalers AI. First of all, just at a foundational level, what is Scalers AI all about?

Steen Graham: Yeah, Scalers AI was a company that we founded to help fast-track industry transformation with AI. And as Seamus alluded to, one of the things that Dell has is incredible infrastructure across a diverse set of requirements. And Dell’s extremely motivated to move fast with their infrastructure and to lead the market in this innovation. And that’s why we were thrilled to work with Dell, and pairing that leading-edge infrastructure with the latest innovations in enterprise AI software is something we were really thrilled about. So driving industry transformation’s our DNA, and doing that with full stack solutions is something we’re really proud of, so we can make sure that they’re repeatable for enterprises.

Dave Nicholson: We’re talking about reference implementations across a variety of industry verticals, but we started out looking at inference and training in kind of a little bit of a different way. What’s the main thing, if you had to sum up all of this together, that we were trying to get across here?

Steen Graham: Yeah, I think if you look at it, Dell’s leadership team said they really wanted to deploy AI solutions with companies’ proprietary information at the edge and give them choice. So the first thing we wanted to do is we wanted to show enterprises how they could take their proprietary data and be able to fine-tune or train a model with their proprietary data securely on-prem with leading PowerEdge infrastructure. Building that fine-tuning stack across multiple GPU vendors on PowerEdge hardware, pre-validating that solution, and then making that solution code available for anybody that wants to deploy PowerEdge on-prem across GPU infrastructure. It’s really, truly a pretty incredible stack that’s available on the Dell GitHub today.

And then on the inferencing side, what enterprises want to do today is they want to be able to deploy inference affordably. There’s a lot of misperception that you need the best GPUs in the market to do just that. And I think there’s a lot of workloads where you do really need the best GPU in the market, but most enterprises aren’t running 24/7, simultaneous hundreds of thousands of users, right? And they’ve got cycles of downtime. And how do they take advantage of their existing PowerEdge infrastructure and new PowerEdge infrastructure they’re going to get as well and be able to deploy inferencing across CPU that they already have deployments of and GPU.

Seamus Jones: We’re looking at the market space and it’s changing. As more vendors enter into the market, we’re going to partner with them, we’re going to bring them into our portfolio of products. We validate them, certify them, and then bring them out. The biggest thing is that we are meeting customers, wherever they are. So if they have the largest of AI implementations, we can accommodate that right down to the mom and pop shops that have an edge implementation in a retail environment to determine inventory control or using computer vision. We can accommodate that as well.

So you are able to take advantage of all the experience that we’ve put together on those large deployments and then apply that experience, knowledge, and expertise to customers’ smaller ones. We’ve done that in a framework called a Dell Validated Design. I think having limited supply of some of these component parts has caused customers to make unnatural choices in what they want to deploy within their environment. But having the choice of supply is going to mean that we can offer things that no other vendor in the marketplace can, right?

Steen Graham: There’s an emerging renaissance in getting hardware right that AI’s driven. And then there’s an emerging need for privacy and security of your proprietary data. And I think, at least at Scalers AI, we’re extremely well-positioned to deliver customers a portfolio of choice on their own on-premises infrastructure. There’s a tremendous amount of opportunity with the Dell Validated Designs, the Dell Reference Designs, and the Dell GitHub repo with solution code. With the work we do, there are affordable off-the-shelf solutions that customers can drive their business transformation with.

Dave Nicholson: What would you say the most common misconceptions are when people think about AI? From your perspective?

Seamus Jones: While you can deploy AI on standard systems that customers have today, having those high-end GPUs does make a difference. And having that performance capability does make a difference in customers’ estates. The knock-on impact, though, that a lot of customers don’t take into consideration is their power thresholds within their data center and infrastructure and the cooling requirements needed, right? Those are things that we have experience and expertise in and can help customers navigate through.

Steen Graham: This is why hardware’s at the center of innovation again, and there’s this massive renaissance in hardware.

Dave Nicholson: I’m so delighted by the fact that hardware is cool again. Steen, Seamus, thanks for joining me here from the Dell Experience Lounge. Doesn’t look like we’re in… Well, kind of like lounge bar stool-ish. Thanks for joining us.

Author Information

David Nicholson is Chief Research Officer at The Futurum Group, a host and contributor for Six Five Media, and an Instructor and Success Coach at Wharton’s CTO and Digital Transformation academies, out of the University of Pennsylvania’s Wharton School of Business’s Aresty Institute for Executive Education.

David interprets the world of Information Technology from the perspective of a Chief Technology Officer mindset, answering the question, “How is the latest technology best leveraged in service of an organization’s mission?” This is the subject of much of his advisory work with clients, as well as his academic focus.

Prior to joining The Futurum Group, David held technical leadership positions at EMC, Oracle, and Dell. He is also the founder of DNA Consulting, providing actionable insights to a wide variety of clients seeking to better understand the intersection of technology and business.
