NVIDIA AI Workbench Could Simplify Generative AI Builds

The News: On August 8 at SIGGRAPH, NVIDIA announced NVIDIA AI Workbench, a toolkit designed to streamline the generative AI application building process for developers. The new toolkit, paired with NVIDIA AI Enterprise 4.0 software, forms a simplified path for generative AI builds.

Here are the pertinent details:

  • Accessed through a simplified interface running on a local system, NVIDIA AI Workbench enables developers to customize AI models from repositories like Hugging Face, GitHub, or NVIDIA’s NGC using custom data. The models can then be shared across multiple platforms.
  • NVIDIA AI Workbench tackles a significant issue for enterprises working on AI projects. Thousands of pretrained models are available, but customizing them with open-source tools can require hunting through multiple online repositories for the right framework, tools, and containers. AI Workbench allows developers to pull together enterprise-grade models, frameworks, SDKs, and libraries into a unified developer toolkit.
  • Developers with Windows or Linux-based NVIDIA RTX PCs or workstations can operate AI Workbench locally.
  • NVIDIA AI Enterprise 4.0, the latest version of NVIDIA AI Enterprise software, lets users build and run NVIDIA AI-enabled solutions across the cloud, data center, and edge. Version 4.0 now supports NVIDIA NeMo (end-to-end support for building, customizing, and deploying large language model [LLM] applications), Triton Management Service (automates production deployments), and more.

Read the full press release about NVIDIA AI Workbench on the NVIDIA website.

Analyst Take: With its firm leadership in GPU compute, NVIDIA is positioned in an enviable spot within the AI market ecosystem. But the company is always looking for ways to improve upon its success, and a key strategy for doing so is helping accelerate the AI market. NVIDIA AI Workbench and AI Enterprise 4.0 are just the latest initiatives NVIDIA has launched in that regard. How impactful will they be? Here are the key takeaways related to NVIDIA's strategic moves in this space:

NVIDIA Has Identified a Generative AI Market Barrier

It is important to remember how new and explosive the generative AI movement is. To review briefly: before October 2022, some enterprises were working to build proprietary AI applications and systems, though doing so required specific expertise in data science and data engineering, both scarce resources. Generative AI platforms introduced a democratized interface: AI models can now simply be told what to do and do not require AI expertise to guide them (in theory; prompt engineering is an emerging discipline, but it is not traditional data science). This capability quickly expanded the market of enterprises that could work with AI, since data scientists and data engineers were no longer required to interface with the models. At the same time, the number of models and other generative AI development framework tools exploded, available from multiple sources. As NVIDIA points out with NVIDIA AI Workbench, the models themselves are only part of building generative AI applications; developers also need frameworks, SDKs, and libraries, and in the open-source world those elements are scattered. The combination of new personnel and new, abundant, scattered tooling means generative AI projects can move more slowly than needed. NVIDIA AI Workbench addresses this.

Help for Generative AI Developers With Caveats, Part 1

There may be limitations to where NVIDIA AI Workbench will operate, both in the cloud and locally. The announcement speaks specifically to local availability for customers who have Windows or Linux-based NVIDIA RTX PCs or workstations. So, how widespread will the solution be?

Help for Generative AI Developers With Caveats, Part 2

There may be limitations to where NVIDIA AI Enterprise 4.0 software runs. Per the announcement: "NVIDIA AI Enterprise software — which lets users build and run NVIDIA AI-enabled solutions across the cloud, data center and edge — is certified to run on mainstream NVIDIA-Certified Systems, NVIDIA DGX systems, all major cloud platforms, and newly announced NVIDIA RTX workstations." It is unclear where it will not run and what the core system requirements are. This is not an overwhelming issue, just a question.

NVIDIA Is Hedging Bets to Supply Generative AI Compute

Perhaps the most intriguing issue is this: are these initiatives an NVIDIA strategy to scale its cloud AI compute? Many industry watchers are concerned that the compute workloads required for generative AI put pressure on the physical number of GPUs the market can produce. One way to address this supply-and-demand issue is for NVIDIA to leverage its power as a cloud compute option. In theory, cloud services might carry higher margins than hardware sales. Either way, it is a deft diversification strategy.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

NVIDIA & Snowflake

NVIDIA Q1 Earnings

Google, NVIDIA, Qualcomm Spar on AI Domination

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.
