How Lamini Optimized LLM Finetuning System on AMD GPUs

The News: Lamini CTO Greg Diamos popped open the hood and gave a detailed look at the engine behind the company’s AMD graphics processing unit (GPU)-driven LLM Superstation in a blog post. You can read the blog on the Lamini website.

Analyst Take: Lamini raised eyebrows last month when the large language model (LLM) finetuning specialist revealed it had built its LLM Superstation on AMD Instinct MI210 and MI250 accelerators rather than wait for NVIDIA H100 GPUs, which carry a 52-week lead time. The LLM Superstation is available now, both in the cloud and on-premises.

At the time of the announcement, Diamos said AMD’s ROCm GPU programming software had achieved parity with NVIDIA CUDA for LLMs. In his follow-up blog post, Diamos detailed how Lamini built its optimized finetuning system on AMD Instinct GPUs and the technical hurdles it had to overcome.
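
As an illustration of what that parity claim looks like in practice, PyTorch’s ROCm builds expose AMD GPUs through the same torch.cuda interface that CUDA code targets, so existing training scripts generally run unchanged. The snippet below is a minimal sketch of that check; it assumes a ROCm build of PyTorch is installed and is not drawn from Lamini’s codebase.

```python
import torch

# On a ROCm build of PyTorch, AMD Instinct GPUs are exposed through the
# familiar torch.cuda namespace, so CUDA-targeting code runs unmodified.
print("GPU available:", torch.cuda.is_available())
print("HIP runtime version:", torch.version.hip)  # None on CUDA-only builds

if torch.cuda.is_available():
    device = torch.device("cuda")  # maps to the AMD GPU under ROCm
    x = torch.randn(4096, 4096, dtype=torch.bfloat16, device=device)
    y = x @ x.T  # matrix multiply executes on the Instinct accelerator
    print("Result device:", y.device, "dtype:", y.dtype)
```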

Lamini started by running a single MI100 system last December before AMD donated two MI210 servers. The pre-seed startup set them up in a janitor’s closet outfitted with ventilation and sound insulation.

The setup eventually grew into a data center deployment at CoreSite, with a typical configuration of four MI250 GPUs per server. Each such server provides 512 GB of high-bandwidth memory (HBM), enough to fit a ~200 billion parameter LLM in bfloat16. Lamini also used shared NFS servers with about 400 TB of storage and a high-performance network between the GPU servers. Upcoming AMD Instinct MI300X GPUs, with 192 GB of HBM each, will allow further scaling.
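
A quick back-of-envelope check, sketched below in Python, shows why that configuration can hold a model of that size: bfloat16 stores each parameter in 2 bytes, so roughly 200 billion parameters need about 400 GB just for weights, which fits within 512 GB of HBM. The figures are illustrative only and ignore activations, optimizer state, and KV caches.

```python
# Back-of-envelope check: does a bfloat16 model's weight footprint fit in HBM?
# Illustrative arithmetic only; ignores activations, optimizer state, KV caches.

BYTES_PER_BF16_PARAM = 2  # bfloat16 = 16 bits = 2 bytes per parameter

def weight_memory_gb(num_params: float) -> float:
    """Approximate weight footprint in gigabytes for a bfloat16 model."""
    return num_params * BYTES_PER_BF16_PARAM / 1e9

hbm_per_server_gb = 4 * 128  # four MI250 GPUs at 128 GB of HBM2e each

for params in (70e9, 200e9):
    needed_gb = weight_memory_gb(params)
    print(f"{params / 1e9:.0f}B params -> ~{needed_gb:.0f} GB of weights; "
          f"fits in {hbm_per_server_gb} GB HBM: {needed_gb < hbm_per_server_gb}")
```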

Now Lamini is in phase two of building out its finetuning system, which involves integrating 128 AMD MI200-series GPUs into the platform. This phase requires using layers of specialized software inside ROCm as well as building new layers. The stack spans the GPU hardware, the amdgpu Linux kernel driver, and optimized libraries. The Lamini SDK sits on top of these layers, adding LLM use cases such as chat, classification, and autocomplete. The SDK provides Python and TypeScript application programming interface (API) clients for loading, querying, and finetuning LLMs.
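
To make the SDK layer concrete, the sketch below shows the general shape of a Python client that queries a hosted model and submits finetuning examples over HTTP. The class name, endpoint paths, and field names are hypothetical placeholders for illustration; they are not Lamini’s actual API.

```python
import requests

class FinetuningClient:
    """Hypothetical REST-style client. Endpoints and fields are illustrative,
    not the real Lamini SDK interface."""

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_key}"

    def query(self, model: str, prompt: str) -> str:
        # Send a single inference request against a hosted model.
        resp = self.session.post(
            f"{self.base_url}/v1/completions",
            json={"model": model, "prompt": prompt},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["output"]

    def finetune(self, base_model: str, examples: list[dict]) -> str:
        # Submit prompt/response pairs and return a job ID to poll later.
        resp = self.session.post(
            f"{self.base_url}/v1/finetune",
            json={"base_model": base_model, "examples": examples},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["job_id"]

# Example usage (hypothetical host and model names):
# client = FinetuningClient("https://llm-superstation.example", api_key="...")
# client.query("llama-2-7b", "Classify the sentiment: 'great product!'")
```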

Diamos said Lamini added new optimizations that accelerate LLMs and take advantage of unique capabilities of AMD’s MI platform. These optimizations enable a single server to host 200 billion parameter models, serve 10,000 finetuned language models, handle 12,800 simultaneous requests, and process more than 3.5 million queries per day.
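
For a sense of scale, the arithmetic below converts the cited daily figure into a sustained rate; it is simply a calculation on the numbers above, not an independent measurement.

```python
# Convert the cited daily throughput into a sustained per-second rate.
queries_per_day = 3_500_000
seconds_per_day = 24 * 60 * 60  # 86,400 seconds

sustained_qps = queries_per_day / seconds_per_day
print(f"~{sustained_qps:.1f} queries/second sustained over a full day")  # ~40.5
```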

The way Diamos sees it, ROCm has “enormous potential” to go beyond CUDA for LLM finetuning, and the Lamini-ROCm combination can efficiently finetune the largest LLMs, such as Meta AI’s Llama 2. If he is correct, that combination could significantly change the LLM training market.

As The Futurum Group colleague Mark Beccue wrote in September: “Despite some limitations, the AMD-Lamini Superstation is a viable option for enterprises to consider for deploying LLMs. Savvy enterprises that cannot wait a calendar year to run AI workloads will be test-driving the system.” It will not be easy for AMD or any other competitor to dent NVIDIA’s dominance in the GPU market, but AMD can at least give data scientists and IT teams a viable option.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from Futurum Group:

AMD and Hugging Face Team Up to Democratize AI Compute – Shrewd Alliance Could Lead to AI Compute Competition, Lower AI Costs

AMD Revenue Hits $5.4 Billion in Q2, Down 18% YoY, But Beats Estimates

NVIDIA, GlobalFoundries, Zoom, Salesforce, Groq, Qualcomm

Author Information

Dave’s focus within The Futurum Group is concentrated in the rapidly evolving integrated infrastructure and cloud storage markets. Before joining the Evaluator Group, Dave spent 25 years as a technology journalist and covered enterprise storage for more than 15 years. He most recently worked for 13 years at TechTarget as Editorial Director and Executive News Editor for storage, data protection and converged infrastructure. In 2020, Dave won an American Society of Business Professional Editors (ASBPE) national award for column writing.

His previous jobs covering technology include news editor at Byte and Switch, managing editor of EdTech Magazine, and features and new products editor at Windows Magazine. Before turning to technology, he was an editor and sports reporter for United Press International in New York for 12 years. A New Jersey native, Dave currently lives in northern Virginia.

Dave holds a Bachelor of Arts in Communication and Journalism from William Paterson University.
