
How Lamini Optimized LLM Finetuning System on AMD GPUs


The News: Lamini CTO Greg Diamos popped open the hood and gave a detailed look at the engine behind its AMD graphics processing unit (GPU)-driven LLM Superstation in a blog post. You can read the blog on the Lamini website.


Analyst Take: Lamini raised eyebrows last month when the large language model (LLM) finetuning specialist revealed it built its LLM Superstation on AMD Instinct MI210 and MI250 accelerators rather than wait for NVIDIA H100s. The H100 GPUs have a 52-week lead time. LLM Superstation is available now, both in the cloud and on-premises.

At the time of announcement, Diamos said AMD’s ROCm GPU programming software has achieved parity with NVIDIA CUDA for LLMs. In his follow-up blog, Diamos disclosed how Lamini built its optimized finetuning system on AMD Instinct GPUs, and the technical hurdles it had to overcome.

Lamini started by running a single MI100 system last December before AMD donated two MI210 servers. The pre-seed startup set them up in a janitor’s closet with ventilation and sound insulation.

It eventually grew into a data center at CoreSite, with a typical server configuration of four MI250 GPUs. That provides 512 GB of high-bandwidth memory (HBM) per server, enough to fit a ~200 billion parameter LLM in bfloat16. Lamini paired the GPU servers with about 400 TB of shared NFS storage and a high-performance network between them. Upcoming AMD Instinct MI300X GPUs with 192 GB of HBM will allow further scaling.
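A quick back-of-the-envelope check of that sizing claim (weights only; a real finetuning run would also need room for optimizer state and activations, which this sketch ignores):

```python
# Rough check of the sizing claim: can 512 GB of HBM hold a
# ~200B-parameter model in bfloat16?

BYTES_PER_PARAM_BF16 = 2        # bfloat16 = 16 bits = 2 bytes
params = 200e9                  # ~200 billion parameters

weights_gb = params * BYTES_PER_PARAM_BF16 / 1e9
print(f"Weights alone: {weights_gb:.0f} GB")    # 400 GB

# A server with four MI250s, each carrying 128 GB of HBM2e:
hbm_gb = 4 * 128
print(f"Available HBM: {hbm_gb} GB")            # 512 GB
print(f"Fits: {weights_gb <= hbm_gb}")          # True
```

The weights alone consume 400 GB, which is why 512 GB per server is the floor for this model class and why the MI300X's 192 GB per GPU meaningfully raises the ceiling.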

Now Lamini is in phase two of its finetuning system buildout, which involves integrating 128 AMD MI200 GPUs into the platform. This phase requires using layers of specialized software inside ROCm as well as building new layers. The stack runs from the GPU hardware up through the amdgpu Linux kernel driver to optimized libraries. The Lamini SDK sits on top of these layers, adding LLM use cases such as chat, classification, and autocomplete. The SDK provides Python and TypeScript application programming interface (API) clients for loading, querying, and finetuning LLMs.
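As a rough illustration of the client shape that top SDK layer describes, here is a minimal sketch of a Python API client for querying and finetuning a hosted LLM. The class name, endpoint paths, and payload fields are hypothetical illustrations, not Lamini's actual API:

```python
# Hypothetical sketch of a thin Python client for a hosted-LLM service.
# Names, endpoints, and payloads are illustrative assumptions only.
import json
from urllib import request


class FinetuningClient:
    """Illustrative API client; endpoint paths and payloads are assumptions."""

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _post(self, path: str, payload: dict) -> dict:
        # Send an authenticated JSON POST and decode the JSON response.
        req = request.Request(
            f"{self.base_url}{path}",
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )
        with request.urlopen(req) as resp:
            return json.load(resp)

    def query(self, model: str, prompt: str) -> dict:
        # Run inference against a (base or finetuned) model.
        return self._post("/v1/completions", {"model": model, "prompt": prompt})

    def finetune(self, base_model: str, examples: list) -> dict:
        # Submit input/output example pairs to start a finetuning job.
        return self._post("/v1/finetune",
                          {"base_model": base_model, "examples": examples})
```

Keeping the client this thin is what lets the same use cases (chat, classification, autocomplete) ship in both Python and TypeScript against one backend.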

Diamos said Lamini added new optimizations that accelerate LLMs and take advantage of unique capabilities of AMD’s MI platform. These optimizations let a single server host a 200 billion parameter model, store 10,000 finetuned language models, handle 12,800 simultaneous requests, and process over 3.5 million queries per day.
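Those throughput figures can be sanity-checked with simple arithmetic (a sketch; the 12,800-request figure is presumably peak concurrency capacity rather than sustained load):

```python
# Back-of-the-envelope on the per-node throughput figures quoted above.
queries_per_day = 3.5e6
seconds_per_day = 24 * 60 * 60          # 86,400

avg_qps = queries_per_day / seconds_per_day
print(f"Sustained average: {avg_qps:.1f} queries/sec")   # ~40.5

# If all 12,800 concurrent slots were full at that average rate,
# Little's law (L = lambda * W) would imply an average latency of:
concurrent = 12_800
implied_latency_s = concurrent / avg_qps
print(f"Implied latency at full concurrency: {implied_latency_s:.0f} s")
```

The roughly five-minute implied latency at full concurrency suggests the concurrency ceiling is headroom for bursty load, far above what the average daily query volume requires.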

The way Diamos sees it, ROCm has “enormous potential” to go beyond CUDA for LLM finetuning, and Lamini and ROCm can efficiently finetune the largest LLMs, such as Meta AI’s Llama 2. If he is correct, the combination can significantly change the LLM training market.

As The Futurum Group colleague Mark Beccue wrote in September: “Despite some limitations, the AMD-Lamini Superstation is a viable option for enterprises to consider for deploying LLMs. Savvy enterprises that cannot wait a calendar year to run AI workloads will be test-driving the system.” It will not be easy for AMD or any other competitor to dent NVIDIA’s dominance in the GPU market, but AMD can at least give data scientists and IT teams a viable option.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from Futurum Group:

AMD and Hugging Face Team Up to Democratize AI Compute – Shrewd Alliance Could Lead to AI Compute Competition, Lower AI Costs

AMD Revenue Hits $5.4 Billion in Q2, Down 18% YoY, But Beats Estimates

NVIDIA, GlobalFoundries, Zoom, Salesforce, Groq, Qualcomm

Author Information

Dave focuses on the rapidly evolving integrated infrastructure and cloud storage markets.
