How Lamini Optimized LLM Finetuning System on AMD GPUs

The News: Lamini CTO Greg Diamos popped open the hood and gave a detailed look at the engine behind its AMD graphics processing unit (GPU)-driven LLM Superstation in a blog post. You can read the blog on the Lamini website.

Analyst Take: Lamini raised eyebrows last month when the large language model (LLM) finetuning specialist revealed it built its LLM Superstation on AMD Instinct MI210 and MI250 accelerators rather than wait for NVIDIA H100 GPUs, which carry a 52-week lead time. The LLM Superstation is available now, both in the cloud and on-premises.

At the time of the announcement, Diamos said AMD’s ROCm GPU programming software had achieved parity with NVIDIA CUDA for LLMs. In his follow-up blog, Diamos disclosed how Lamini built its optimized finetuning system on AMD Instinct GPUs and the technical hurdles it had to overcome.

Lamini started by running a single MI100 system last December before AMD donated two MI210 servers. The pre-seed startup set them up in a janitor’s closet with ventilation and sound insulation.

The setup eventually grew into a data center deployment at CoreSite, with a typical server configuration of 4 x MI250 GPUs. That provides 512 GB of high-bandwidth memory (HBM), enough to fit a ~200 billion parameter LLM in bfloat16. Lamini added about 400 TB of shared NFS storage and a high-performance network between the GPU servers. Upcoming AMD Instinct MI300X GPUs, with 192 GB of HBM each, will allow further scaling.
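The memory arithmetic behind that claim is straightforward: bfloat16 stores each parameter in 2 bytes, so ~200 billion parameters amount to roughly 400 GB of weights, which fits within the 512 GB of aggregate HBM on a four-MI250 server (128 GB per GPU). A minimal sketch of the back-of-envelope check, covering weights only (activations and KV cache need additional headroom):

```python
# Back-of-envelope check: do the weights of a ~200B-parameter model in
# bfloat16 fit in the aggregate HBM of a server with 4 x MI250 GPUs?
BYTES_PER_PARAM_BF16 = 2        # bfloat16 = 16 bits = 2 bytes per weight
params = 200e9                  # ~200 billion parameters
hbm_per_mi250_gb = 128          # each MI250 carries 128 GB of HBM2e
gpus_per_server = 4

weights_gb = params * BYTES_PER_PARAM_BF16 / 1e9     # ~400 GB of weights
total_hbm_gb = hbm_per_mi250_gb * gpus_per_server    # 512 GB aggregate HBM

print(f"weights: {weights_gb:.0f} GB, HBM: {total_hbm_gb} GB, "
      f"fits: {weights_gb <= total_hbm_gb}")
```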

Lamini is now in phase two, which involves integrating 128 AMD MI200 GPUs into its finetuning platform. This phase requires both using the specialized software layers inside ROCm and building new ones. The stack spans the GPU hardware, the amdgpu Linux kernel driver, and optimized libraries. The Lamini SDK sits on top of these layers, adding LLM use cases such as chat, classification, and autocomplete, and it exposes Python and TypeScript application programming interface (API) clients for loading, querying, and finetuning LLMs.
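To give a sense of what a client workflow at that top layer looks like, here is a hypothetical Python sketch. The class and method names (FinetuneClient, finetune, query) are illustrative placeholders, not the actual Lamini SDK API, and the method bodies are stubbed rather than calling a live service:

```python
# Hypothetical sketch of the workflow the SDK layers enable: finetune a base
# model on labeled examples, then query the finetuned model. Class and method
# names are illustrative placeholders, not the real Lamini SDK API, and the
# method bodies are stubs rather than real network calls.
import uuid


class FinetuneClient:
    """Minimal stand-in for a Python API client to a hosted LLM service."""

    def __init__(self, base_model: str, api_key: str):
        self.base_model = base_model
        self.api_key = api_key

    def finetune(self, examples: list[dict]) -> str:
        # A real client would upload `examples` and poll a training job;
        # here we just return a fake finetuned-model identifier.
        return f"{self.base_model}-ft-{uuid.uuid4().hex[:8]}"

    def query(self, model_id: str, prompt: str) -> str:
        # A real client would POST the prompt to an inference endpoint.
        return f"[{model_id}] completion for: {prompt!r}"


client = FinetuneClient(base_model="meta-llama/Llama-2-13b-hf", api_key="...")
model_id = client.finetune([
    {"input": "Ticket: refund not received", "output": "billing"},
    {"input": "Ticket: app crashes on login", "output": "bug"},
])
print(client.query(model_id, "Ticket: cannot reset password"))
```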

Diamos said Lamini added new optimizations that accelerate LLMs and take advantage of unique capabilities of AMD’s MI platform. These optimizations enable a single server to host 200 billion parameter models, serve 10,000 finetuned language models, handle 12,800 simultaneous requests, and process more than 3.5 million queries per day.
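For a sense of scale, 3.5 million queries per day works out to roughly 40 queries per second sustained on a single node; a quick check of that arithmetic:

```python
# Convert the quoted per-day throughput into a sustained per-second rate.
queries_per_day = 3.5e6
seconds_per_day = 24 * 60 * 60            # 86,400 seconds
qps = queries_per_day / seconds_per_day   # ~40.5 queries per second
print(f"{qps:.1f} queries/second sustained on one node")
```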

The way Diamos sees it, ROCm has “enormous potential” to go beyond CUDA for LLM finetuning, and Lamini and ROCm can efficiently finetune the largest LLMs, such as Meta AI’s Llama 2. If he is correct, the combination could significantly change the LLM training market.

As The Futurum Group colleague Mark Beccue wrote in September: “Despite some limitations, the AMD-Lamini Superstation is a viable option for enterprises to consider for deploying LLMs. Savvy enterprises that cannot wait a calendar year to run AI workloads will be test-driving the system.” It will not be easy for AMD or any other competitor to dent NVIDIA’s dominance in the GPU market, but AMD can at least give data scientists and IT teams a viable option.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from Futurum Group:

AMD and Hugging Face Team Up to Democratize AI Compute – Shrewd Alliance Could Lead to AI Compute Competition, Lower AI Costs

AMD Revenue Hits $5.4 Billion in Q2, Down 18% YoY, But Beats Estimates

NVIDIA, GlobalFoundries, Zoom, Salesforce, Groq, Qualcomm

Author Information

Dave focuses on the rapidly evolving integrated infrastructure and cloud storage markets.
