
SC23 Recap: Groq

The News: Groq attended SC23 showcasing its language processing unit (LPU) and its recent large language model (LLM) performance record of 300 tokens per second per user. Learn more about the Groq LPU on the company website.

Analyst Take: In today’s computing landscape, it seems as if there is a new unique processing unit for everything. Beyond the standard CPU, and the increasingly common graphics processing unit (GPU), there are accelerated processing units (APUs), data processing units (DPUs), tensor processing units (TPUs), and more – almost an endless list of PUs.

But one of the more intriguing offerings is the LPU developed by Groq. Dubbed the GroqChip, the Groq LPU is designed specifically for acceleration and precision in computationally intensive AI inferencing applications such as LLMs. The GroqChip reduces both memory and compute bottlenecks to help language models accelerate the computation of each word and rapidly generate AI output.

At SC23, the company demonstrated just how fast Groq’s LPU could accelerate LLMs – and it certainly was fast. But my anecdotal experience is not the only proof of Groq’s impressive performance. Shortly before SC23, Groq announced a new AI performance record of 300 tokens per second per user. The test was achieved running Meta’s Llama-2 70B LLM on Groq’s LPU.
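To put that record in perspective, a quick back-of-the-envelope calculation shows what 300 tokens per second per user means for response latency (the response lengths below are illustrative assumptions, and the math assumes sustained throughput):

```python
# Back-of-the-envelope latency at Groq's reported 300 tokens/sec per user.
TOKENS_PER_SECOND = 300  # record reported by Groq on Meta's Llama-2 70B

def generation_time(num_tokens: int, rate: float = TOKENS_PER_SECOND) -> float:
    """Seconds to generate `num_tokens` at a sustained `rate` (tokens/sec)."""
    return num_tokens / rate

# A short chat reply (~150 tokens) vs. a long-form answer (~1,500 tokens).
print(f"{generation_time(150):.1f} s")   # 0.5 s
print(f"{generation_time(1500):.1f} s")  # 5.0 s
```

In other words, even a long-form answer arrives in a handful of seconds, which is what makes the demo feel effectively instantaneous to an individual user.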

Along with impressive hardware performance, Groq’s LPU is accompanied by a robust software stack to support its developers. Groq’s software includes a Groq compiler for out-of-the-box support of standard deep learning models, Groq application programming interface (API) for more fine-grained support for custom applications, and profiling tools to visualize the chip’s usage and estimate performance.

Groq’s presence at SC23 was boosted by another factor. While not hardware or software related – or technology related at all – I would be remiss not to mention the live llama that Groq paraded around downtown Denver. A nod to the Llama-2 LLM that Groq used to set its record-breaking performance, the llama was a great display and certainly made Groq a memorable exhibitor at SC23.

Groq’s Llama Display (Image Source: Groq)

As AI and LLMs continue to develop, so will the requirements for performance, accuracy, and scalability. While there are a seemingly endless number of unique processing units being developed, it is innovative technologies such as the GroqChip LPU that will help accelerate the AI needs of the future.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

SC23 Recap: IBM

SC23 Recap: VAST

SC23 Recap: Arcitecta

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.

