SC23 Recap: Groq

The News: Groq attended SC23, showcasing its language processing unit (LPU) and its recent large language model (LLM) performance record of 300 tokens per second per user. Learn more about the Groq LPU on the company website.

Analyst Take: In today’s computing landscape, it seems as if there is a new unique processing unit for everything. Beyond the standard CPU, and the increasingly common graphics processing unit (GPU), there are accelerated processing units (APUs), data processing units (DPUs), tensor processing units (TPUs), and more – almost an endless list of PUs.

But one of the more intriguing offerings is the LPU developed by Groq. Dubbed the GroqChip, the Groq LPU is designed specifically for acceleration and precision in computationally intensive AI inferencing applications such as LLMs. The GroqChip reduces both memory and compute bottlenecks to help language models accelerate the computation of each word and rapidly generate AI output.
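
To appreciate why this matters, consider how an LLM generates text: autoregressively, one token at a time, with each new token requiring a full pass through the model's weights. A minimal sketch of that loop in Python (the model callable below is a generic stand-in for a real inference call, not Groq software) looks like this:

from typing import Callable, List

def generate(model: Callable[[List[int]], int],
             prompt_tokens: List[int],
             max_new_tokens: int) -> List[int]:
    # Autoregressive decoding: each new token requires a complete
    # forward pass, so the model's weights are traversed once per
    # token. Easing that memory and compute bottleneck is what
    # speeds up every generated word.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model(tokens)  # one full forward pass
        tokens.append(next_token)
    return tokens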

At SC23, the company demonstrated just how fast Groq’s LPU could accelerate LLMs – and it certainly was fast. But my anecdotal experience is not the only proof of Groq’s impressive performance. Shortly before SC23, Groq announced a new AI performance record of 300 tokens per second per user, achieved running Meta’s Llama-2 70B LLM on Groq’s LPU.
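
For a sense of scale, a quick back-of-envelope calculation helps. The response length and words-per-token ratio below are illustrative assumptions, not Groq's figures:

# What 300 tokens/second/user means for a single reader. The
# response length and words-per-token ratio are rough assumptions.
TOKENS_PER_SECOND = 300
RESPONSE_TOKENS = 500     # a typical multi-paragraph answer
WORDS_PER_TOKEN = 0.75    # rough average for English text

latency = RESPONSE_TOKENS / TOKENS_PER_SECOND
words_per_second = TOKENS_PER_SECOND * WORDS_PER_TOKEN
print(f"~{latency:.1f} s to generate a {RESPONSE_TOKENS}-token response")
print(f"~{words_per_second:.0f} words/second, well beyond human reading speed")

At that rate, a 500-token answer arrives in under 2 seconds, which helps explain why the demos felt so fast.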

Along with impressive hardware performance, Groq’s LPU is accompanied by a robust software stack to support developers. Groq’s software includes a compiler for out-of-the-box support of standard deep learning models, an application programming interface (API) for more fine-grained control in custom applications, and profiling tools to visualize the chip’s usage and estimate performance.
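
Groq's compiler and profilers are proprietary, but the headline measurement such profiling tools report for LLM inference, throughput in tokens per second, can be sketched generically. The snippet below is plain Python for illustration only and uses no Groq software; the placeholder generator stands in for a real model call:

import time
from typing import Callable, List

def estimate_tokens_per_second(generate: Callable[[int], List[int]],
                               num_tokens: int = 100) -> float:
    # Time one generation call and report throughput, the core
    # metric behind figures like 300 tokens per second per user.
    start = time.perf_counter()
    tokens = generate(num_tokens)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

if __name__ == "__main__":
    placeholder = lambda n: [0] * n  # swap in a real inference call
    print(f"~{estimate_tokens_per_second(placeholder):.0f} tokens/second")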

Groq’s presence at SC23 was boosted by one more factor, and while it was not related to hardware, software, or technology at all, I would be remiss not to mention the live llama that Groq paraded around downtown Denver. An ode to the Llama-2 LLM that Groq used to showcase its record-breaking performance, the llama was a great display and certainly made Groq a memorable exhibitor at SC23.

Groq’s Llama Display (Image Source: Groq)

As AI and LLMs continue to develop, so will the requirements for performance, accuracy, and scalability. While there are a seemingly endless number of unique processing units being developed, it is innovative technologies such as the GroqChip LPU that will help accelerate the AI needs of the future.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

SC23 Recap: IBM

SC23 Recap: VAST

SC23 Recap: Arcitecta

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.
