Analyst(s): Ron Westfall
Publication Date: December 16, 2024
What is Covered in this Article:
- Tenstorrent is positioning itself as a credible alternative to NVIDIA in the competitive AI chip market.
- Tenstorrent intends to increase its engineering workforce following a recent funding round.
- Tenstorrent processors can demonstrate consistent competitive differentiation against GPUs.
- Tenstorrent Ready to Storm AI Chip Market
The News: AI chip startup Tenstorrent has secured $693 million in funding as part of an investment round valuing the company at $2.6 billion.
Analyst Take: Tenstorrent is intent on storming the AI chipset market and posing a long-term challenge to dominant market leader NVIDIA as well as all comers. To fuel its ambitious strategy, Tenstorrent has financial backing from Jeff Bezos’s investment firm, Bezos Expeditions, which took part in the company’s $693 million Series D funding round. The round was co-led by Samsung Securities and AFW Partners and backed by LG Electronics and Fidelity, valuing the company at over $2.6 billion.
Founded in 2016, Tenstorrent develops scalable AI accelerators for both cloud and edge computing as alternatives to NVIDIA’s GPUs. The company is also in the process of creating a RISC-V CPU and licensing its designs to other entities. Moreover, the company notably leverages open-source technology in its development process, allowing it to avoid the costly high-bandwidth memory (HBM) currently used by NVIDIA.
Tenstorrent’s initial chips released to the market were produced through a partnership with GlobalFoundries. Tenstorrent’s Tensix Processors comprise processor cores called Tensix Cores. Each Tensix Core includes an array math unit for tensor operations, a SIMD unit for vector operations, a Network-on-Chip (NoC) to move data core to core and chip to chip, five baby RISC-V processors to help direct the NoC, and up to 1.5MB of SRAM.
The introduction of Grayskull, Tenstorrent’s RISC-V-based alternative to GPUs, targets significantly simpler programming and scaling. Grayskull is the company’s first Tensix Processor, featuring up to 120 Tensix Cores with 1MB of SRAM each, supporting 8GB of LPDDR4 memory on a 256-bit bus, and incorporating support for both common AI precision formats (FP8, FP16, BF16) and memory-optimized precision formats (BFP2, BFP4, BFP8).
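To make the reduced-precision formats above concrete, the sketch below shows how a float32 value maps to BF16, one of the common AI formats Grayskull supports. This is an illustrative example, not Tenstorrent code; it uses simple bit truncation, whereas real hardware typically applies round-to-nearest when narrowing.

```python
# Illustrative sketch (not Tenstorrent code): BF16 keeps a float32's sign,
# all 8 exponent bits, and the top 7 mantissa bits, halving memory per
# value at the cost of mantissa precision. Truncation is used here for
# simplicity; actual converters usually round to nearest.
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to a 16-bit bfloat16 bit pattern."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 16  # keep sign + exponent + top 7 mantissa bits

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 (exact)."""
    (x,) = struct.unpack(">f", struct.pack(">I", b << 16))
    return x

x = 3.14159265
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))
# approx is close to x but carries only ~2-3 decimal digits of precision
```

Because BF16 preserves the full float32 exponent range, it avoids the overflow issues FP16 can hit during training, which is one reason it is a standard AI precision format.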
In addition, Wormhole is a die-shrink and revision of Grayskull. The Tensix Core count is slightly reduced (up to 80), but the Tensix Cores themselves have had their SRAM increased to 1.5MB; support for additional precision formats was added (FP32 output, INT8, INT32 output, and TF32); and the overall performance and efficiency of existing formats was increased, offsetting the reduced core count. Local memory was also increased to 12GB of faster GDDR6, and Wormhole can scale to multi-chip implementations.
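The trade-off between the two generations can be seen directly from the published figures: fewer cores with more SRAM each leaves total on-chip SRAM unchanged. The sketch below models only the numbers quoted above; the class and field names are assumptions for illustration, not Tenstorrent's own data structures.

```python
# Illustrative sketch (not Tenstorrent code): comparing the published
# core counts and per-core SRAM of Grayskull and Wormhole.
from dataclasses import dataclass

@dataclass
class TensixProcessor:
    name: str
    max_tensix_cores: int
    sram_per_core_mb: float
    local_memory_gb: int

    def total_sram_mb(self) -> float:
        # Aggregate on-chip SRAM across all Tensix Cores.
        return self.max_tensix_cores * self.sram_per_core_mb

grayskull = TensixProcessor("Grayskull", 120, 1.0, 8)   # 8GB LPDDR4
wormhole = TensixProcessor("Wormhole", 80, 1.5, 12)     # 12GB GDDR6

for chip in (grayskull, wormhole):
    print(f"{chip.name}: {chip.total_sram_mb():.0f} MB total SRAM, "
          f"{chip.local_memory_gb} GB local memory")
```

Both chips work out to 120 MB of total Tensix SRAM, consistent with the claim that per-core improvements offset the reduced core count.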
Tenstorrent’s HBM perspective also parallels Marvell’s recent announcement that it has developed a new custom HBM compute architecture that can enable XPUs to achieve greater compute and memory density. Marvell’s new HBM compute architecture technology is available to all of its custom silicon customers to improve the performance, efficiency, and total cost of ownership (TCO) of their custom XPUs. Marvell is collaborating with its cloud customers and HBM manufacturers, Micron, Samsung Electronics, and SK Hynix, to define and develop custom HBM solutions for next-generation XPUs.
Tenstorrent Aids AWS Goal of Avoiding Over-Reliance on NVIDIA
This strategic approach aligns with Amazon’s interest in diversifying its AI infrastructure and reducing dependency on NVIDIA for scaling AWS AI workload demands. Moreover, it contrasts with NVIDIA’s proprietary ecosystem and aligns with Amazon’s broader goals of cultivating open-source scalable and flexible AI solutions.
Tenstorrent processors feature a grid-based architecture composed of Tensix Cores that are designed to efficiently handle tensor computations of various sizes. Each processor is equipped with integrated network communication hardware, enabling direct inter-processor communication over networks without relying on DRAM.
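A common way such grid-based designs move data directly between cores is dimension-ordered (X-then-Y) routing over the on-chip network. The sketch below is a conceptual model of that general technique under stated assumptions; it is not Tenstorrent's actual NoC protocol, and the function name and coordinates are hypothetical.

```python
# Conceptual sketch (assumption, not Tenstorrent's actual NoC design):
# in a 2D grid of cores, dimension-ordered routing sends a packet along
# the X axis first, then the Y axis, so cores exchange data directly
# without a round trip through DRAM.
def xy_route(src: tuple[int, int], dst: tuple[int, int]) -> list[tuple[int, int]]:
    """Return the sequence of grid nodes a packet visits under X-then-Y routing."""
    x, y = src
    path = [src]
    while x != dst[0]:          # travel along the X dimension first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:          # then along the Y dimension
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

path = xy_route((0, 0), (3, 2))
# 3 X-hops plus 2 Y-hops: the path visits 6 nodes including the source
```

Keeping traffic on-chip like this is what lets hop count, rather than DRAM bandwidth, bound communication latency between neighboring cores.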
The company said it would use the funding to develop open-source AI software stacks, recruit additional developers, enhance its global development and design centers, and create systems and cloud solutions for AI developers. Tenstorrent’s CEO, Jim Keller, announced that the company has secured customer contracts totaling nearly $150 million and intends to launch a new AI processor every two years.
AI Chipset Competition Thickens
In addition to NVIDIA, Tenstorrent faces a growing field of AI chipset rivals. AMD and Intel already offer AI chipsets. New entrants, such as Cerebras Systems, Graphcore, Groq, Blaize, NeuReality, and Ampere, are all vying for mind share and more AI ecosystem influence as AI chip buyers investigate their best alternatives to NVIDIA’s proposition.
I find that Tenstorrent and other AI chipset vendors have their work cut out for them, as NVIDIA offers a comprehensive AI platform designed for enterprise-level generative AI applications. The company’s full-stack AI solutions encompass both hardware and software components. This includes NVIDIA’s specialization in GPUs, which are particularly well suited for AI tasks due to their ability to perform parallel computations efficiently. Also, NVIDIA provides AI-powered workstations for workforces to address challenging workflows and boost innovation.
On the software side, NVIDIA’s CUDA is a parallel computing platform and programming model that enables developers to use NVIDIA GPUs for general-purpose processing. NVIDIA’s AI software suite provides a range of software tools and libraries optimized for AI development and deployment. NVIDIA provides solutions across accelerated infrastructure, enterprise-grade software, and AI models. As a result, NVIDIA’s AI solutions have positioned the company as a dominant player in the AI infrastructure space, with their GPUs being the ecosystem-dominant engine training AI models.
NVIDIA’s competitors are coming at the company from different angles to win new AI XPU business, including hardware-specific differentiation such as Tenstorrent’s spotlighting of HBM cost factors. However, directly challenging NVIDIA’s holistic platform approach will prove more difficult in the near term.
Looking Ahead
Overall, I believe Tenstorrent processors can demonstrate consistent competitive differentiation against GPUs by providing enhanced flexibility in programming, greater ability to scale, and advantageous handling of dynamic sparsity and conditional operations during execution. The company’s architecture can enable more efficient adaptation to varying workloads and fine-grained control over computations, resulting in improved performance and power efficiency.
Tenstorrent’s future chip series will be developed through a collaboration between TSMC and Samsung. This partnership includes the creation of a 2nm AI accelerator, although specific release dates have not yet been finalized. As such, Tenstorrent’s innovative chip design is positioned to ensure scaling across multiple devices without significant software overhead, presenting a potentially more agile solution for large-scale AI applications.
See the complete article on the TechRadar site.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
Aramco Digital and Groq Build the World’s Largest AI Inferencing Data Center
Talking AMD, NVIDIA & MediaTek, Apple, Amazon, Tesla, Commvault
Lenovo Tech World 2024: Lenovo Unleashes Hybrid AI Advantage with NVIDIA
Author Information
Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.
He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.
Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.
Ron holds a Master of Arts in Public Policy from the University of Nevada, Las Vegas and a Bachelor of Arts in political science/government from William and Mary.