
Tenstorrent Ready to Storm AI Chip Market

Tenstorrent's Investors Include Bezos Expeditions, Samsung, and LG

Analyst(s): Ron Westfall
Publication Date: December 16, 2024

What is Covered in this Article:

  • Tenstorrent is positioning itself as a credible alternative to NVIDIA in the competitive AI chip market.
  • Tenstorrent intends to increase its engineering workforce following a recent funding round.
  • Tenstorrent processors can demonstrate consistent competitive differentiation against GPUs.

The News: AI chip startup Tenstorrent has secured $693 million in funding as part of an investment round valuing the company at $2.6 billion.


Analyst Take: Tenstorrent is intent on storming the AI chipset market and posing a long-term challenge to dominant market leader NVIDIA as well as all comers. To fuel its ambitious strategy, Tenstorrent has financial backing from Jeff Bezos’s investment firm, Bezos Expeditions, taking part in a $693 million Series D funding round for Tenstorrent. This was co-led by Samsung Securities and AFW Partners, and backed by LG Electronics and Fidelity, valuing the company at over $2.6 billion.

Founded in 2016, Tenstorrent develops scalable AI accelerators for both cloud and edge computing as alternatives to NVIDIA’s GPUs. The company is also in the process of creating a RISC-V CPU and licensing its designs to other entities. Moreover, the company notably leverages open-source technology in its development process, allowing it to avoid the costly high-bandwidth memory (HBM) currently used by NVIDIA.

Tenstorrent’s initial chips released to the market were produced through a partnership with GlobalFoundries. Tenstorrent’s Tensix Processors comprise processor cores called Tensix Cores. Each Tensix Core includes an array math unit for tensor operations, a SIMD unit for vector operations, a Network-on-Chip (NoC) to move data from core-to-core and chip-to-chip, five baby RISC-V processors to help direct the NoC, and up to 1.5MB of SRAM.
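
The core composition described above can be captured in a short, illustrative Python model. This is a sketch of the published figures only; the class and field names are my assumptions, not Tenstorrent's actual software interfaces:

```python
from dataclasses import dataclass

@dataclass
class TensixCore:
    """Illustrative model of one Tensix Core, per the description above."""
    sram_kb: int = 1536          # up to 1.5MB of local SRAM
    baby_riscv_count: int = 5    # five small RISC-V cores help direct the NoC
    math_units: tuple = ("tensor_array_math", "simd_vector")  # math engines

    def total_sram_bytes(self) -> int:
        # 1.5MB expressed in bytes
        return self.sram_kb * 1024

core = TensixCore()
print(core.total_sram_bytes())  # 1572864 (1.5MB)
```

The point of the sketch is simply that each core bundles its own math engines, local memory, and NoC control, rather than sharing a monolithic cache hierarchy.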

Grayskull, the company’s first Tensix Processor, targets simpler programming and significant scaling. It features up to 120 Tensix Cores with 1MB of SRAM each, supports 8GB of LPDDR4 memory on a 256-bit bus, and incorporates support for both common AI precision formats (FP8, FP16, BF16) and memory-optimized precision formats (BFP2, BFP4, BFP8).

In addition, Wormhole is a die-shrink and revision of Grayskull. The Tensix Core count is slightly reduced (up to 80), but each Tensix Core’s SRAM has been increased to 1.5MB, support for additional precision formats has been added (FP32 output, INT8, INT32 output, and TF32), and the performance and efficiency of existing formats have improved, offsetting the reduced core count. Local memory also grew to 12GB of faster GDDR6, and Wormhole can scale to multi-chip implementations.
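
The Grayskull and Wormhole maximums quoted above imply an interesting aggregate: fewer cores with more SRAM each leaves total on-chip SRAM unchanged. A quick arithmetic check (using the maximum configurations cited; shipping SKUs may differ):

```python
# Maximum configurations quoted in the text (actual SKUs may vary)
grayskull_sram_mb = 120 * 1.0   # up to 120 Tensix Cores x 1MB SRAM each
wormhole_sram_mb = 80 * 1.5     # up to 80 Tensix Cores x 1.5MB SRAM each

print(grayskull_sram_mb)  # 120.0 MB total on-chip SRAM
print(wormhole_sram_mb)   # 120.0 MB -- same aggregate, with fewer cores
```

In other words, Wormhole trades core count for per-core capacity while holding aggregate SRAM constant, and relies on format and efficiency gains to come out ahead.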

Tenstorrent’s HBM perspective also parallels Marvell’s recent announcement that it has developed a new custom HBM compute architecture that can enable XPUs to achieve greater compute and memory density. Marvell’s new HBM compute architecture technology is available to all of its custom silicon customers to improve the performance, efficiency, and total cost of ownership (TCO) of their custom XPUs. Marvell is collaborating with its cloud customers and HBM manufacturers, Micron, Samsung Electronics, and SK Hynix, to define and develop custom HBM solutions for next-generation XPUs.

Tenstorrent Aids AWS Goal of Avoiding Over-Reliance on NVIDIA

This strategic approach aligns with Amazon’s interest in diversifying its AI infrastructure and reducing dependency on NVIDIA for scaling AWS AI workload demands. Moreover, it contrasts with NVIDIA’s proprietary ecosystem and aligns with Amazon’s broader goals of cultivating open-source scalable and flexible AI solutions.

Tenstorrent processors feature a grid-based architecture composed of Tensix Cores that are designed to efficiently handle tensor computations of various sizes. Each processor is equipped with integrated network communication hardware, enabling direct inter-processor communication over networks without relying on DRAM.
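
That grid-plus-NoC layout can be pictured with a toy sketch: cores addressed by (row, col) on a mesh, each able to hand data directly to its grid neighbors without a shared-DRAM round-trip. This is purely illustrative and not Tenstorrent's programming model:

```python
# Toy model: cores on a 2D mesh pass data to neighbors over NoC-style links
def neighbors(row, col, rows, cols):
    """Return the grid neighbors of core (row, col) on a rows x cols mesh."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(row + dr, col + dc) for dr, dc in steps
            if 0 <= row + dr < rows and 0 <= col + dc < cols]

# A corner core has two direct neighbors it can forward tensor data to
print(neighbors(0, 0, 10, 12))  # [(1, 0), (0, 1)]
```

The design choice this illustrates is locality: data moves core-to-core (and chip-to-chip) over the network fabric itself, rather than bouncing through external memory.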

The company announced that it will use the funding to develop open-source AI software stacks, recruit additional developers, enhance its global development and design centers, and create systems and cloud solutions for AI developers. Tenstorrent’s CEO, Jim Keller, announced that the company has secured customer contracts totaling nearly $150 million and intends to launch a new AI processor every two years.

AI Chipset Competition Thickens

In addition to NVIDIA, Tenstorrent faces a growing field of AI chipset rivals. AMD and Intel already offer AI chipsets. New entrants, such as Cerebras Systems, Graphcore, Groq, Blaize, NeuReality, and Ampere, are all vying for mind share and more AI ecosystem influence as AI chip buyers investigate their best alternatives to NVIDIA’s proposition.

I find that Tenstorrent and other AI chipset vendors have their work cut out for them, as NVIDIA offers a comprehensive AI platform designed for enterprise-level generative AI applications. The company’s full-stack AI solutions encompass both hardware and software components. This includes NVIDIA’s specialization in GPUs, which are particularly well suited for AI tasks due to their ability to perform parallel computations efficiently. NVIDIA also provides AI-powered workstations that help workforces tackle demanding workflows and accelerate innovation.

On the software side, NVIDIA’s CUDA is a parallel computing platform and programming model that enables developers to use NVIDIA GPUs for general-purpose processing. NVIDIA’s AI software suite provides a range of software tools and libraries optimized for AI development and deployment. NVIDIA provides solutions across accelerated infrastructure, enterprise-grade software, and AI models. As a result, NVIDIA’s AI solutions have positioned the company as a dominant player in the AI infrastructure space, with their GPUs being the ecosystem-dominant engine training AI models.

NVIDIA’s competitors are attacking from different angles to win new AI XPU business, including hardware-specific differentiation such as Tenstorrent’s avoidance of costly HBM. However, directly challenging NVIDIA’s holistic platform approach will prove more difficult in the near term.

Looking Ahead

Overall, I believe Tenstorrent processors can demonstrate consistent competitive differentiation against GPUs by providing enhanced flexibility in programming, greater ability to scale, and advantageous handling of dynamic sparsity and conditional operations during execution. The company’s architecture can enable more efficient adaptation to varying workloads and fine-grained control over computations, resulting in improved performance and power efficiency.

Tenstorrent’s future chip series will be developed through a collaboration between TSMC and Samsung. This partnership includes the creation of a 2nm AI accelerator, although specific release dates have not yet been finalized. As such, Tenstorrent’s innovative chip design is positioned to ensure scaling across multiple devices without significant software overhead, presenting a potentially more agile solution for large-scale AI applications.

See the complete article on the TechRadar site.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Aramco Digital and Groq Build the World’s Largest AI Inferencing Data Center

Talking AMD, NVIDIA & MediaTek, Apple, Amazon, Tesla, Commvault

Lenovo Tech World 2024: Lenovo Unleashes Hybrid AI Advantage with NVIDIA

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.

