Marvell Expands Custom AI Portfolio With Launch of 2nm Custom SRAM

Analyst(s): Ray Wang
Publication Date: June 23, 2025

Marvell has introduced the industry’s first 2nm custom SRAM, engineered to support custom XPUs powering AI data centers. The memory innovation delivers improved performance per mm² while reducing power and die area demands.

What is Covered in this Article:

  • Marvell introduced the industry’s first 2nm custom SRAM, targeting AI infrastructure and custom XPU designs
  • The new memory delivers up to 6 gigabits and achieves the industry’s highest bandwidth per mm²
  • The SRAM enables up to 15% die area recovery and up to 66% lower standby power
  • Marvell aims to expand its presence in the $55.4 billion custom compute TAM in CY 2028, with 18 active sockets, >10 customers, and >50 opportunities in the pipeline
  • The company’s broader strategy focuses on custom silicon offerings for performance, power, and cost optimization

The News: Marvell Technology (NASDAQ: MRVL) has rolled out the industry’s first 2nm custom Static Random Access Memory (SRAM), built to boost the performance of custom XPUs and AI-driven systems in cloud data centers. The SRAM combines Marvell’s own circuitry and software with core SRAM design and cutting-edge 2nm process tech.

This new memory delivers up to 6 gigabits of fast storage and uses up to 66% less standby power than standard on-chip SRAM. It can also help recover up to 15% of chip die space in 2nm designs, giving developers more room to optimize for compute power, size, or cost.

Analyst Take: Marvell’s 2nm custom SRAM marks a step in its push to improve AI hardware through custom infrastructure. With boosts in both performance and efficiency, this move supports Marvell’s ongoing effort to refine memory layouts for high-performance compute systems.

Greater Memory Density in a Smaller Footprint

This custom SRAM lets chipmakers reclaim up to 15% of die space in 2nm designs, making room for additional cores or critical features. It offers the industry’s highest bandwidth per square millimeter, which matters for AI workloads that demand faster data access in ever more compact footprints. Compared with other dense memory options, Marvell’s SRAM delivers 17x the bandwidth density while using half the area for the same throughput. This opens the door to smaller, more powerful chips that don’t sacrifice speed or performance, and it gives Marvell a strong base for building XPUs tuned to specific workloads.

Power Efficiency at High Operating Frequency

Built to run at speeds up to 3.75 GHz, the 2nm SRAM uses up to 66% less standby power than typical on-chip memory of similar capacity. That combination of speed and efficiency is ideal for power-hungry AI clusters where every watt counts. As energy limits and thermal constraints continue to challenge data center buildouts, savings like these help reduce heat and improve overall system efficiency. Lower idle power also adds up across massive deployments, a plus for customers looking to cut long-term operating costs. Marvell’s efficient design helps balance power and performance in custom setups.
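To see how that standby saving compounds at fleet scale, here is a rough back-of-the-envelope sketch. Only the 66% reduction comes from Marvell’s announcement; the per-device baseline draw and fleet size below are hypothetical figures chosen for illustration.

```python
# Back-of-the-envelope standby power savings across a deployment.
# Only the 66% reduction is from Marvell's announcement; the baseline
# per-device standby draw and fleet size are hypothetical assumptions.

BASELINE_STANDBY_W = 5.0   # assumed standby draw of standard on-chip SRAM per XPU (W)
REDUCTION = 0.66           # up to 66% lower standby power (from the announcement)
FLEET_SIZE = 100_000       # assumed number of XPUs in a large AI cluster

saved_per_device_w = BASELINE_STANDBY_W * REDUCTION
fleet_savings_kw = saved_per_device_w * FLEET_SIZE / 1_000

print(f"Per device: {saved_per_device_w:.2f} W saved in standby")
print(f"Across {FLEET_SIZE:,} devices: {fleet_savings_kw:,.0f} kW saved")
```

Under these assumed numbers, a 3.3 W per-device saving becomes hundreds of kilowatts of idle power avoided across a single large cluster, before counting the cooling power that heat would otherwise require.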

Addressing a $55.4 Billion Market Opportunity

Marvell’s new SRAM supports its growing footprint in the $55.4 billion custom compute market projected for CY 2028, which comprises $40.8 billion from custom XPUs and $14.6 billion from attach components. The attach segment is growing at roughly 90% annually, while custom XPUs are rising at a 47% CAGR through 2028. Marvell already has 18 design wins with major and emerging hyperscalers and aims to capture 20% of this market over time. With more than 50 potential projects in the pipeline worth over $75 billion, this SRAM will be a key piece of many upcoming deals, and its flexibility and strong performance give Marvell a competitive edge in new bids.
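The market math above can be sanity-checked directly from the figures Marvell cites. This is an illustrative sketch: the dollar values are from the announcement, and the 20% share is Marvell’s stated long-term goal, not a forecast.

```python
# Sanity-check the custom compute TAM figures cited above (CY 2028, $B).
custom_xpu_tam = 40.8   # custom XPUs (from the announcement)
attach_tam = 14.6       # related "attach" components (from the announcement)
total_tam = custom_xpu_tam + attach_tam

share_target = 0.20     # Marvell's stated long-term share goal
implied_revenue = total_tam * share_target

print(f"Total TAM: ${total_tam:.1f}B")  # matches the $55.4B figure cited
print(f"A 20% share implies: ${implied_revenue:.2f}B annually")
```

The two segments do sum to the $55.4 billion headline figure, and hitting the 20% target would imply roughly $11 billion in annual custom compute revenue at that market size.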

Part of a Broader Custom Silicon Playbook

This SRAM launch builds on Marvell’s earlier work in HBM and CXL technology, both aimed at boosting bandwidth and system integration. The new memory fits into a broader stack that includes electrical and optical serializers/deserializers (SerDes), chip-to-chip links, and advanced packaging technologies, all designed with specific customer needs in mind. Tying SRAM into this mix ensures a common design approach and smoother integration. Marvell is building a full set of IP designed to lift system performance across compute and memory, and with SRAM now added, the company has another strong card in its next-gen custom silicon lineup.

What to Watch:

  • The scalability and adoption rate of Marvell’s 2nm SRAM across existing custom sockets
  • Design wins involving the new SRAM from top hyperscalers and emerging AI players
  • Execution against Marvell’s 20% market share target in the $55.4 billion custom compute TAM
  • Integration of SRAM with other Marvell IPs like HBM, SerDes, and co-packaged components

See the complete press release on Marvell’s launch of the industry’s first 2nm custom SRAM on the Marvell website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

Marvell Q1 FY 2026 Results Driven by Custom Silicon and Data Center Momentum

Marvell Debuts Custom HBM Compute Architecture

OFC 2025: Marvell Interconnecting the AI Era

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies, and he advises clients on global compute distribution, deployment, and supply chain. Beyond his core coverage, Wang specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.
