Will SiFive’s New RISC-V IP Drive Adoption in Edge AI and Generative AI?

Analyst(s): Ray Wang
Publication Date: September 12, 2025

SiFive introduced its 2nd Generation Intelligence family with five RISC-V IP products combining scalar, vector, and matrix compute to target AI workloads from IoT to data centers. The launch positions SiFive to broaden adoption across edge and enterprise applications.

What is Covered in this Article:

  • Launch of five new RISC-V AI IP products: X160 Gen 2, X180 Gen 2, X280 Gen 2, X390 Gen 2, and XM Gen 2.
  • Features include scalar, vector, and matrix compute, accelerator control interfaces, and memory latency tolerance technology.
  • Target applications span IoT, robotics, automotive, industrial automation, wearables, mobile, and generative AI data center workloads.
  • Licensing is open immediately, with first silicon expected in Q2 2026.
  • Competitive positioning against incumbents like Arm, Intel, and NVIDIA.

The News: SiFive has rolled out its 2nd Generation Intelligence lineup of RISC-V processor IP, featuring five cores aimed at accelerating AI workloads across a range of settings. The launch includes two new cores, the X160 Gen 2 and X180 Gen 2, along with refreshed Gen 2 versions of the X280, X390, and XM. The new offerings bring improved scalar and vector performance, with the XM core adding matrix compute to handle LLMs and data center AI tasks. All five are available for licensing now, with first silicon expected in Q2 2026. The IP is built for everything from edge IoT devices to robotics, automotive systems, industrial automation, and large-scale generative AI applications.

Analyst Take: SiFive’s move to grow its Intelligence family shows its ongoing push to bring RISC-V into AI across the board. By pairing new lower-tier cores with upgrades to its more advanced ones, the company is looking to span the gap between resource-constrained IoT hardware and heavy-duty generative AI systems. The mix of scalar, vector, and matrix compute options – along with new accelerator interfaces and memory features – gives the lineup both range and adaptability. Still, with no silicon until mid-2026, how well SiFive executes and how the ecosystem responds will decide whether it can turn early IP strength into long-term market traction.

Broadened Product Portfolio for AI Coverage

The X160 and X180 Gen 2 are aimed at IoT and edge use cases, while the upgraded X280, X390, and XM push into mobile, industrial, and data center AI. These cores target everything from wearables and robotics to large-scale LLMs demanding trillions of operations per second. With this breadth, SiFive sets itself apart as a RISC-V supplier offering scalable IP from 32-bit embedded cores up to matrix compute engines. That range helps position the company as a key provider for AI workloads on RISC-V.

Vector and Matrix Compute Capabilities

SiFive’s processors use vector engines to process data in parallel, cutting overhead and saving power. The XM Gen 2 steps it up with matrix compute for handling LLMs and other heavy AI tasks in data centers. Vector lengths range from 128-bit in the X160/X180 up to 1024-bit in the X390, giving a major boost in flexibility for AI inference and training. Compared to CPUs that only use scalar processing, this vector-focused approach gives SiFive a solid performance-per-watt edge. The integration of vector and matrix support raises SiFive’s performance at both the edge and in the cloud.
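The practical effect of those vector lengths is simple arithmetic: a wider vector register holds more data elements ("lanes"), so each instruction moves proportionally more work. A minimal sketch, using only the register widths stated above (the lane math is generic, not SiFive-specific):

```python
def lanes(vector_len_bits: int, element_bits: int) -> int:
    """Number of elements one vector register holds at a given element width."""
    return vector_len_bits // element_bits

# Vector register widths cited for the 2nd Gen Intelligence cores.
cores = {"X160/X180 Gen 2": 128, "X390 Gen 2": 1024}

for name, vlen in cores.items():
    # Lanes per register for common AI element widths.
    print(f"{name}: {lanes(vlen, 32)} x FP32, "
          f"{lanes(vlen, 16)} x BF16, {lanes(vlen, 8)} x INT8 per register")
```

At 1024 bits, the X390 handles 32 FP32 or 128 INT8 elements per vector operation versus 4 and 16 on the 128-bit cores, an 8x difference in per-instruction data width before accounting for issue width or clock speed.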

Interfaces for Accelerator Control

With new SSCI and VCIX interfaces, SiFive’s X-series cores can act as Accelerator Control Units. That means they can hook into custom AI accelerators more directly and avoid relying on proprietary connections. Developers also get direct access to registers and vectors, which cuts latency and simplifies the software stack. This setup makes the platform more flexible for building mixed AI systems across different industries. As a result, these interfaces help position SiFive as a customizable and scalable AI platform.

Efficiency and Software Ecosystem Support

The processors come with memory latency tolerance technology and support for data types like BF16, FP4, and FP8 – key for both training and inference. According to SiFive’s benchmarks, the X160 Gen 2 delivers double the performance of comparable cores on small-scale tasks like voice recognition and image tagging. On the software side, SiFive supports TensorFlow Lite, ONNX Runtime, and Llama.cpp, and provides additional compiler tools and libraries. This makes it easier for developers to run workloads across varied hardware setups. Hence, strong software support and solid efficiency are likely to drive SiFive’s adoption in both edge and enterprise environments.
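To see why a reduced-precision type like BF16 matters, here is a pure-Python sketch of the bfloat16 format itself. This is a generic illustration of the standard bfloat16 layout, not SiFive code: BF16 keeps FP32’s sign bit and 8-bit exponent but truncates the mantissa to 7 bits, halving storage and memory bandwidth while preserving FP32’s dynamic range.

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    # Encode as IEEE-754 float32, then keep the top 16 bits:
    # 1 sign bit, 8 exponent bits, 7 mantissa bits (truncating round).
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 16

def bf16_to_f32(b: int) -> float:
    # Re-expand by zero-filling the 16 dropped mantissa bits.
    (x,) = struct.unpack(">f", struct.pack(">I", b << 16))
    return x

pi = 3.14159265
approx = bf16_to_f32(f32_to_bf16_bits(pi))
# BF16 retains the full FP32 exponent range but only ~2-3 decimal
# digits of precision, which is typically enough for inference.
print(f"pi after a BF16 round-trip: {approx}")
```

The round-trip of pi comes back as roughly 3.14, illustrating the trade: half the bits, same range, coarser precision.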

What to Watch:

  • Whether SiFive can secure broader Tier 1 semiconductor adoption beyond the two confirmed U.S. customers.
  • How XM Gen 2’s scalability for LLMs translates into competitive performance against incumbent AI chips.
  • Industry response to SiFive’s accelerator control interfaces and adoption across automotive and industrial automation.
  • Execution on the first silicon release in Q2 2026 and validation of claimed efficiency and benchmark gains.
  • SiFive’s ability to compete against established vendors with robust software ecosystems like Arm, Intel, and NVIDIA.

See the complete press release on the launch of SiFive’s 2nd Generation Intelligence family on the SiFive website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

Could NVIDIA’s Collaboration with MediaTek Trigger a $73 Billion Acquisition Bid?

Qualcomm Debuts First Processor With Fully Integrated RFID Functionality

New SiFive Performance P870-D Complements Intelligence Chipset for AI

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies. He also advises clients on global compute distribution, deployment, and supply chain. In addition to his main coverage and expertise, Wang also specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.
