Will SiFive’s New RISC-V IP Drive Adoption in Edge AI and Generative AI?

Analyst(s): Ray Wang
Publication Date: September 12, 2025

SiFive introduced its 2nd Generation Intelligence family with five RISC-V IP products combining scalar, vector, and matrix compute to target AI workloads from IoT to data centers. The launch positions SiFive to broaden adoption across edge and enterprise applications.

What is Covered in this Article:

  • Launch of five new RISC-V AI IP products: X160 Gen 2, X180 Gen 2, X280 Gen 2, X390 Gen 2, and XM Gen 2.
  • Features include scalar, vector, and matrix compute, accelerator control interfaces, and memory latency tolerance technology.
  • Target applications span IoT, robotics, automotive, industrial automation, wearables, mobile, and generative AI data center workloads.
  • Licensing is open immediately, with first silicon expected in Q2 2026.
  • Competitive positioning against incumbents like Arm, Intel, and NVIDIA.

The News: SiFive has rolled out its 2nd Generation Intelligence lineup of RISC-V processor IP, comprising five cores aimed at AI workloads across a range of settings. The lineup pairs two new cores, the X160 Gen 2 and X180 Gen 2, with refreshed versions of the X280, X390, and XM. The new offerings bring improved scalar and vector performance, with the XM core adding matrix compute to handle LLMs and data center AI tasks. All five are available for licensing now, with first silicon expected in Q2 2026. The cores are built for everything from edge IoT devices to robotics, automotive systems, industrial automation, and large-scale generative AI applications.

Analyst Take: SiFive’s expansion of its Intelligence family reflects its ongoing push to bring RISC-V into AI across the board. By pairing new lower-tier cores with upgrades to its more advanced parts, the company aims to span everything from constrained IoT hardware to heavy-duty generative AI systems. The mix of scalar, vector, and matrix compute options – along with new accelerator interfaces and memory latency tolerance features – gives the lineup both range and adaptability. Still, with no silicon until mid-2026, how well SiFive executes and how the ecosystem responds will decide whether it can turn early IP strength into long-term market traction.

Broadened Product Portfolio for AI Coverage

The X160 and X180 Gen 2 are aimed at IoT and edge use cases, while the upgraded X280, X390, and XM push into mobile, industrial, and data center AI. These cores target everything from wearables and robotics to large language models demanding trillions of operations per second. With this breadth, SiFive sets itself apart as a RISC-V supplier offering scalable IP from compact 32-bit cores up to matrix compute engines, positioning the company as a key provider for AI workloads on RISC-V.

Vector and Matrix Compute Capabilities

SiFive’s processors use vector engines to process data in parallel, cutting instruction overhead and saving power. The XM Gen 2 goes further, adding matrix compute for LLMs and other heavy AI tasks in data centers. Vector lengths range from 128-bit in the X160/X180 up to 1024-bit in the X390, giving designers considerable flexibility for AI inference and training. Compared with scalar-only CPUs, this vector-first approach gives SiFive a solid performance-per-watt edge, and the combination of vector and matrix support raises its competitiveness at both the edge and in the cloud.
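To make the width figures above concrete, a wider vector register simply processes more elements per instruction. The sketch below uses the 128-bit and 1024-bit widths named in this section; the lane arithmetic is generic vector-ISA math, not a claim about SiFive’s specific microarchitecture:

```python
# Illustrative sketch: how vector register width maps to parallel lanes.
# Widths (128-bit for X160/X180 class, 1024-bit for X390 class) come from
# the article; element sizes are standard AI datatypes.

def lanes(vector_bits: int, element_bits: int) -> int:
    """Number of elements a single vector register holds."""
    return vector_bits // element_bits

for width in (128, 1024):
    print(f"{width}-bit vectors: "
          f"{lanes(width, 32)} x FP32, "
          f"{lanes(width, 16)} x BF16, "
          f"{lanes(width, 8)} x FP8 elements per register")
```

An eightfold wider register means up to eight times the elements per instruction at the same issue rate, which is the root of the flexibility and performance-per-watt argument made above.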

Interfaces for Accelerator Control

With the new SSCI and VCIX interfaces, SiFive’s X-series cores can act as Accelerator Control Units, connecting to custom AI accelerators directly rather than through proprietary interconnects. Developers also get direct access to registers and vector state, which cuts latency and simplifies the software stack. This makes the platform more flexible for building heterogeneous AI systems across different industries, and these interfaces help position SiFive as a customizable and scalable AI platform.

Efficiency and Software Ecosystem Support

The processors include memory latency tolerance technology and support for reduced-precision data types such as BF16, FP8, and FP4 – key for both training and inference. According to SiFive’s benchmarks, the X160 Gen 2 delivers double the performance of comparable cores in small-scale tasks such as voice recognition and image tagging. On the software side, SiFive supports TensorFlow Lite, ONNX Runtime, and Llama.cpp and provides compiler tools and libraries, making it easier for developers to move workloads across hardware setups. Strong software support and solid efficiency are therefore likely to drive SiFive’s adoption in both edge and enterprise environments.
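The BF16 format mentioned above matters because it keeps FP32’s 8-bit exponent (and thus its dynamic range) while cutting the mantissa to 7 bits, halving memory and bandwidth per value. A minimal sketch of the FP32-to-BF16 conversion, using simple truncation for clarity (real hardware typically rounds to nearest even):

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to its 16-bit bfloat16 pattern (round toward zero)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16  # keep sign, exponent, and top 7 mantissa bits

def bf16_bits_to_fp32(b: int) -> float:
    """Expand a bfloat16 bit pattern back to FP32 by zero-filling the low bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

val = 3.14159
approx = bf16_bits_to_fp32(fp32_to_bf16_bits(val))
print(f"{val} -> bfloat16 -> {approx}")  # same exponent range, ~3 significant digits
```

FP8 and FP4 push the same trade further, which is why hardware support for these types, as claimed for this lineup, is tied so closely to inference efficiency.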

What to Watch:

  • Whether SiFive can secure broader Tier 1 semiconductor adoption beyond the two confirmed U.S. customers.
  • How XM Gen 2’s scalability for LLMs translates into competitive performance against incumbent AI chips.
  • Industry response to SiFive’s accelerator control interfaces and adoption across automotive and industrial automation.
  • Execution on the first silicon release in Q2 2026 and validation of claimed efficiency and benchmark gains.
  • SiFive’s ability to compete against established vendors with robust software ecosystems like Arm, Intel, and NVIDIA.

See the complete press release on the launch of SiFive’s 2nd Generation Intelligence family on the SiFive website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

Could NVIDIA’s Collaboration with MediaTek Trigger a $73 Billion Acquisition Bid?

Qualcomm Debuts First Processor With Fully Integrated RFID Functionality

New SiFive Performance P870-D Complements Intelligence Chipset for AI

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies. He also advises clients on global compute distribution, deployment, and supply chain. In addition to his main coverage and expertise, Wang also specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.
