
NVM Express Adds Computational Storage Feature


The News: The NVM Express (NVMe) consortium recently announced a new computational storage feature for its specifications. The feature creates a vendor-neutral framework for using computational storage devices. More information can be found in the NVM Express press release here.


Analyst Take: NVM Express, the consortium that oversees the NVMe specifications for SSDs, announced a new feature for computational storage. The feature adds new command sets and aims to establish a standardized framework for connecting applications to computational storage devices.

Computational storage devices are, as the name strongly suggests, storage devices that contain built-in computational capabilities. While typically the term computational storage applies to devices with directly attached processors, in some cases, it may more generally apply to off-device accelerators that achieve similar outcomes. The overall idea behind computational storage is offloading specific computations from central processors to the storage devices themselves. This method can provide greater efficiency by reducing CPU workloads and by removing the overhead of data movement. Typically, computational storage devices have been used for applications such as encryption or data reduction. While the technology offers a compelling benefit in removing data movement bottlenecks, the overall adoption of the technology has been relatively slow.
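The data-movement saving can be made concrete with a toy calculation. The sketch below (the record layout, sizes, and filter predicate are invented purely for illustration) compares the bytes a host must pull across the storage interface when it filters data itself versus when an on-device program filters first and returns only the matches:

```python
# Toy accounting of bytes moved across the storage interface.
# Record format, counts, and the predicate are invented for illustration;
# this models no real device or workload.

RECORD_SIZE = 64
NUM_RECORDS = 10_000

# Simulated on-disk records: first byte is a tag, rest is payload.
records = [bytes([i % 256]) + bytes(RECORD_SIZE - 1) for i in range(NUM_RECORDS)]

def matches(record: bytes) -> bool:
    return record[0] == 0  # toy predicate: roughly 1 in 256 records match

matching = [r for r in records if matches(r)]

# Host-side filtering: every record crosses the interface before filtering.
host_bytes_moved = NUM_RECORDS * RECORD_SIZE

# Device-side filtering: only matching records cross the interface.
device_bytes_moved = len(matching) * RECORD_SIZE

print(host_bytes_moved, device_bytes_moved)  # 640000 vs 2560
```

Even this crude model shows the appeal: for selective workloads, on-device computation can cut interface traffic by orders of magnitude.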

The new NVMe computational storage feature looks to provide a vendor-neutral framework for connecting computational devices to applications. NVMe Computational Storage builds upon SNIA’s previously released Computational Storage Architecture and Programming Model with a specific focus on the NVMe specification. Included in the specification are two new command sets:

  • The Computational Programs Command Set: Manages computational programs on the device and includes commands for loading, activating, and executing programs as well as creating and deleting memory ranges.
  • The Subsystem Local Memory Command Set: Supports access to memory in an NVMe subsystem by computational programs over NVMe transport. Commands include memory read, write, and copy.
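As a rough mental model of how a host might drive these two command sets, the toy Python below mimics the flow described above: create a device-local memory range, load a program, write input data, execute the program, and read back the result. All class, method, and program names here are hypothetical stand-ins for illustration, not a real NVMe driver API.

```python
# Hypothetical sketch of the NVMe computational storage command flow.
# Nothing here is a real NVMe interface; it only mirrors the two command
# sets' roles as described in the specification summary above.

class ComputationalStorageDevice:
    """Toy model of an NVMe computational storage subsystem."""

    def __init__(self):
        self.programs = {}       # slot -> callable (a "downloaded" program)
        self.memory_ranges = {}  # range id -> bytearray (subsystem local memory)

    # --- Computational Programs Command Set (hypothetical mapping) ---
    def load_program(self, slot, program):
        self.programs[slot] = program                   # cf. load/activate

    def create_memory_range(self, range_id, size):
        self.memory_ranges[range_id] = bytearray(size)  # cf. create memory range

    def execute_program(self, slot, range_id):
        buf = self.memory_ranges[range_id]              # cf. execute in place
        self.memory_ranges[range_id] = bytearray(self.programs[slot](bytes(buf)))

    # --- Subsystem Local Memory Command Set (hypothetical mapping) ---
    def memory_write(self, range_id, offset, data):
        self.memory_ranges[range_id][offset:offset + len(data)] = data

    def memory_read(self, range_id, offset, length):
        return bytes(self.memory_ranges[range_id][offset:offset + length])


# Host-side flow: stage data in device-local memory, run an on-device
# program (here, a trivial XOR "encryption"), and read back only the result.
dev = ComputationalStorageDevice()
dev.create_memory_range(0, 4)
dev.load_program(0, lambda data: bytes(b ^ 0xFF for b in data))
dev.memory_write(0, 0, b"\x00\x0f\xf0\xff")
dev.execute_program(0, range_id=0)
result = dev.memory_read(0, 0, 4)
```

The key point the sketch captures is the division of labor: one command set manages programs and memory ranges, while the other moves bytes in and out of subsystem-local memory.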

NVMe’s addition of a computational storage feature puts a spotlight back on a technology that has struggled with adoption. While computational storage’s big selling point is reducing data movement to increase performance, it faces competition from other approaches, such as DPUs and other accelerator cards, that solve similar problems. The new NVMe specification will help simplify and standardize the use of computational storage devices, which may help boost adoption.

The new feature may also broaden the use cases of computational storage devices. While the typical use of these devices can certainly offer benefits and increase system performance, the typical program set has been fairly narrow, focusing on areas such as encryption, data reduction, or erasure coding. This may be due in part to the limited computational power of the embedded processors, as well as a lack of defined and available programs. The new NVMe Computational Storage feature includes support for downloading programs to the device. This ability may help broaden the applicability of such devices, especially as more computationally powerful devices continue to be developed.

The new NVMe Computational Storage feature is a positive development toward increased adoption of computational storage and ultimately removing data movement bottlenecks. One area in particular that may benefit heavily from broader use of computational storage in the future is the edge. The impact of this NVMe Computational Storage feature in boosting real adoption of the technology remains to be seen, but it will be an interesting area to keep an eye on.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Seagate Announces Mozaic 3+ Drive Platform

Hammerspace Adds Tape Support for Global Data Environment

Weka Achieves NVIDIA DGX BasePod Certification

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.

