Hammerspace Adds AWS SVP and LLM Training Architecture

The News: Global file system vendor Hammerspace named former Amazon Web Services (AWS) general manager Marc Cree as its senior vice president of strategic partnerships. In this role, he will build out Hammerspace’s strategic partner ecosystem through alliances, integration collaboration, and OEM relationships. You can see the announcement on the Hammerspace website.

Analyst Take: Cree brings decades of storage, networking, and cloud experience to Hammerspace, whose technology intersects all three areas. The Hammerspace distributed file system spans data and applications across data centers and public cloud services such as AWS, Google Cloud, Microsoft Azure, and Seagate Lyve Cloud.

At AWS, Cree was the GM for AWS Storage Gateway, responsible for product and business strategy for the service that allows on-premises workloads to use the AWS cloud. He has also been the CEO and founder of InfiniteIO, StorSpeed, and NuSpeed, an iSCSI storage company that Cisco acquired in 2000. After the acquisition, Cree joined Cisco as president and GM of its storage router business unit.

The Futurum Group classifies Hammerspace as a global file system, putting it in the same category as CTERA, Nasuni, and Panzura. These vendors use the cloud to provide access to data anywhere. Hammerspace also works with and competes with traditional NAS products, which makes its partner ecosystem crucial.

Future Steps

Hammerspace describes itself as “orchestrating the Next Data Cycle,” and you cannot do that without a strategy for AI and large language models (LLMs). Hammerspace last week released a data architecture for LLM training and inference within hyperscale environments. The goal of the reference architecture is to let architects of AI infrastructure design a unified data environment that delivers a supercomputing-class parallel file system while remaining as easy to access as standard NFS.

The reference architecture includes client servers, graphics processing units (GPUs), data storage nodes, and networking. Hammerspace software decouples the file system layer from the storage layer, enabling independent scaling of I/O and IOPS at the data layer. Integrated machine learning (ML) capabilities within the Hammerspace architecture will place related data sets in high-performance, local NVMe storage when the first file from the data set is accessed.
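The tiering behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not Hammerspace's actual software: the class and method names (`TieringOrchestrator`, `on_file_access`, and so on) are assumptions made for the example.

```python
# Hypothetical sketch of dataset-aware tiering: when the first file of a
# registered data set is accessed, the rest of the set is promoted to
# local NVMe. All names here are illustrative, not a Hammerspace API.

class TieringOrchestrator:
    def __init__(self):
        self.datasets = {}        # dataset_id -> list of file paths
        self.nvme_cache = set()   # files currently resident on local NVMe

    def register_dataset(self, dataset_id, files):
        """Record which files ML metadata has grouped into one data set."""
        self.datasets[dataset_id] = list(files)

    def on_file_access(self, dataset_id, path):
        """First access to any file in a set promotes the whole set."""
        files = self.datasets.get(dataset_id, [])
        if path in files and not self.nvme_cache.intersection(files):
            for f in files:
                self.promote(f)
        return path

    def promote(self, path):
        # A real system would migrate the file's extents to NVMe;
        # here we only track cache membership.
        self.nvme_cache.add(path)


orch = TieringOrchestrator()
orch.register_dataset("train-shard-7", ["a.bin", "b.bin", "c.bin"])
orch.on_file_access("train-shard-7", "a.bin")
print(sorted(orch.nvme_cache))  # all three files are now on NVMe
```

The point of the sketch is the trigger condition: promotion happens once per data set, on the first access, so subsequent reads of related files hit local NVMe rather than the slower storage tier.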

By bolstering its strategic alliances and its AI technology, Hammerspace is checking off two boxes it will need to compete with a plethora of unstructured data vendors, including some of the largest in the industry.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Hammerspace raises $56.7M in funding to unlock business opportunities

Hammerspace Introduces Data Orchestration Solution at NAB 2023

Key Trends in Generative AI – The AI Moment, Episode 1

Author Information

Dave focuses on the rapidly evolving integrated infrastructure and cloud storage markets.
