Changing Storage Vendors – Object Storage

“Migration is Hard” – “Data is Heavy”

Data migration is moving data from one storage system to another, where either system may be on-premises or in a public cloud. With object storage there is usually a large amount of data, which makes migration a very difficult prospect. Migration takes time – there is the simple physics of moving bits across a wired or wireless network. The time to move data is simple math: the amount of data divided by the bandwidth of the connection. There may be some variability due to an unreliable or degraded network connection, but the best case is normally used for planning purposes.
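That simple math is worth doing before any migration planning begins. A minimal sketch of the best-case calculation – the function name and the efficiency derating factor are my own additions, not from any particular tool:

```python
def migration_days(data_tb: float, bandwidth_gbps: float, efficiency: float = 1.0) -> float:
    """Best-case days to move data_tb terabytes over a bandwidth_gbps link.

    efficiency (0..1] derates the link for protocol overhead or contention;
    the default of 1.0 is the best case typically used for planning.
    """
    data_bits = data_tb * 1e12 * 8                 # terabytes -> bits
    effective_bps = bandwidth_gbps * 1e9 * efficiency
    seconds = data_bits / effective_bps
    return seconds / 86_400                        # seconds -> days

# For example: 1 PB (1,000 TB) over a dedicated 10 Gbps link, best case.
print(round(migration_days(1000, 10), 1))          # ≈ 9.3 days
```

Even this best case – more than a week of sustained, dedicated 10 Gbps transfer for a single petabyte – illustrates why migration timelines quickly become unreasonable at object storage scale.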

Data gravity is the term for the tendency of large data sets to stay put, a consequence of the amount of data and the complexity and time required to move it. Large amounts of data tend to remain in place – hence gravity. I actually heard a vendor marketing person say that “Data is Heavy.” While that initially struck me as incongruous, if you take the data gravity description literally, then data is indeed heavy. It does bring to mind the old George Carlin description of important things as “heavy.” A different context, but it sort of fits.

Most data migration is focused on files and NAS systems, and for good reason – the bulk of unstructured data, and its growth, has been in file data. But object storage has grown dramatically as the home of large content repositories, both on-premises and in the public cloud, most notably Amazon S3.

So, what about migrating objects? The migration may be between on-premises and public cloud, or from one object storage system to another, presumably a new one. The need to move data for technology updates, familiar from block storage systems, is handled by the design of object storage systems: new nodes may be added and old nodes retired, with data automatically redistributed according to the erasure code protection scheme.

Object storage migrations are extremely challenging because of the massive amount of data usually involved. The simple math produces migration times that are beyond reason for most operations, which makes data gravity more pronounced with object storage – you might say the data is heavier there, to carry on with the vendor marketing statement. Some tools make the mechanics of migration simple, assuming the network, accounts, buckets, keys, security, compliance, etc., are all handled; most of these tools are for cloud migrations. Many object storage systems have built-in tiering to another object storage system, a public cloud, or even tape. However, it’s important to note that tiering is not the same as migration, especially when considering a move from one vendor’s storage system to another.
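The tools differ in detail, but the core mechanic most of them share is fanning object copies out in parallel to keep the link saturated. A minimal sketch under stated assumptions – `copy_object` is a hypothetical stand-in for the real per-object transfer, and network, credentials, buckets, and compliance handling are all assumed to be dealt with elsewhere:

```python
from concurrent.futures import ThreadPoolExecutor

def copy_object(key: str) -> str:
    # Placeholder for the real transfer (e.g. a GET from the source
    # system followed by a PUT to the destination). Real tools also
    # verify each copy, e.g. by comparing checksums or ETags.
    return key

def migrate(keys, workers: int = 16):
    """Fan object copies out across a thread pool.

    Production migration tools layer checkpointing, retries, and
    verification on top of this basic parallel-copy loop.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(copy_object, keys))

copied = migrate([f"obj-{i}" for i in range(100)], workers=8)
print(len(copied))   # 100
```

Parallelism helps with per-object latency and small-object overhead, but it cannot beat the bandwidth math above – the aggregate transfer rate is still capped by the connection.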

Fundamentally, this leads to the conclusion that selecting an object storage system is a long-term decision. The data will stay there because it is very difficult to migrate that amount of data.

The Evaluator Group has extensive information on different storage solutions, including product analyses, comparative information, evaluation guides, and the insights and opinions of our Evaluator Group analyst team about how well products meet the necessary criteria. This information should aid in making a determination, including whether a new or replacement object storage system resulting from a vendor change will meet expectations as well as current operational usage.

In addition to the information for evaluating functionality, an Object Storage Evaluation Guide is available on the Evaluator Group site; you’ll find it here:

Object Storage Vendors | Cloud Object Storage

Product analyses of the individual solutions in the areas noted above are also available to subscribers of the Evaluator Group research information. If you’re not yet a subscriber but would like to explore the benefits, we’d love to talk further. Contact us here.

Disclosure: Evaluator Group, wholly owned by The Futurum Group, is a research and analyst firm that engages or has engaged in research, analysis and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article. Analysis and opinions expressed herein are specific to the analyst individually.

Author Information

Randy Kerns

Randy has written numerous industry articles and papers as an educator and presenter, and he is the author of two books: Planning a Storage Strategy and Information Archiving – Economics and Compliance. The latter is the first book of its kind to explore information archiving in depth. Randy regularly teaches classes on Information Management technologies in the U.S. and Europe.
