SAP Datasphere

The Six Five team discusses SAP Datasphere.

If you are interested in watching the full episode you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: We talk a lot about the cool, the exciting, and the hyped, but underneath all these architectures, Pat, there have got to be some practical, usable technologies. And SAP announced what it calls Datasphere. Datasphere is built on its Business Technology Platform, and what it's really trying to do is help companies use their data more efficiently and effectively. And again, liken this to the generative AI conversation, Pat. You've heard me say this many times on this show over the last few weeks: some of the most interesting opportunities to be successful with generative AI are going to be based upon companies being able to expose and utilize the vast proprietary data that lives inside of their business.

Much of this proprietary data, much of this unique, usable data that better captures customers, workflows, and business performance, that's where it lives. It lives in your systems of record. Systems of record like Oracle, like SAP. So this becomes a big challenge for companies like SAP: how do we develop tools that enable our customers and our users to unlock the power of all that data? And that's really what Datasphere is. It's the evolution of SAP Data Warehouse Cloud; it's a data lake warehouse, river stream, mountain view, valley brook. It is going to enable discovery, modeling, and distribution of critical data. And it's going to do so in a way that lives inside of SAP, inside of Datasphere, but is also ecosystem friendly. So it's all about A) being able to use your existing data, and B) being able to use data models and curated datasets that come from SAP in their Datasphere marketplace.

And then of course it's about simplification and ecosystems. So, building integrations with Databricks, with Confluent, with DataRobot. These are going to be really important going forward because, Pat, another thing that's critical for success is connectivity. You've got your structured data, your unstructured data, your real-time streaming data, and your legacy data sitting in storage. Having all of that data accessible, utilized, and commonly shared and collaborated upon, which is part of the Datasphere solution, is really important. In the end, what did we get, Pat? We get a data fabric, just like my vests. By the way, I just want to say I was the only guy at that party last week that didn't have a suit coat on.

Patrick Moorhead: Maybe I think I… Did you see Axe there by chance? Was he invited?

Daniel Newman: Axe is my hero, except for all the bad things he does. But anyway, so we're seeing a fabric created that basically allows companies to enrich all the data across all their ecosystems and then integrate it. And then finally, I guess my last note on Datasphere: the tool, the warehouse, the fabric, it's all really promising. And of course SAP has hundreds of thousands of customers, so this is not going to go unnoticed; it will be critically and seriously looked at. The advisory firms and large SIs should be beneficiaries of this, and the company is planning to roll this out with IBM, with Capgemini, with Deloitte, with Ernst & Young, and of course Accenture being the biggest of the bunch.

So it's early days for this tool, Pat, but I think this is the kind of technology that's saying: how do we take all our data, make it usable, representable, collaborative, and accessible, connect it to the other core data applications that we're using, and then make it SI-friendly, because most of this stuff gets done for large enterprises by SIs. And so this is what SAP is doing. This is what SAP, I think, has historically always attempted to do, but in this particular moment, when data is going to become even more critical to apps and AI, it's an important launch from the company.

Patrick Moorhead: Good breakdown, Daniel. A few adders here. This is very consistent with, I think, both of our talk tracks related to data, this notion of a data pipeline: bringing the data in, streaming or batch, any way you want; cleaning it up along the entire pipeline; knowing who gets access to it, so the governance, the privacy, the compliance; teeing it up in a data lake, a data warehouse, a structured database, or an unstructured database; then, if you want, augmenting it with AI or analytics; and finally deploying those models to be run. And by the way, at every step of that way, a company called Cloudera that you and I have been covering actually does every single thing that Datasphere talks about. The difference is that SAP is bringing this out with what look like Cloudera's biggest competitors: Databricks, Collibra, Confluent, and DataRobot.
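[Editor's note: the pipeline described here, ingest streaming or batch, clean along the way, govern access, land the data, then query it, can be sketched in a few lines. Everything below is illustrative; the class and method names are invented for this example and are not any vendor's actual API.]

```python
# Toy end-to-end data pipeline: ingest -> clean -> land -> governed query.
# All names here are hypothetical, chosen only to mirror the stages above.
from dataclasses import dataclass, field


@dataclass
class Record:
    user: str
    amount: float
    source: str  # "stream" or "batch": both enter the same pipeline


@dataclass
class Pipeline:
    allowed_roles: set = field(default_factory=lambda: {"analyst"})
    warehouse: list = field(default_factory=list)

    def ingest(self, records):
        # Streaming or batch, any way you want: records enter uniformly.
        return list(records)

    def clean(self, records):
        # Cleansing along the pipeline: drop malformed rows.
        return [r for r in records if r.amount >= 0]

    def land(self, records):
        # Tee the data up in a warehouse (here, just a list).
        self.warehouse.extend(records)

    def query(self, role):
        # Governance: only permitted roles can read the landed data.
        if role not in self.allowed_roles:
            raise PermissionError(f"role {role!r} has no access")
        return sum(r.amount for r in self.warehouse)


p = Pipeline()
p.land(p.clean(p.ingest([Record("a", 10.0, "stream"),
                         Record("b", -1.0, "batch"),
                         Record("c", 5.0, "batch")])))
print(p.query("analyst"))  # 15.0 (the malformed -1.0 row was dropped)
```

A real fabric would swap each stage for a managed service, but the shape, ingest, clean, govern, land, analyze, is the same.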

So it's super interesting. What I really like about it is the value it brings to the table: its inflows and outflows. So if you're a customer and you have things going on in Databricks, Collibra, Confluent, or DataRobot, you can not only pull in data from those services and use it inside of an SAP environment, but SAP can also elegantly export that information out to those environments as well. It kind of reminds me of the data sharing alliances that SAP and Microsoft have done in the past, but this is with some of these smaller niche players.

And an exclamation point, Daniel, on what you said about the different types of data: the advantage that SAP has is that the operational data is already there. So instead of ETL-ing it out, do the work there. It's very similar to a mainframe story as it relates to, let's say, financial transactions. Why ETL it out when you can do it right there? We all know you add extra cost by moving data; anytime you have to move data somewhere, it costs you money, and it also opens you up to security risks. So, an interesting announcement from these folks. Also, I don't know if this replaces SAP Data Warehouse Cloud, but it certainly looks like it.
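[Editor's note: the "why ETL it out" point can be made concrete with a toy example. SQLite stands in for the system of record here; this is an illustration of pushing computation to the data rather than SAP's actual mechanism.]

```python
# Compare two ways to total 10,000 transactions that live in a source
# system: push the aggregate down (one value crosses the wire) versus
# ETL-style extraction (every row crosses the wire first).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO txns VALUES (?, ?)",
                 [(i, 100.0) for i in range(10_000)])

# In-place: the source computes the total; only one number leaves it.
total_in_place = conn.execute("SELECT SUM(amount) FROM txns").fetchone()[0]

# ETL-style: extract every row, then aggregate elsewhere.
extracted = conn.execute("SELECT amount FROM txns").fetchall()
total_after_etl = sum(a for (a,) in extracted)

print(len(extracted))                     # 10000 rows moved
print(total_in_place == total_after_etl)  # True: same answer
```

Same answer either way, but the extraction path moved 10,000 rows to produce one number, which is the cost (and exposure) of moving data that Patrick is describing.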

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author whose most recent book is “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
