What CES Means for Data Centers and 2025 Data Infrastructure Predictions – Six Five Webcast – Infrastructure Matters


On this episode of the Six Five Webcast – Infrastructure Matters, hosts Camberley Bates, Keith Townsend, and Dion Hinchcliffe dive into the announcements from CES 2025 and their implications for data centers and data infrastructure moving forward.

Their discussion covers:

  • Nvidia’s groundbreaking launch at CES 2025, featuring the Project Digits personal AI supercomputer, the GB-200 data center superchip equipped with 72 Blackwell GPUs, and the debut of the Cosmos AI-friendly world model.
  • The evolution and growing significance of scale-out and parallel file systems in facilitating AI workloads within enterprises.
  • Emerging demands for more sophisticated integration and management of AI-focused data pipelines, spotlighting the roles of data lakes, streaming data, and vector databases.
  • The accelerated transition towards more efficient and compact solid-state storage solutions, driven by the intensive needs of AI computing environments.

Disclaimer: Six Five Webcast – Infrastructure Matters is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Camberley Bates: Good morning, everyone. We are back in 2025. We are at episode number 66 for Infrastructure Matters. I am here with my buds Keith Townsend and Dion Hinchcliffe. Welcome to the new year. Usually, this week is quiet, but it seems like some people decided to throw some things up on the wall. Not to mention a call-out to our friends in California that are dealing with what they’re dealing with, which is very, very tragic. But we’re going to pivot over here to the technology side of the house and give you guys a break from watching what’s going on with the fires, etcetera. With that, anybody get some skiing in or any fun stuff over the holidays?

Dion Hinchcliffe: Just family.

Camberley Bates: Just family.

Keith Townsend: Yeah. I didn’t get in any skiing. I did bring in the new year camping. We glamped a little bit. That’s not a bad way to bring in the new year.

Camberley Bates: Back in Tennessee?

Keith Townsend: Back in good ole Hohenwald, Tennessee.

Camberley Bates: Well, I can’t say I’m-

Keith Townsend: I got to work on my accent.

Camberley Bates: I can’t say I glamped. I did camp, tent camp, and etcetera, biked and hiked in Death Valley. If you haven’t been there, it’s a fabulous place to go. Especially at Christmastime, when the weather is good, very good.

Keith Townsend: It’s true.

Camberley Bates: We are going to go into this week’s big show in Vegas. Right off the new year is CES, the Consumer Electronics Show. Usually, this is not part of the data center infrastructure world, but nowadays it is, because so much of what we deliver as services, etcetera, is through these devices. It’s now all blending in. Dion’s got a whole lot to talk about, especially with how NVIDIA stole the show there again.

Dion Hinchcliffe: That’s right. While none of us were there, we did have folks who were there, including our CEO Daniel Newman, who I believe met with Jensen. It was an amazing show. He had a whole raft of announcements. Some of them were consumer and not really related to this show. But unusually for CES, there were some absolutely very much enterprise-relevant announcements. The first significant one was Project Digits. It’s a personal AI supercomputer that has 1000-times the power of a regular laptop. It’s powered by the new Grace Hopper chip. You’re going to see it in data centers. There were already people stacking them up as high as they could get them before they started to melt. The compute density of this thing, and it’s not really rack-mountable, but it’s the fastest way to get a big model running anywhere you want it. Anywhere you want to put almost the biggest model you can think of, it’ll run, and it’s about the size of the new Apple Mac Mini. It’s very small, intense compute density. You’ll see it really helping development shops and people actually operationally run some AI. The price point, it’s $3000 for 1000-times more power than you can ever get out of a laptop. That one’s pretty cool, but that one’s not super enterprise-y, although you will see it absolutely in data centers and in businesses around the world, in my opinion. Jensen came out with a full-sized wafer, showing that the GB-200, the data center superchip that has 72 Blackwell GPUs on it, really exists. It’s in production. He was walking around on stage with this single wafer that has 72 interconnected Blackwell GPUs that will form the basis of their massive compute. If you want to go all NVIDIA and get the biggest configuration that they have, you get stacks of this giant wafer that’s 72 Blackwell-

Camberley Bates: It’s the wafer that’s this big, or whatever it is. You’re saying there’s 72 Black-

Dion Hinchcliffe: That big. It’s like three-feet across. It’s the biggest wafer I’ve ever seen. On it, is 72 connected Blackwells. They were actually made that way on the wafer. On-wafer interconnect.

Camberley Bates: They’re going to put this in one computer then?

Dion Hinchcliffe: Yes.

Camberley Bates: Like the mainframe?

Keith Townsend: An AI mainframe. I think IBM did that.

Camberley Bates: An AI mainframe.

Keith Townsend: Yeah, IBM has something to say about AI mainframes. To help give some perspective here, the DGX is one Blackwell chip, basically. That one Blackwell chip has a petaflop of compute capability at FP4, with the FP4 standard. In real work, that means it can comfortably handle a 200 billion parameter model. An Intel CPU will handle about a 75 billion parameter model. The GB200 has 72-times that raw compute performance. So the networking, the memory, everything that needs to be done to scale that out on a wafer, that’s some pretty good computer science.
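
Those parameter counts come down to simple memory arithmetic. A back-of-envelope sketch (the helper function and figures below are illustrative assumptions for this discussion, not NVIDIA’s published math): at FP4, each parameter takes 4 bits, half a byte, so weight memory scales at roughly half a gigabyte per billion parameters.

```python
# Back-of-envelope model-memory math. All figures are illustrative
# assumptions, not vendor specifications.

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory (GB) for a model at a given precision,
    ignoring KV cache, activations, and runtime overhead."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# FP4 stores a parameter in 4 bits (half a byte), FP16 in 2 bytes.
print(weight_memory_gb(200, 4))   # 100.0 GB of weights at FP4
print(weight_memory_gb(200, 16))  # 400.0 GB of weights at FP16
```

So a 200-billion-parameter model needs on the order of 100 GB for weights alone at FP4, versus roughly 400 GB at FP16, which is why low-precision formats matter so much for fitting large models on less hardware.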

Camberley Bates: Way back when, if we’re going to talk about the mainframe, and of course I will, some analyst made this prediction of how many mainframes would be sold. I think it was something less than 20, maybe it was 15 or something. Who in the heck is going to buy this? I’m not going to say they’re only going to sell 15 of them. This’ll be used for the large language models?

Keith Townsend: Yeah. This is all still just training. There’s a question … We’ll get to our predictions over the next few weeks, but I do have a prediction around AI in the enterprise. Enterprise teams are not going to do massive training. They’ll do retraining. This is for the hyperscalers, the Metas of the world, all these companies, Groq, SOZAI.

Dion Hinchcliffe: Microsoft has already announced that they have Blackwell in the data center, and you can get special instances, you can specially select that. This chip is intended for those massive installations. Microsoft, just this week, announced $80 billion in capex they’re going to invest this year, in 2025, in their data centers. Of course, a lot of that spend will be exactly on this particular chip.

Keith Townsend: It does bring up questions that we’ve talked about over 2024 around heating and cooling. Power, cooling, how do you keep these things cool? That is going to be one of the biggest problems. Of course, how are you going to power them? We don’t have enough power now. It’s going to take four to five years to bring on new nuclear reactors. Companies like Microsoft are buying whole reactors and putting their data centers next to these reactors, taking up all the power from those reactors. It’s a fascinating arms race around AI.

Camberley Bates: Wow. I’m just like you, I’ll need to go online and take a look at it because I missed it this week. I want to see this chip. You can’t even call it a chip, can you? Really, to call that thing a chip if it’s-

Dion Hinchcliffe: No, it’s all on its own.

Keith Townsend: Yeah. Is something three-feet wide a chip?

Dion Hinchcliffe: For sure.

Camberley Bates: It’s a wafer.

Keith Townsend: It’s a wafer.

Camberley Bates: It’s a big, huge, massive wafer.

Dion Hinchcliffe: Yes.

Camberley Bates: Okay. What else came from CES?

Dion Hinchcliffe: Then finally, NVIDIA announced Cosmos, their new AI-friendly world model. A lot of these large language models don’t really have a good model of the world. They can make a lot of inferences because they trained up on a lot of incidental information about how the world works. Consequently, there’s a real need for world models, and Cosmos is supposed to be the most enterprise-ready, capable way to build true AI-powered digital twins that understand all the physics, and all the geometries, and everything about the world. They had some pretty cool demonstrations of AIs wandering around inside Cosmos, interacting with that world as if they were in the real world. It was pretty cool. We’ll see that with robotics, we’ll see that in a lot of simulation, R&D, manufacturing, healthcare, things like that. Logistics.

Camberley Bates: When we get to those world models, I’m assuming that we’re going to need these wafers.

Dion Hinchcliffe: Yes. They always consume massive amounts of power.

Camberley Bates: Well, yes.

Keith Townsend: Yeah, it’s lending credence to the idea that maybe we are in a simulation, because we’re building equipment that can run simulations. It is hilarious. This is an amazing time to be in technology.

Dion Hinchcliffe: It really is.

Camberley Bates: That’s amazing. Well, on the flip side of all this technology that we’re building and the things that NVIDIA’s doing, there are some new, you’re calling them AI export rules, that are potentially coming out, or look pretty likely to come out, from the Biden Administration even before we pass the baton to the next administration. Dion, you want to walk through what’s going on? I understand that there is a massive amount of conversation on this because it’s causing some huge angst for companies.

Dion Hinchcliffe: It really is. The Biden Administration has proposed something called the Export Control Framework for Artificial Intelligence Diffusion. The intent is ostensibly a good one, which is to prevent very powerful AI from getting into the hands of our enemies and being used against us. However, companies like Oracle have come out very publicly against it, saying, and I quote, “It is one of the most destructive rules to ever hit the US technology industry.” They warned a few weeks ago, but now it looks like it has gone through the interim final rule stage. It is not public. I cannot get access to it because how it’s supposed to be enforced is highly secret. It’s one to watch. The Semiconductor Industry Association didn’t come out swinging as hard as Oracle, but is very much against it. The prediction is that it would reduce GPU sales in the United States by up to 80%. I think some of that is debatable.

The thing is, it’s hard to tell, it’s hard to understand what is in it and how it works, so it’s hard to judge it. It’s basically what the government did with cryptography way back in the day so that our enemies couldn’t keep their secrets from us. This is something similar, keeping very powerful AI out of the hands of our enemies. It’s unclear what will happen, if it will make it to actual official enforcement before the new administration comes in, who very, very well could get rid of it after that. But anyway, it was very much the topic of discussion this week in semiconductor and AI circles. It’s on the fast-track and came out of nowhere. We haven’t been tracking this, and this is what we do. It’s very interesting to see what will happen. It’s something that certainly our infrastructure friends in the industry will have to watch very closely and try to influence, to make sure there’s not overreach in it.

Camberley Bates: Is this specifically on GPUs, or is it on the large language models? What does it cover?

Dion Hinchcliffe: It’s targeted at high risk uses and it restricts very high volume users of GPUs. I’m still going through everything. It’s trying to control the quantities of GPUs that come out of the United States. It’s a little bit less on the models, because what they’re really trying to do is control the ability to even run the models.

Keith Townsend: Yeah. This has been highly debated for quite some time. There’s already GPU controls-

Dion Hinchcliffe: Yes, there are.

Keith Townsend: … on who we can export GPUs to. China came out with, I forget the name of the model, but it hit the news last week. It was a model that works four-times faster than, I think it was Llama, or whatever model it mimics. I think the cat is already out of the bag with some of this. It’s a complex question because there is this debate on what controls Congress and the government want to pass. Because I think, if I was to guess looking at the news or at TikTok, there’s probably pretty good bipartisan support for something like this. The government and tech leaders seem to be at odds over what’s best for technology versus what’s best for US national interest, security interest.

Dion Hinchcliffe: Yeah. Well, it identifies 20 of what are called artificial intelligence authorized countries, and they’re the ones that can get hardware that can run frontier models. They’re the ones that will be able to develop AI. Because the concern is, even if we keep our models away from the bad actors, they can still build their own. But what if we take that power away, too? There are only 20 countries on that authorized list, and everyone else has very sharply reduced access to any powerful GPUs under this export rule.

Camberley Bates: This is coming out of the Department of Commerce, correct? Because they’re the import-export folks, and those organizations.

Dion Hinchcliffe: It is very complicated. There’s a lot of agencies involved in it. Obviously, it would have to be Commerce. But there’s also the Department of Homeland Security, which is a sponsor of it, and things like that. The Bureau of Industry and Security.

Keith Townsend: This is something to watch, even as we change from administration to administration. The folks in Congress are not the friends of technologists right now.

Dion Hinchcliffe: Yeah, right.

Camberley Bates: Well, it depends. I think on the way out.

Dion Hinchcliffe: It’s a fast-developing story and one everyone should be watching in the tech business.

Camberley Bates: Coming off of the current administration area, which I’m going to try to stay away from, and everything that’s going on. We did have, in my world, the data infrastructure world, this company, a lot of people don’t know who they are. But DDN, the DataDirect Networks folks, they’ve been privately held by Alex and Paul. Alex is a very flamboyant Frenchman, I believe he’s French. They’ve been darlings in the HPC market. They have traditionally only focused on the HPC market, or the R&D market. With AI coming on board, they have just taken off like crazy. I know from talking to some folks, all they’ve been doing is pouring money into just shipping boxes, which means that’s all the inventory and everything else that you’re moving.

They’ve just taken a $300 million investment from Blackstone, which is a private equity firm, actually a publicly traded company if you want to go look at them. That brings their valuation to $5 billion, which seems like it would be a little low when you compare it to some of the other guys that have gone up. But we’ll see. I’m sure what’s happening here is that the co-owners are holding their ownership and the power of the company, because it’s just been these two guys that have owned it, and it’s extremely profitable, etcetera, etcetera. It’s very exciting to see yet again another hardware vendor and software vendor that has taken on some big dollars in this market. It’s not just about software anymore, it’s definitely the-

Dion Hinchcliffe: No, hardware is back big time. Of course, that’s a capital-intensive game, and you need to be very well funded for the long term to even play these days.

Camberley Bates: Yeah. They’ve been very, very close with NVIDIA all along because this has been their market. This is all they’ve done. Yes, they bought a company called Tintri five, six years ago, and they looked like they were starting to try to expand into the traditional data center space. But AI taking off, there’s really no need for them to do that in terms of their valuation or who the company is. They just exploded on that side. It’s been very, very cool, and a big congratulations to these guys out there.

Dion Hinchcliffe: Yes.

Camberley Bates: Yeah. That goes along with, I guess, the other thing we were going to do. We were going to take this session and really go through our predictions for 2025, but with all the things that were going on, we said we didn’t have time. What we’re going to do is split it over the next few weeks, for each of us to take a section and talk about it. I had mine ready to go, which is of course my first two issues. They’re more granular in terms of predictions, because yes, the market’s going to grow. Yes, data’s going to grow. Yes, all these things are going to happen. But it’s like, what are the specific things that we’re going to start to see that probably weren’t part of our purview in the past? When I sat down and looked at that, I said one of the biggest things I think is going to happen is the rise of scale-out file and the understanding of what we need, the need for parallel file systems and the requirements for AI. Up to this time, it’s always been a secondary market. It’s been a market for R&D. It’s been a market for the big labs, etcetera. That’s who it was, because parallel file systems are not the easiest things to manage, either. The ones that I’m thinking about would be Lustre, would be GPFS or IBM Storage Scale, would be BeeGFS, some of the guys that are out there.

It’s just not something that somebody wants to go play with. But as we’re seeing, each and every one of the vendors is bringing out something to say, “Parallel scale-out.” Not just scale-out, but parallel file systems. Dell is working on something. You’re hearing a vast move in some of those areas. They have those relationships, some of them have those relationships with those systems already. But we’re also seeing Hammerspace being put on top of some of these file systems in order to bring a parallel-like activity that’s out there. Basically what that means is that the enterprise that’s never paid attention to this, unless you were a big, big R&D facility, is all of a sudden saying, “I can’t get my very high speed file system, such as PowerScale from Dell or NetApp, to perform at the level I’ve got to make it perform at for AI. What else is out there?”

Dion Hinchcliffe: Those file systems are not designed for that. We’re dealing in entirely new levels, petabytes of training data and things like that. It’s crazy.

Camberley Bates: This next year, I think that conversation is going to rise up. I think the enterprise is going to start saying, “Okay, so what is this? Tell me more about it. What is the difference between A and B? Why can’t I do this with this?” That, to me, is the big rise. The second piece of that is the understanding and the tuning-in of where object and file belong in that data pipeline for the AI initiative, where that’s going. We’ve got the data lake that needs to exist. How is that going to exist in the data center? What’s that got to look like? How am I going to afford that? Because we’re hearing from the enterprise folks that there’s a huge sticker shock in terms of what this is going to look like potentially. We’ve got to look at different ways of storing that, because if this side is too expensive, I can’t justify the ROI here. But can I not justify the ROI here? Somewhere along the line, there is invention and opportunity here to understand what’s going to have to happen with these object and file data lakes.

Dion Hinchcliffe: Well, I think there’s a whole ROI problem in general. I’m looking at all the venture capital firms that are putting together the graphs of the expenditure on AI and the expected profits over the next 10 years, and they don’t add up. You have to do it, you have to invest in AI, or you’re out of the game altogether. But even the $80 billion that Microsoft is spending on infrastructure, which is going to involve all those things you’re talking about, massive file systems, even greater amounts of training data and all of these things, where’s the end game? Are we going to see exponential growth and profits taken from all these investments? If we don’t see that, I’m not sure … It seems like the industry’s not heading in a sustainable direction, so it’s interesting.

Keith Townsend: Yeah. I think, in my experience with file systems and the enterprise, it is very, very, very difficult to get enterprises to change file systems. When you’re talking about where they’re stored, changing user habits. And in this case, we’re probably talking about data scientists. We’re probably talking about researchers in the enterprise, and people who are looking to get the data into AI systems. The challenge is that the data exists on filers and NAS systems, and it’s distributed, it’s in a public cloud. Trying to, without interrupting users’ workflows, get this data into some type of parallel file system that’s usable, and that can use RAG, etcetera, etcetera. I think that will be the challenge of 2025. The need for performance we’ll obviously see, but the smart folks coming up with solutions that make this as seamless a transition as possible is the challenge.

Camberley Bates: Then throw in two of the technologies that are part of this entire AI capability. One is streaming data. If I’m going to do real-time AI with streaming data, etcetera, all of a sudden, it brings more complexity to what we’re doing with the pipeline. We hear from the different folks about, “How do you adopt the streaming data that’s coming in for those decisions?” The second piece of it, which is already part of the environment, is the vector databases. As the vector databases get indexed, they grow. They grow significantly. How do I deal with that? There’s probably invention to be had in those areas from an engineering standpoint, as we move forward with these next few years and address, as you were saying, the ROI, etcetera.

The last two that I had on my list are that, from the enterprise standpoint, we’re going to see a continuing decrease of hard drives there, and a move to QLC. That’s been driving revenue for these guys and is going to continue to drive revenue for the major vendors as they roll out. We’ll see less and less of those blended boxes that are in there as they come of age, and that will fuel the revenue in those spaces. The fourth area that I’ll bring up is the area of data protection and cybersecurity. That does not leave. Dion, as you well pointed out in your CIO Insights, it is still number one on the list. We will still continue to look at what we need to change in data protection in order to improve recoverability and the speed of recoverability. There’s two pieces there. It’s not just making sure that we’re safe, but now what they’re looking at is how fast can I restore when I get hit. Then the other piece of that is how do I protect not only the secondary data, but the primary data as well. We’ll see the increase of technology there, and that will become a competitive advantage for those vendors that have built into their primary storage the capabilities to protect those devices.

Keith Townsend: Yeah. Then combining two ideas that you are looking forward to: how do you do this with AI? With AI and streaming data, and now I want AI when this secondary data becomes production data. And the need to protect that from ransomware, from cyber threats in real time, and be able to recover in real time. I think we’re going to see, again, your premise that we’re going to see the reduction of hard drives. We’re going to need much better IO, so much better and denser IO. I don’t know if I ever thought I’d say this: a 27 terabyte hard drive just is not enough. It’s not enough and it’s too slow, compared to the 122 terabytes we’re seeing out of the major SSDs, and going to 256 terabyte SSDs this year. The need to just have faster and bigger IO, it’s an amazing set of challenges going into the new year.

Dion Hinchcliffe: Well, GPUs can take in so much data, the question is how do you get the networks and the hard drives to deliver information as fast as they can process it? That’s the core problem right there. You basically have to match the data throughput of the GPUs across the entire data center. That’s amazing. They’re going to try to do it.
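
That matching problem can be put as back-of-envelope arithmetic. A rough sizing sketch (the function and every figure below are hypothetical placeholders for illustration, not measured or quoted numbers from the episode):

```python
import math

# Rough throughput-matching sketch: how many storage devices does it
# take to feed a GPU pipeline at a target aggregate read rate? All
# numbers are hypothetical placeholders, not vendor specifications.

def devices_needed(target_gb_per_s: float, per_device_gb_per_s: float) -> int:
    """Smallest whole number of devices whose combined sustained
    throughput meets or exceeds the target (ceiling division)."""
    return math.ceil(target_gb_per_s / per_device_gb_per_s)

# e.g. a hypothetical 512 GB/s ingest target fed by drives that each
# sustain 7 GB/s of reads needs at least 74 drives, before accounting
# for erasure-coding, replication, or network overhead.
print(devices_needed(512, 7))  # 74
```

The same ceiling-division logic applies one tier up: the network fabric between storage and GPUs has to be sized against the same target rate, which is why the whole data center ends up engineered around GPU ingest speed.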

Keith Townsend: Yeah. Obviously, we’re going to see opportunities for folks like Cisco. The Tech Field Day folks will be at Cisco Live EU, I think that’s next month from when we’re recording this. Obviously, HPE, with its acquisition of Juniper, has the ability to deliver these tightly integrated stacks. Dell has its network stack and its ability to build these NVIDIA-blessed stacks, even though NVIDIA has competing networking. As you start to tear this apart and we start talking about vendors, vendor relationships, the ecosystem, it makes for a really interesting year of how do we get these engineered systems that we want from our favorite vendors, blessed by the people who are controlling AI, which is basically NVIDIA.

Camberley Bates: Then all of our US customers should consider themselves blessed, because they won’t fall under these export control issues. If we’re going to tie all these pieces together, they’ll have the freedom to acquire whatever they want to acquire, and be able to build on those systems. Wow. Well, that brings us almost to the end of the hour. Any other final comments, any kind of cool stuff you guys have? Then we’ll wrap up here.

Keith Townsend: I just want to give a hat-tip to our friends at Kamiwaza. Luke and Matt have been hard at work with our folks in the Signal65 Lab, doing some really interesting AI. They received $11 million in funding from some Seattle-based venture capitalists. This will enable them to continue building what one venture capitalist was calling the Docker of AI. Stay tuned for some insights from our Signal65 Labs on some of the work where we’ve used Kamiwaza to help us produce some really interesting research.

Camberley Bates: Did you just say they’re located in my backyard?

Keith Townsend: They’re located in your backyard. I think Luke’s home overlooks The Sisters. I saw it as he was a rehab man. He worked for a different enterprise data company. Yeah.

Camberley Bates: Okay. Well, I’ll have to look him up then. Very cool. Alrighty, guys. Thank you very much for tuning in. Don’t forget to like, share, all those good things that we ask you to do as we continue to bring you Infrastructure Matters, probably the most interesting podcast that you will listen to all week long. All right, have a great week.

Author Information

Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm, as Managing Director.

Her career has spanned all elements of sales and marketing, including a 360-degree view of addressing challenges and delivering solutions, achieved by crossing the boundary of sales and channel engagement with large enterprise vendors and her own 100-person IT services firm.

Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, “MAID,” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and succeeded in growing the company from $14 million to $500 million, and built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.

She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.

Dion Hinchcliffe is a distinguished thought leader, IT expert, and enterprise architect, celebrated for his strategic advisory with Fortune 500 and Global 2000 companies. With over 25 years of experience, Dion works with the leadership teams of top enterprises, as well as leading tech companies, in bridging the gap between business and technology, focusing on enterprise AI, IT management, cloud computing, and digital business. He is a sought-after keynote speaker, industry analyst, and author, known for his insightful and in-depth contributions to digital strategy, IT topics, and digital transformation. Dion’s influence is particularly notable in the CIO community, where he engages actively with CIO roundtables and has been ranked numerous times as one of the top global influencers of Chief Information Officers. He also serves as an executive fellow at the SDA Bocconi Center for Digital Strategies.

Keith Townsend is a technology management consultant with more than 20 years of related experience in designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University. He is the President of the CTO Advisor, part of The Futurum Group.
