The Six Five team discusses AWS showing its generative AI hand.
If you are interested in watching the full episode you can check it out here.
Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.
Transcript:
Pat Moorhead: So you had Azure and you had Google Cloud come out with some of their answers to generative AI, and AWS laid out its generative AI play yesterday, in fact. And I had a chance to talk to AWS Vice President and General Manager Bratin Saha, who runs AI/ML and, most importantly, is a Six Five alumnus. We had a conversation with him at re:MARS, their conference covering space and AI. So here's the news. They brought out what's called Amazon Bedrock, a limited preview of best-in-breed foundational models from names you would know, like AI21 Labs, Anthropic, and Stability AI, the folks behind Stable Diffusion, and they brought out their own foundational model. It's called Titan, and they talked about two of them being in preview. So they're running a best-of-breed track and they're running their own. I'm not saying that Amazon's aren't best in breed, we just don't know enough about them at this point.
The company also said that it went GA on its Trainium-based instances, talking about delivering up to 50% savings on training costs over any other EC2 instance, which, by the way, includes NVIDIA and Intel. Amazon also went GA on EC2 Inf2 instances, which use the new Inferentia 2 chip, and they're claiming up to 40% better inference price performance than any comparable EC2 instances and the lowest-cost inference in the cloud. And the first statement, I think, is in comparison to NVIDIA, and the second, I think, is in comparison to Google's TPU. So-
Daniel Newman: Read that off again. Read that off again. I just want to hear that one more time.
Pat Moorhead: Yeah, 40% better inference price performance than any comparable EC2 instances and the lowest cost for inference in the cloud.
Daniel Newman: Is this a swing, Pat?
Pat Moorhead: This is a huge swing, Daniel, and I'm going to get into more of that. But then they brought out CodeWhisperer, which is essentially this companion for programming, where they claim participants using CodeWhisperer completed tasks 50% faster on average and were 20% more likely to complete them successfully than those who didn't use CodeWhisperer. Again, huge measured claims that the company is making here. So here's my net-net on this. First of all, this is big, big, big for foundational models. This is a more detailed and more holistic offering than anything I've seen to date. A company bringing out a complete line of best-of-breed foundational models plus two of its own. Bringing out a complete line of homegrown training and inference services based on its own silicon, with huge claims on lowest cost. By the way, the company did claim highest performance based on a lot of its technologies with the super clusters and its networking, and I believe that is likely NVIDIA based.
And then finally, a coding tool that supports a freaky number of languages and IDEs. Literally, I don't know how they did this, but almost every language that I'm aware of and every modern IDE that's out there – by the way, including Visual Studio from Microsoft. So from what it looks like to me, the company is in a good place. Now, the Trainium and Inferentia 2 instances, those are GA, but Bedrock is in limited preview. It did talk about some customers, though, so it's not vapor. And by the way, I have never seen AWS bring out anything that ended up being vapor. And you can expect Bedrock to be GA in probably a year, based on how long it takes AWS to go from preview to GA. Good showing.
Daniel Newman: Yeah, absolutely, Pat. It would've been a ridiculous notion for anybody to count out AWS and Amazon in this play. Remember the amount of data, from Alexa alone, that Amazon has to play with for its business. And obviously I'm not trying to conflate AWS and data center large language models and open source, but what I'm trying to say is it's been kind of interesting, because different companies have been rolling out their first iterations at different paces, and Google, with its market position, felt a little bit more pressure to show its AI leadership. I don't think AWS feels exactly the same way about it. I think they're running their own race a little bit more. I watched Andy Jassy's interview yesterday, and he said something really profound. He said, "Look, we have about 1% penetration of retail right now and the rest is still brick and mortar."
And then he said, "10% of IT spending right now is cloud," and, "The other 90% is still on prem." And he said, "If you believe that those two markets, e-commerce and cloud, are going to expand in the future, then Amazon's a pretty good bet." And I'm pointing that out because they did a shareholder letter and he went on CNBC, and he doesn't talk much. But what I guess I'm saying is Amazon has a lot of data, a lot of training data, a lot of reasons to try to create an efficient offering for all of its enterprise clients to be able to utilize large language models and stay on the Amazon and AWS platforms. Additionally, we've talked quite a bit about NVIDIA today, and I think AWS has a little bit of a bone to pick after the DGX Cloud offering and its decision not to offer that.
And it's sensible. AWS is the only one right now that has GA silicon for training and inference, and NVIDIA is more and more becoming competition. So yes, you can obviously run EC2, and all the instances are available with NVIDIA, with Gaudi from Intel's Habana, and with the different offerings from the other silicon makers. But AWS plans to make its own hay in the silicon space, and I've been saying that pretty specifically. So when you look at companies that have massive sets of proprietary enterprise data in the cloud, I don't think there is a public cloud provider that has more data than AWS. It's just the largest public cloud provider by a distance right now. And so the ability to turn that into a product that can be utilized by enterprises, government entities, et cetera, is going to be material.
It's palpable. So I like it. I love the competition, Pat. I'm having a lot of fun watching this. As analysts, this is the best part. We opine. We put our thoughts out there on who's winning. I think AWS was ruled out too soon, and I think they're going to be making a bigger impact. And by the way, watch out for every cloud provider. I mean, you heard Oracle and NVIDIA just put their thoughts in the market this week. Everybody's going to make a play, and it's going to be a lot of fun to watch what happens in this space in the coming months.
Pat Moorhead: Yeah, I don't think anybody with credibility ever had any doubts about AWS. There's a lot of people out there who can't disconnect this. They look at this as one homogeneous blob, when in fact you have the consumer market and you have the B2B market. First of all, they're very different. And then you have the B2B market, which might be serving a B2C company. So it's so much more complex. Doing a Bard search and a Bing chat and saying, "Who won the AI war?" is lazy. You might be able to make that statement from a B2C standpoint, look at what Google and Microsoft are doing in the consumer space, and say, "Where's Apple?" Apple has not talked about anything. They are clearly behind, in terms of at least announcing these things. And-
Daniel Newman: I asked Siri. I asked Siri which one’s better.
Pat Moorhead: Oh gosh, I don’t even use it anymore because it’s so bad. It’s such a waste.
Daniel Newman: Hey, you want to poop on Apple a little bit?
Pat Moorhead: No, I’d like to complete my thought if you don’t mind.
Daniel Newman: Oh, you’re not done?
Pat Moorhead: No, no.
Daniel Newman: Oh.
Pat Moorhead: So, on your comment about NVIDIA, not only did NVIDIA put out an IaaS service that they sell with their own salespeople, distributed through CSPs like Google Cloud and Azure and Oracle, but they also brought out NVIDIA AI Foundations. These are their own foundational models that you run on top of DGX Cloud. So IaaS and PaaS. So yeah, it doesn't look like AWS was interested yet in that type of deal. Maybe we'll see it, maybe we won't.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst, and his ideas are regularly cited in television appearances on CNBC and Bloomberg, in the Wall Street Journal, and on hundreds of other sites around the world.
A 7x best-selling author whose most recent book is “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.