On this episode of The Six Five – On The Road, hosts Daniel Newman and Patrick Moorhead welcome Chetan Kapoor, Director at AWS EC2, for a conversation on the AWS generative AI infrastructure announced at AWS re:Invent.
Their discussion covers:
- A recap of the AWS generative AI infrastructure announcements made at AWS re:Invent
- AWS’s strategic partnership with NVIDIA
- An overview of AWS’s purpose-built accelerator, Trainium2
- How these key AWS differentiators compare to alternative options
Be sure to subscribe to The Six Five Webcast, so you never miss an episode.
Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Patrick Moorhead: The Six Five is on the road in Las Vegas at AWS re:Invent 2023. We are having some incredible discussions with executives and SMEs from AWS. And Dan, there are a couple of trends here. I mean, a lot of interesting compute. We’re talking serverless and, of course, what’s a technology event over the last year without talking about AI?
Daniel Newman: AI.
Patrick Moorhead: Imagine that.
Daniel Newman: What are you talking about?
Patrick Moorhead: Exactly.
Daniel Newman: It’s like you start to have this banter, this joke of: how far can you get into any conversation about any topic before AI comes up? You’re talking about edge, but the third sentence is AI. You’re talking about serverless, but the fifth sentence is about AI. And that reflects the trend line of 2023 and all the excitement, all the energy, all the opportunity, and all the growth. And I guess the A in AWS might stand for Amazon, but these days it also seems to stand for AI.
Patrick Moorhead: Yeah, it really does. And we’re here podcasting at the Future Frequency booth, or is it a container? I think a shipping container.
Daniel Newman: Well, there’s no irony for me that we are in a container. I just wonder how they’re going to pack us up and deploy us like a workload. We’ve been doing that between here, the Venetian, and the Palazzo. The Mandalay Bay. We’re like this human research analysis workload, and we’re being spun up. And then hopefully the Dan and Pat bots end up coming out the other end with some really thoughtful analysis. And maybe we can talk to some guests about that.
Patrick Moorhead: Yeah, why don’t we dive right in here. I’d like to introduce Chetan. Great to see you again.
Chetan Kapoor: Hey, guys.
Patrick Moorhead: Yeah, it’s funny, I think 10 years ago I had a conversation with a reporter who said, “Hey, we’re not going to cover chips anymore, compute, because it’s not exciting. And our readers-“
Chetan Kapoor: It’s totally underrated.
Patrick Moorhead: I mean, look at where we are today. I get that software makes everything happen, rules the world, but software has to run on something. It has to run on the highest performance, highest efficiency infrastructure out there. And you, essentially running most of the EC2 compute out there, are in a very, very interesting role. So welcome to the show.
Chetan Kapoor: Yeah, thanks. It’s great to be here. And in my role, at least over the last six, seven years, the amount of growth we have seen in the deep learning space, and now GenAI, has just been incredible. I still remember having conversations with some analysts about five, six years ago. They were like, you know what? Infrastructure is not differentiated. A chip here is a chip there. A GPU here is a GPU there. And oh boy, we have changed that conversation quite a bit.
Patrick Moorhead: By the way, for the record, that was not Dan or Pat. In fact, what I always say is you allow yourself to be commoditized over time. I saw this in a few markets that I operated in back in the nineties. You have to put forth a great effort to differentiate, and that is exactly what you have done.
Chetan Kapoor: And we have been very thoughtful about this. We do have the luxury of having the largest enterprises running on AWS. And instead of building just general-purpose CPUs or ML chips that are good for each and every workload, we are very, very targeted. We’re like, okay, what are the key workloads that customers are running on AWS, and how can we better serve them? And we provide similar feedback to our silicon partners: okay, this is the kind of requirement we are seeing on the core frequency side, the core count side, the bandwidth side, things like that. But for us, it is also important to have a diversity of choice in the portfolio. There will be large enterprises that have a strong preference for vendor A versus vendor B, and we want to be there to support their requirements. But if they’re open to an alternative, we want to talk to them about the differentiation we have.
Daniel Newman: The ethos of Amazon and AWS has always been very customer-centric, customer-obsessed. Very much meet the customer. But you also know that the most innovative companies on the planet have historically sometimes had a better sense of where the market is going than the customer. So you always have to walk that line of: hey, we’re going to invent the next thing, we’re going to build what’s more valuable. We’re going to think about things like sustainability and power a little bit differently. Or we might think about architectures where, yeah, right now everything might run on architecture A, but in three years, the way certain trend lines are going, it will work on B, and we can then deliver economics. And Amazon has always been about economics too, helping people get what they need for a lower price. That’s at the very core of the business. But listen, you had a big week, and we could talk across a lot of infrastructure and silicon announcements. But Pat, I think, because we only have so much time with Chetan, we should focus on generative AI, because no one wants to talk about that. So we can be the one podcast that actually talks about generative AI.
Patrick Moorhead: Just hit this up.
Daniel Newman: I just want to make sure, because I feel like this show just lacked real coverage of generative AI. Maybe we could have Q do it. Let’s Q this up. We should talk about that offline, by the way. I had a great idea. We had some good ideas kicked around about Q, but I can’t tell. Sorry, buddy. I can’t tell you. You can’t find out.
Patrick Moorhead: Sorry. These are insider conversations. But all right, run us down the generative AI infrastructure announcements.
Chetan Kapoor: So, massive week for us. First and foremost, let’s start off with our collaboration announcement with NVIDIA, right? We were the first cloud provider to bring NVIDIA GPUs to the market. And I’ve personally been involved over the last six, seven years in bringing their latest and greatest data center GPUs to the cloud. Going back to V100s in 2017, and A100s and H100s, we were the first major cloud provider to make them available. So there has been a false narrative in the market that AWS and NVIDIA haven’t been collaborating over the last year, year and a half. And that’s just absolutely not true. With this announcement, we talked about three key things. The first one is that we are picking up NVIDIA’s Grace Hopper 200 Superchip, which combines their Grace CPU with the Hopper GPU. But more importantly, it actually connects a rack full of these servers via NVIDIA’s NVLink technology. That is super, super important, and I’ll share a little bit more about why. It comes down to the fact that these GenAI models are just exploding in size. There will be some back and forth where they grow and shrink and things like that. We’re going to see that. But generally speaking, the trend says these models will get larger, and the amount of GPU memory that you have in an eight-GPU server is just not enough. That’s why you need a rack-level solution that actually gives you the next 5x, 7x increase in GPU memory. So it’s specifically the GH200 variant, with the NVLink interconnect across the entire rack, that we are going to be partnering really closely on. And our expectation is we’re going to be the first major CSP to bring that to the market. So that’s one.
The second one was that NVIDIA is actually going to be a big user of this infrastructure. They have a need for this type of compute for their own research and development activities, so they’re going to be partnering with us and consuming large quantities. Jensen talked about this in his keynote: a 16,000-GPU cluster that we’re going to be standing up and that NVIDIA is going to be consuming. And then lastly, they have a managed service called DGX Cloud that they’re going to be running on this infrastructure, which we’re going to deploy over the next few months. So that was on the NVIDIA side. Again, the close partnership with them has just meant deeper engagement to bring additional value to our customers. Another big announcement was around Trainium 2, where, again, we were talking about differentiation at the infrastructure layer. Trainium 2 will be the follow-on to our existing product, with 4x more compute capability. We’re going to deploy it at much larger scale, and we’re going to have a key customer, Anthropic, building their next-generation foundation models on Trainium 2. There were a bunch of other announcements, but at least from my side of the business, those were the top ones.
Daniel Newman: Is that it? I mean, that’s it?
Chetan Kapoor: I know.
Patrick Moorhead: Is that it? I think we’re done.
Chetan Kapoor: Oh, yeah.
Patrick Moorhead: Hey, one thing that stuck out to me is that your DGX Cloud implementation is going to be connected by EFA.
Chetan Kapoor: Yes.
Patrick Moorhead: Can you talk about why that’s important?
Chetan Kapoor: Oh, that’s super important. We have spent the last three or four months discussing and going back and forth with NVIDIA. They obviously had a different architecture in mind with respect to how they would like to connect all these racks together. And from our standpoint, EFA is a really, really core component of our overall strategy for supporting these large-scale distributed training workloads. It has been fabulous in helping us set up massive clusters of GPUs in the past, and it gives us the scalability and the flexibility we need. So instead of having a rigid deployment that is tied to a particular cluster, we have the ability to scale up an EFA-based cluster, which leverages Ethernet as a fabric, very seamlessly. We can support customers who want 16 GPUs all the way up to 12,000 GPUs on the same fabric. So it’s a core differentiator. It’s highly scalable, and it’s really cost-effective because we are building it ourselves, right? We have the Nitro cards that run EFA, we make our own switch boxes, we make our own switch racks. And from a cost, economics, and scalability perspective, it’s just very, very fundamental to our infrastructure.
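To make the EFA piece concrete: a minimal sketch of what requesting an EFA-attached instance looks like with boto3. The AMI, subnet, security group, placement group, and instance type here are illustrative placeholders, not values from this conversation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a GPU instance with an EFA network interface attached.
# EFA rides on the Ethernet fabric Chetan describes; a cluster
# placement group keeps nodes close together for low-latency training.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # e.g., a Deep Learning AMI (placeholder)
    InstanceType="p5.48xlarge",           # an EFA-capable instance type
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",           # ask EC2 for an EFA interface
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
    Placement={"GroupName": "my-training-cluster"},
)
print(response["Instances"][0]["InstanceId"])
```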
Patrick Moorhead: No, that’s great. Can you talk a little bit about the strategy? I mean, you do your own silicon, which is very competitive, and you have merchant silicon as well. How do you strategize how you pull those together? Is it, hey, there’s maybe a missing piece here? Or, hey, I think I can differentiate here? What’s your uber strategy across that?
Chetan Kapoor: Yeah, no, that’s a great question, Pat. So at the top level, we want to make sure there’s differentiation across all swim lanes. So if you look at-
Patrick Moorhead: And swim lane meaning workload?
Chetan Kapoor: So swim lane means workloads, and also what we are doing with our partners. So there are the instances we have with Intel or AMD, as an example, on the CPU side, and then we have Graviton on our side. Similarly, we have NVIDIA GPUs, and we have our Trainium and Inferentia engines. On the CPU side, if you look at what we announced with Intel and AMD a few months ago with Sapphire Rapids and Genoa, we were able to work with them to actually customize their CPU designs and tailor them specifically to the type of workloads we are seeing. In the Sapphire Rapids case, the CPU that we launched with is only available in AWS. It’s not the same CPU that other CSPs are going to pick up or that is going to be broadly available. And it’s tailored for the kind of workloads we are seeing. So again, there’s an advantage there. But at the same time, if you look at our investments in Graviton, Graviton is great for ARM-based workloads, but it doesn’t do a good job of supporting Windows workloads. So if you have a dependency on Windows, you’ll likely stick with an x86-based solution. To answer your question, from a top-level strategy perspective, and this is something very, very unique that AWS is able to pull off based on our scale and level of investment, we can afford to have an open ecosystem by leveraging third-party merchant silicon, but at the same time integrate from the bottom up across silicon, servers, and rack deployments. So the key strategy is: we want to have choice in the portfolio, but at the same time we want to make sure we are not just stamping out the same products multiple different ways. There needs to be differentiation, either in performance, price performance, availability, or for certain types of workloads.
Patrick Moorhead: Yeah. And I think it’s important for customers, especially if they’re less sophisticated, for AWS to guide them on what type of solution they should have. I think that could be very helpful and appreciated. Because the large companies, first of all, have research groups, but they also have a lot of people to do operations, and they devote entire teams to AWS. So they know this, and there are a lot of meetings and a lot of pre-meetings and roadmaps and things like that. But for those smaller customers, those are questions I sometimes get: hey, when should I use Graviton? Or, when should I use Inferentia? So again, I think it’s a good challenge to have.
Chetan Kapoor: Yeah, diversity does add complexity for sure.
Daniel Newman: It’s very interesting too, because when you asked the question about the strategy, obviously for the broader market and those out there listening, there are two parts to it. It’s what you’re kind of talking about: it’s the workload sophistication, and which is the right silicon for the right workload? Eventually, I’m sure, you’ll be able to just ask Q, and it’ll do that for you.
Patrick Moorhead: Yes. That was part of the plan.
Daniel Newman: But the more immediate thing, and what the press is all focused on, is DGX Cloud. Did you do it? Did you not do it? You’re back, you’re in, you’re out. And obviously with the EFA strategy, you can start to see how the meter runs. So you’re benefiting, which has always been a little bit of the concern: how much benefit goes to AWS, how much benefit goes… And again, in any partnership there’s a little bit of dissatisfaction, but it can’t be a lot of dissatisfaction.
Chetan Kapoor: There’s always some give and take.
Daniel Newman: But the other thing, too, that I say a lot when I get asked by the business and financial press, who want to sensationalize these kinds of things: look, the market is growing as a whole. AI is creating a new TAM, and the new TAM is much bigger. So what I’m saying is: there really is no reason that a partner with the capabilities and market share that AWS has can’t actually be a place for Intel to grow, a place for AMD to grow, and a place for you to grow your own instances. And by the way, everybody wins. And of course the market needs to be rewarded. Your investors need to be rewarded with higher margins, better returns, and you can’t ignore the opportunities to do that while delivering value to the customer. So it’s nuanced. But it really doesn’t have to be the “or.”
Patrick Moorhead: It doesn’t. Absolutely.
Daniel Newman: The theme of this event has been so much: people are like, is it this or is it this? And I’m like, no, it’s “and.” It’s both. And you can do both. So let’s double down. I know we talked a lot about strategy, and sorry about the somewhat, as you like to say, editorial there, but you know I can’t help myself sometimes.
Patrick Moorhead: I mean we’re analysts. It’s what we do.
Daniel Newman: I’m full of opinions, but I definitely want to hear yours, Chetan. You guys had some great advancements with Trainium 2. Really big. I believe Adam said on stage something about training trillion-parameter models on Trainium.
Chetan Kapoor: Correct.
Patrick Moorhead: By the way, and he said highest performance.
Daniel Newman: Highest performance training.
Patrick Moorhead: At the lowest energy. So it was like, okay. And I love demonstrables, because I know exactly the right question to ask.
Daniel Newman: We should sort of explain. No, we should test that.
Patrick Moorhead: That was the next question.
Daniel Newman: If we had a lab, we should test that.
Patrick Moorhead: Totally, yeah we should.
Daniel Newman: If we had a lab. If we had a lab. So Chetan, talk a little bit about Trainium 2 and the advancements there. ‘Cause it was a really big moment.
Chetan Kapoor: Yeah, it is. And it has been a journey. We talked about how we have been investing in silicon, so just taking a step back: we started building our first chips about 10 years ago. Nitro, for example, is on its fifth generation.
Patrick Moorhead: And what year was that again? Just for our listeners?
Chetan Kapoor: So I joined in 2016, so it was two years, two or three years prior to that. So we’re talking about circa 2014-ish timeframe, right?
Patrick Moorhead: So nearly a decade.
Chetan Kapoor: Yeah, nearly a decade.
Daniel Newman: I was in high school.
Chetan Kapoor: Yeah, we’re all dating ourselves.
Daniel Newman: That’s not true though. Yeah, I think I had three grandkids by then. This age spread’s going to get worse. Go ahead.
Chetan Kapoor: Yeah, I was going to say, we started building Nitro chips 10 years ago, and we are on the fifth generation now. What we were building as part of Nitro were fairly general-purpose processors. They’re obviously ARM-based, but again, there was a lot of IP that we were adding to them. And that led to Graviton 1. And then when I started with AWS, this is 2016, 2017, our business in deep learning was already quite substantial. We could easily see where the trend was going, and we were like, okay, it’s going to take time, so let’s get started now. So in 2018 we actually kicked off development, announced our first chip in 2019, shipped it in 2020, and now we are on a third architecture with Trainium 2. And it is a tremendous product: 4x higher performance than Trainium 1. And by the way, wink wink, I think it could be even higher. At least what we are claiming and positioning right now is 4x higher. It is going to be deployed in extremely large clusters. We’re talking about a hundred thousand chips, with hundreds of megawatts of power, clustered in a data center, all interconnected via EFA. And then from a power-efficiency perspective, what we’re seeing right now is that it’s going to be twice as power efficient as Trainium 1. So you can use that efficiency two ways: with the same power budget, you can actually get more compute.
Patrick Moorhead: So is that like watts per token?
Chetan Kapoor: Yeah, watts per throughput. Yeah, watts per token trained. Watts per teraflop could be another metric. Or, if you hold a certain performance level, you’ll just see half the amount of power consumed. So it’s super, super important. I think 2024 is going to be a year where a lot of folks are going to wake up and say, “Yeah, we have been racing in this GenAI race to get ahead and start grabbing mindshare, but everybody is going to become more and more conscious about their carbon footprint and things like that.” So Trainium 2 is going to be a really big deal on that front. And then we talked about a strategic engagement with Anthropic about a month ago. Anthropic has committed to leveraging Trainium 2 for training their next-generation foundation models. So again, super excited about that. It’s going to build on the groundwork we’ve been laying over the last five years with Inferentia 1, Trainium 1, and now Trainium 2.
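To make those two options concrete, here is a back-of-the-envelope sketch; the wattage figures are made up purely to illustrate the trade-off, not actual Trainium numbers.

```python
# Hypothetical figures, chosen only to show the arithmetic of a 2x efficiency gain.
power_budget_watts = 1_000_000        # a fixed data-center power budget
gen1_watts_per_chip = 500             # prior-generation chip (illustrative)
gen2_watts_per_chip = 250             # twice as power efficient (illustrative)

# Option 1: same power budget -> twice the compute.
print(power_budget_watts // gen1_watts_per_chip)  # 2000 chips
print(power_budget_watts // gen2_watts_per_chip)  # 4000 chips

# Option 2: same chip count -> half the power draw.
chips = 2000
print(chips * gen2_watts_per_chip)    # 500,000 W instead of 1,000,000 W
```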
Patrick Moorhead: Yeah, I’d love to see you surface the energy piece. And again, the big accounts you’ve got a lot of meetings with and things like that. But some sort of way where I can self-select, which says, okay, I can… Well, I did hear it’s the highest performance for training. So it’s going to be tricky.
Daniel Newman: And lowest power consumption.
Patrick Moorhead: Yeah, and lowest power.
Daniel Newman: Treat it like serverless without a server.
Patrick Moorhead: And just so I understand, so I can address this through Bedrock, I can address this directly if I’m a real-
Chetan Kapoor: Yeah. Straight up EC2, too.
Patrick Moorhead: Okay, like some of your bigger customers. Can I address this through SageMaker, or would this not be good for machine learning?
Chetan Kapoor: No, it will be, yes. The SageMaker support will be there. It all just depends upon where the customer wants to start. Large customers have their own management layer that they’ve built internally, and they just want access to raw compute. They’re like, okay, just give me hordes of-
Patrick Moorhead: Right, they have the teams, the data scientists, AI science, the whole bit.
Chetan Kapoor: They have the whole bit, and they mainly want high-performance, cost-effective infrastructure from us that is also reliable. Reliability is a huge thing that is top of mind for a lot of these customers. And then there will be other customers, like small to medium enterprises, who are like, you know what, I don’t have that level of investment, I don’t have these people, I just need a managed solution. This is where SageMaker comes in, this is where Bedrock comes in, where the infrastructure for the most part is abstracted away, and you’re just dealing with training jobs or inferencing jobs or fine-tuning or whatever that is. And yes, it’s all going to run on EC2 hardware, but it’s going to be abstracted by these managed services.
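For the managed path Chetan describes, a minimal Bedrock sketch with boto3 might look like the following; the model ID and prompt are illustrative, and the request body format varies by model provider.

```python
import json
import boto3

# Bedrock abstracts the EC2 hardware away: you submit an inference
# request and never touch the instances it runs on.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",    # illustrative model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "\n\nHuman: Summarize EFA in one sentence.\n\nAssistant:",
        "max_tokens_to_sample": 200,  # Anthropic-specific body field
    }),
)
print(json.loads(response["body"].read()))
```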
Daniel Newman: Chetan, yesterday I met with the CEO of Local Measure. They’re a company that I’ve invested in and advised, and they’re actually your APJ marketplace partner of the year. And I just want to give you guys a little on-video testimonial here. They said that they moved to Bedrock and have been able to stand up generative AI. They have a CX platform, and they literally stood up and deployed generative AI in their product in just a few months.
Chetan Kapoor: Nice.
Daniel Newman: And jumped ahead. And by the way, they’ve built a really great partnership with AWS. But it’s exactly what you said, and Pat, what you were talking about: the easy accessibility. They’re not a giant company yet, but they have the ability to use these tools, get access to that raw compute, and use Bedrock. And it goes back to the first tweet I sent before I came to AWS, which basically said, “I think this is going to be the week that all the misconceptions about AWS are going to be answered.” When I say misconceptions, I get to say this as an analyst: people said, “AWS is behind, they’re behind on AI.” And that was a consumer thing. It was because on the consumer side, you didn’t have a ChatGPT-
Chetan Kapoor: Or a search engine.
Daniel Newman: Or a search story. And what I said, and I know, Pat, you’ve been very much on the same train, was that the world can now see, when it comes to enterprise AI, and really enterprise AI for the consumer, that all the solutions are here. And what you’ve done over the last decade with silicon is huge.
Patrick Moorhead: I know we’ve got a few more minutes left, but I get a lot of questions about this. I have been a buyer of silicon at what was then the largest infrastructure company, I was in the chip space myself, and I’m analyzing it now. But how do you do it? I get the sense that you don’t have as many resources.
Chetan Kapoor: We don’t. Yeah.
Patrick Moorhead: No, I mean not even close.
Daniel Newman: End up here now.
Patrick Moorhead: No, it really is a small shop. I’ve got to tell you, again, you don’t know exactly how many people they have. No, this is a compliment, so get ready for it. They’re wondering how you do it, not how do you do it. Almost like a reason to believe: are these chips as good as that? I mean, others have hundreds and thousands of people, and it’s like, okay. Based on certain metrics and even LinkedIn searches, you can figure these things out. But how do you do this?
Chetan Kapoor: So, Pat, we do run pretty lean across all of AWS, and especially in the Annapurna organization that is building these chips, right? The good thing is that team is staffed with seasoned experts, folks that have, on a per-person basis, 20, 25 years of industry experience. But generally speaking, that team is very, very mindful of picking the right talent with the right capabilities, and of only building what they really need to build. When it comes to chip design-
Patrick Moorhead: The conversation I had last night with one of your architects, which was, “Hey, this is how much it would cost to do this custom thing. And this is how much differentiation we could get. And this is why we chose this.”
Chetan Kapoor: Correct.
Patrick Moorhead: Yeah. I mean, very pragmatic.
Chetan Kapoor: I’ll give you a very specific example. On a chip you need a PCIe controller and engine, right? You need a memory controller to talk to DDR or HBM memory. There’s not a whole lot of differentiation for us in building those pieces ourselves. So at the silicon level, we’ll go partner with some company that provides a great PCIe controller, and we’ll package it.
Patrick Moorhead: I don’t know of a single company who doesn’t license IP.
Chetan Kapoor: Yeah, exactly. So it’s a balance of making sure that we are focused on the differentiation, even within the silicon, that we want to provide. In the case of Trainium, it’s those NeuronCores that we have built: these massive matrix-multiply engines that not only handle matrices but also do scalar and vector operations. That’s where we are going to focus our energy. Not, okay, you know what, we’ve got to invent the whole enchilada. So even within the chip design, there’s a lot of leverage there. The team is lean and focused; specifically, we don’t distract them with building chips for desktop computers or laptops. We are targeted: okay, it’s a data center deployment. We have a decent understanding of the workloads we’re going after, and we understand the kind of reliability and availability features we need to add and things like that. So it’s a combination of being focused on the kind of market and workloads we’re targeting, plus being pragmatic about what we build versus what we buy.
Patrick Moorhead: I did notice, by the way, thank you for opening up your lab in Austin.
Chetan Kapoor: Oh yeah, that was great.
Patrick Moorhead: I appreciate it. And a couple of things again: even how you do system test on wafers, some of the interesting things you’ve done on functional testing, with parts dialing in, phoning home, and connecting to the network. I had never seen anything like that before.
Chetan Kapoor: It’s a super scrappy lab, right? And you’re not just saying that.
Patrick Moorhead: And that really reinforced it. I used to work at a company that had a lot fewer resources than the next big company, probably 10% of the resources. And the question was always how we could get things done with those resources. I feel like you’ve taken that to an entirely new level here, and it’s impressive. I think you guys need to get that message out a little bit stronger, harder.
Daniel Newman: It changes the connotation of skunkworks, from “it’s cool” to “it’s amazing.” You’re on the “it’s amazing” side of what a skunkworks team can do. And it was really mind-blowing. And again, Chetan, we’re just excited to continue to follow the progress that you’re making. We will continue to challenge the status quo, and we’ll ask the questions here. And I appreciate you, and I think I can speak on your behalf too, Pat. We appreciate you coming in here, being so straightforward, and taking on the tough questions with us.
Chetan Kapoor: No, absolutely. I really appreciate the opportunity. It’s been a very busy last few years at AWS at the infrastructure level. We think that’s one of our core differentiators. Not a lot of people get it, but the people who do understand see that this has a lot of value for our customers, and also prospects.
Daniel Newman: As you run toward a hundred billion a year in revenue, I don’t think it’s going to slow down anytime soon. But Chetan, let’s do this again sometime. Let’s have you back.
Chetan Kapoor: Absolutely. Thank you.
Patrick Moorhead: Thanks.
Chetan Kapoor: Really appreciate you guys having me.
Patrick Moorhead: Thanks.
Daniel Newman: All right, everybody, hit that subscribe button. Join us for all of our coverage here at AWS re:Invent 2023. We are in the Future Frequency booth right here outside the expo hall, and we hope we brought you a whole bunch of great conversations. But for this one, for Patrick Moorhead and myself, it’s time to say goodbye. See you all later.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared on CNBC, Bloomberg, the Wall Street Journal, and hundreds of other sites around the world.
A seven-time best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.