On this episode of the Six Five Webcast – Infrastructure Matters, hosts Camberley Bates, Keith Townsend, and Dion Hinchcliffe dive into the highlights and key takeaways from the recent GTC Conference. Topics include NVIDIA’s strategies and dominance in the AI sector, the ongoing challenges of AI infrastructure, and the intriguing concept of a token economy as a measuring tool for AI costs.
Their discussion covers:
- GTC Conference Highlights: An overview of the conference’s growth, significance, and key announcements such as the new Blackwell GPUs, showcasing NVIDIA’s continued leadership in infrastructure and AI technologies.
- NVIDIA’s Dominance and Strategy: Exploration of NVIDIA’s comprehensive AI hardware and software stack, including CUDA and CUDA-X libraries, and how this positions NVIDIA at the forefront of AI innovation.
- AI Infrastructure and Costs: The challenges posed by the rapid obsolescence of GPUs, the high costs associated with AI hardware, and the importance of efficient data management.
- Token Economy and AI Investment: Discussion on the concept of a token economy as a novel way to quantify AI costs and the current landscape of AI investment despite recent financial hurdles.
Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Disclaimer: Six Five Webcast – Infrastructure Matters is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Camberley Bates: Good morning, everyone, and welcome again to Infrastructure Matters, number 76. We’ve got the entire crew here today, Dion Hinchcliffe and Keith Townsend flying in from Chicago and Washington D.C. So welcome aboard, guys.
Keith Townsend: Hey.
Dion Hinchcliffe: Great to be here, Camberley.
Keith Townsend: Good to be here.
Camberley Bates: So this week, we had a little show that went on, GTC. I don’t even know what it stands for. I guess I don’t have the knowledge; I only have the knowledge of what it is. GTC is NVIDIA’s big, huge show that they do, and it’s actually not in Vegas, surprisingly. It’s in San Jose, and they held the keynote at the SAP Center, which I believe is the Sharks’ arena. Big, big splash, big event, et cetera, and we’re going to be covering all that and what it means for the infrastructure guys, and maybe what it doesn’t mean. Right, guys?
Dion Hinchcliffe: Sure.
Keith Townsend: And I just looked it up so conveniently. It’s GPU Technology Conference, GTC.
Camberley Bates: Oh, gosh. That’s right.
Keith Townsend: I never knew that. I guess it didn’t mean much a few years ago, before all of the AI hype, but now it does.
Dion Hinchcliffe: Right?
Keith Townsend: I’d like to say it was a big show in a little town. 25,000 people took over Downtown San Jose. They had a GTC park. We could buy a 5080 or 5090 GeForce card if we wanted to; we got pushed to the front of the line. The problem with that for me is that if I buy a $2,000 GPU, then I have to buy a $2,000 PC to put it in, so I didn’t have just a spare $4,000 to spend at one time.
Camberley Bates: Well, and then you’re going to have to spend another $4,000 on that lovely lady at home.
Dion Hinchcliffe: Well, there is that.
Keith Townsend: At least. At least another $4,000.
Dion Hinchcliffe: And that makes cloud subscriptions actually affordable, right? Because nowadays, my kids use geforcenow.com, because they can always get the latest graphics card. It’s automatically upgraded. You’re renting a PC to run your games, so you always have the latest card. It’s amazing. It works on an iPad, so we don’t need a high-spec client to do anything with it, and at $6 a month, it’s very affordable. You know?
Keith Townsend: Yeah, it’s very… And I guess this goes into the token economy, which we’ll get into a little bit later on, so I think it’s an indirect indication of what that means, but I do want to spend a few minutes talking about the overall feel and culture of the show. If you’ve never been to a GTC, and I think the majority of us have not, it was 25,000 people, up from 15,000 people last year, and the conference has grown, so not a lot of folks in the industry have actually attended it. If you’ve gone to an early VMworld, back when VMworld was around 20, 25,000 people, that is the feel, if that’s a good point of reference for you. It is very much an infrastructure conference.
I was taken aback by the big names they had there: the OEMs that you would expect, the Dells, HPEs, Lenovos, DDNs, VASTs, et cetera. Our good friend Howard Marks spent a lot of time explaining basic storage concepts to these data scientists and AI experts. Camberley, a lot of the things that we’ve talked about over the past few weeks, about why it’s important to get the vector databases closer to your storage instances, that was interesting. But some of the busiest booths? The Foxconn booth, companies you would NOT expect, showing outcomes with NVIDIA hardware. It was an exceptionally interesting mix of folks at the show.
Dion Hinchcliffe: It’s still an insider conference very much, right, Keith? I mean, I have VPs and SVPs of IT that go, but this is really for the people who are building the next generation of AI in the cloud. Is that a fair summary?
Keith Townsend: Yeah, you’ll have a lot of folks like Foxconn; E&Y was there, Capital One was there. So there’s definitely content for that business-oriented audience who are thinking about outcomes, but this was mainly a geeks’ conference. If you don’t know what vLLM is, you might want to skip this particular show. It is heavy on the tech.
Camberley Bates: And it seems like where this show was once dominated mostly by either gamers, because that’s where the GPUs came from, or data scientists, it’s now expanded into more of an infrastructure show, which is why we’re covering it at Infrastructure Matters, similar to how we saw VMware massively expand. And I still remember walking onto the show floor at VMworld one time and texting my buds back home and going, “This is a storage show now. This is where they all showed up.” And that’s true for my segment of the world: this is where all the storage dudes are showing up as well as, of course, the server-
Keith Townsend: Yeah, as it matures, I’m really interested to see how it goes. Lucid was there, Volvo with Polestar outside, and Pebble, the trailer competitor to the Airstream, an electric version of the Airstream. It’s priced about as much. I walked through it, and it is an interesting mix of trying to get outcome-focused folks at the show in addition to infrastructure folks, but it is an infrastructure show. There’s no doubt about it. We’re talking about models, how to optimize robotics using Fortran and AI, et cetera. It is for the technologist.
Camberley Bates: Well, so let’s get onto what came out of there. And so the first thing I’d like to talk about a little bit is about the announcements that came out of there. There’s two kind of areas that we want to cover here. One is more the hardware piece of it. There’s directions that we’re having on the new GPUs that will be coming out, I believe at the end of next year, or is it the end of the following year? The new networking capabilities. So you want to cover that, Keith?
Keith Townsend: Yeah, I wish we had about an hour to go through all of the announcements. The network-
Camberley Bates: Well, he took two and a half hours to go through all the announcements though.
Keith Townsend: He took two and a half hours to go through all of the announcements. Just as a highlight, I sat down with a principal Cisco engineer to really understand this Spectrum thing. And he said, “Keith, just think of how many problems in networking you could solve if you had bandwidth to burn.” And I’m like, “Well, what do you mean?” He’s like, “We’ve spent our whole careers as network professionals trying to optimize and control the quality of service around delivering packets, and in this scale-up and scale-out world of GPUs and clustering, you just throw bandwidth at the problem.” So the best way to describe the announcements coming out of GTC around networking: they’re just throwing all the bandwidth they can at HBM, high-bandwidth memory, to get your IO from your disk and your memory into the GPU across the cluster. That’s the biggest movement, I think.
Dion Hinchcliffe: That was how we optimized SQL databases in the old days. They were always, always IO-bound, right? So if you want-
Keith Townsend: Yeah, that is… I got a really good sense of why you can’t eke more efficiency out of your GPUs. The IO and memory, that is the problem. Getting-
Dion Hinchcliffe: To eliminate fracturing, yep.
Keith Townsend: … data into the GPU is absolutely the problem, and then you have the overhead of the actual protocol stack. Then the big news was obviously that Blackwell is starting to ramp up on volume. Basically 40x the performance of an H100 at the same power envelope is the math that Jensen would like you to walk away with.
Dion Hinchcliffe: That’s a little bit better than Moore’s law.
Keith Townsend: That’s absolutely better than Moore’s law, and it creates a forced obsolescence of your GPUs. He made the statement… I don’t know if we talked about this last week or a while back, but somewhere, I talked about the resale value of an H100. And Jensen said, “You know what? You’d be lucky if you can give away an H100.” He said, “Still buy them if you want them.” He’ll still sell you H100s, but it kind of doesn’t make sense, except when you-
Dion Hinchcliffe: Well, this reminds me of the crypto mining days, which were just here three, four years ago. One of NVIDIA’s biggest categories back then was not AI but crypto. And you got two years on your chips, and then you literally couldn’t give them away. After that, it was the price point, because it was so hyper-competitive that the hardware had almost zero value after two years, and I think we’re still on that cycle.
Keith Townsend: Yeah, I think we’re even worse. It’s six to 18 months-
Dion Hinchcliffe: Wow.
Keith Townsend: … is the number that I’m getting quoted constantly. So what does this mean for the enterprise? Dion and Camberley, you both know this. My argument is that from an ROI standpoint, you can justify the churn. If it is a $6 trillion opportunity, as you mentioned in the earlier podcast, Dion, then the money, the investment, is there. The challenge is that the talent and operations are not.
Dion Hinchcliffe: Well, and the question is, that’s a really risky proposition for the enterprise, which wants to sweat its assets. The average CIO wants to sweat their assets four to seven years. They’re not going to want to take on that kind of investment risk. This is why cloud AI is going to become a big deal. As much as everyone wants to do private AI for the control, and to make sure they don’t lose their data, the real issue is going to be around this cost, because the CFO is going to say, “If I can’t amortize it over four years, what are we doing? It’s crazy.”
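Dion’s amortization point is simple arithmetic, and worth making concrete. Here is a minimal back-of-the-envelope sketch; the purchase price and refresh intervals are hypothetical, purely for illustration:

```python
# Back-of-the-envelope GPU amortization math (all figures hypothetical).
# A CFO comparing a four-year depreciation schedule against the six-to-18
# month refresh cycle Keith cites sees very different effective annual costs.

GPU_PRICE = 30_000  # hypothetical purchase price per accelerator, in USD

for useful_life_months in (48, 18, 6):  # 4-year "sweat" vs. rapid obsolescence
    annualized = GPU_PRICE * 12 / useful_life_months
    print(f"{useful_life_months:>2} months of useful life -> "
          f"${annualized:>9,.0f} effective cost per year")

# 48 months -> $7,500/yr; 18 months -> $20,000/yr; 6 months -> $60,000/yr.
# The same chip costs 8x more per year if it obsoletes in six months.
```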
Camberley Bates: Well, I’ll bring this up here, since we’re talking about designed obsolescence and costs: you mentioned that AWS is selling Trainium at 25% of the cost of an H100. I don’t know the numbers in terms of performance of Trainium versus the H100 versus Blackwell, but that, to me, sounds like another way of saying, “Okay, so here’s some cheap, cheap stuff that you could use to train.”
Dion Hinchcliffe: Well, they need to do that to get over the switching costs, because developers don’t want to leave CUDA. That is NVIDIA’s big, deep, dark secret that people in the industry don’t seem to understand: CUDA has 90% market share for a reason, and all the rest share that 10% because of the CUDA lock-in. And so AWS is now realizing that if they want customers to switch, so they can get access to those high-value AI workloads and actually use their Trainium chips, they have to offer an economic opportunity so big that it might cover the switching costs, although the switching costs are famously between eight and 10x, not 4x.
Camberley Bates: So let’s-
Keith Townsend: Yeah. Well, I actually had two really interesting engagements. Monday was the unconference, Beyond CUDA, put on by AMD and TrustWave… I mean, I’m sorry, TensorWave Cloud. And then I actually talked with AWS, and there is still, I think, a good amount of uncertainty. The market is still really, really young. The training question is settled: NVIDIA owns training without a doubt. But a single Blackwell chip will accommodate most AI needs for a 100,000-user organization.
So an H100, an Intel Gaudi 3, an AMD MI300, even with the CUDA tax, you’re going to spread AI out to where the data is, and those chips, for most enterprises, are fine. So from our research, you’re seeing about the same performance, especially if, to your point, Dion, you get past the CUDA moat. If you just don’t use CUDA, you’re going to get better performance. So the problem isn’t CUDA compatibility. It’s just getting your developers not to use CUDA, which is easier said than done, but the opportunity-
Dion Hinchcliffe: Well, for developers who worry that they can’t run their code on NVIDIA, it makes them feel really uncertain that they’re doing the right thing. That’s the problem, right?
Keith Townsend: Yeah. And there are compilers and translators, and you can run the code on NVIDIA. Our good friends at DeepSeek proved that you don’t have to use CUDA to run your code on NVIDIA.
Dion Hinchcliffe: No, yeah, no. And you can switch more easily than ever. It’s just getting developers to buy into that is the tough part.
Camberley Bates: So one of the announcements made this week, let me see if I get it right because I was just pulling it up, was these libraries they have, frameworks for all types of work, everything from quantum chemistry, quantum computing, deep learning, computer-aided engineering, to data science processing, called the CUDA-X libraries. And when I hear us talking about this, and I also listen to the Signal65 team that’s doing a lot of AI testing here, they talk about how CUDA, or these libraries, the capabilities and the software that NVIDIA is putting out, unless AMD and Intel and the other people developing GPUs build a similar kind of library, it’s almost a moat that NVIDIA has built around its business, because the speed-
Dion Hinchcliffe: Yeah, the CUDA-X. Yep.
Camberley Bates: … those libraries are intended to speed that process of training and then getting the systems out the door, so it’s-
Keith Townsend: Yeah, NVIDIA has invested in CUDA since the start, for well over a decade, so it’s a very mature framework. It’s not just drivers; to your point, Camberley, it’s a framework, there are libraries, it is a robust environment. They made several announcements. There was a CUDA… a CUDS, and I can’t remember if it’s a service or a different level of… It’s definitely a different level of capability, but it’s not CUDA. So we lump it all together as CUDA, but it’s not just CUDA. It is everything that they’ve done to accelerate AI development.
Dion Hinchcliffe: Well, and they’ve really become the AI company. The only company that has a full, mature hardware and software stack around AI is NVIDIA, and now they have all these different flavors for every industry. So they have industry-specific offerings for everything from healthcare to finance to manufacturing. I mean, they are running upfield faster than everyone else, and they’re a one-stop shop. Although you don’t necessarily see their models on the leaderboards, the thing is, they can bring everything together and make it work for your enterprise. And they really have the enterprise focus, which I think is very interesting. And they’re increasingly the organization to beat for all of AI, not just the chips.
Camberley Bates: Yeah, which is why this has become not just an NVIDIA show. This is the show where everybody comes; if you’re doing anything with AI, this is the place to be. If you weren’t here, then you’re not playing in the game. It’s also why Jensen announced a whole lot of… He’s having to play with everybody nicely. So I mean, there was this frame-
Dion Hinchcliffe: While not giving them enough chips that they want, right?
Camberley Bates: Yeah. So there’s this framework they announced on the data storage side of the house, which is basically a reference architecture that says, “If you want to play with this, you need to check all these boxes and play nicely, everything from GPU to rack to using our NICs and our networking capability.” Everybody lined up and said, “Yes, sir. Yes, sir. We’re going to do it all.” And the other piece that I thought was pretty amazing: there was a line that came out of Seeking Alpha, which is one of the things I read. Foxconn came out to say that this year, their server revenue will exceed their iPhone revenue.
Dion Hinchcliffe: Yeah. That tells you something right there, huh?
Camberley Bates: That is big.
Keith Townsend: Yeah, I had no idea Foxconn made servers until I went to their booth, and it was just amazing. Every one of the manufacturers had next-gen demos of systems running Blackwell and visions of Rubin, which was announced; we can’t miss that Rubin was announced. I couldn’t help but think that a bunch of deli owners are going to start getting weird orders for “Rubin,” spelled the wrong way. It is going to be an amazing point of innovation.
Bringing up names like Foxconn and servers… I’ve been a server guy almost my entire career, and I had no idea Foxconn made servers.
Camberley Bates: I don’t think they sell directly. They sell through everybody else. So Foxconn is the manufacturer-
Keith Townsend: I’m sure they’re white boxes, but I didn’t know they put those together… Well, I guess I should have. I knew they did PCs, so why not servers?
Camberley Bates: So Dion, we’re going to have to see you do one of those, what is it, #CIOInsights chats that you do on Twitter?
Dion Hinchcliffe: Mm-hmm.
Camberley Bates: So one of those things might be talking about what you’re going to do about your servers if you’re having to rev them every six months.
Dion Hinchcliffe: Yeah, it’ll be interesting. AI infrastructure churn would be an interesting topic. We’ll see if we can take that up. And by the way, it’s not just on X. We have it on Bluesky now as well, because a lot of IT people have moved there.
Camberley Bates: Okay. Well, we’ll have to go there. Okay, so let’s go on to the next one. You were going to talk about, I believe, some Mistral or also some-
Dion Hinchcliffe: Yeah, yeah. Mistral, which we haven’t heard too much from recently, just announced a new small model. Now, small models are important because they can process things cheaply, so you reduce the cost of AI. If your task doesn’t need a medium or large model, you should use a small model. They’ve offered a new, highly competitive small model called Small 3.1, so the name tells you what it is. It significantly outperforms comparable models. And this is important because AI-
Camberley Bates: Can I raise my hand on the naming?
Dion Hinchcliffe: Yeah.
Camberley Bates: I want to say thank you to the marketing people for calling it what it is as opposed to-
Dion Hinchcliffe: Just nailed this part, yeah.
Camberley Bates: … some crazy name that we don’t have any idea what you’re talking about. Thank you. Okay, go on.
Dion Hinchcliffe: Yeah, no, I agree with that. We’re going to see models everywhere. Every copy of Chrome has Google’s Nano model. Your iPhone now has Apple’s Apple Intelligence model. Everything’s getting a model embedded, and when it’s local and you can run it using a CPU you’ve already paid for, that’s a win all the way around. Small models are arguably safer, more secure, and more cost-efficient, and the whole thing runs faster because it’s local, all that sort of thing, and it’s great for assistants. So this puts Mistral back in the game, and it’s great to see.
Camberley Bates: So is this related to the agentic announcements that came out, the ones you were going to talk about, the agents?
Dion Hinchcliffe: Oh, yeah, sorry. Speaking of agents, NVIDIA also had agent news this week, and as I think you both know, we released a large market overview of all the major enterprise-class agent solutions. NVIDIA was not on it, but they now have a solution, AgentIQ. You can argue, “Well, does the world need another agent solution?” Well, with agents likely being a $4 trillion economic value prop by the end of the decade, no one wants to miss out on that growth story. But NVIDIA has a unique position: they want to be the agent ecosystem. So they play well with all the other hot agent frameworks, open source and commercial: CrewAI, LangGraph, Llama Stack, Microsoft Azure Agent Service, Letta. They play well with all of those because you’re going to be multi-agent, just like every organization I work with is multi-cloud. They have two, three, four, five different cloud providers for IaaS and storage and all of that, often through acquisitions and whatnot.
But they’re saying, “You’re going to have tons of flavors of agents. What if we can give you one consistent data fabric for those?” And they did that: they announced that AIQ now works with their AgentIQ to provide a standard infrastructure for multimodal agents. So they’re also going the full multimodal route. You can build agents that can do visual perception, speech translation, all sorts of things, and understand using their world model. And the other thing is, they’re going to compete on explainability and auditability. So there’s a profiler tool that will allow you to have the agents explain why they’re doing what they’re doing, how they did it, what steps they took, and what they considered.
So they’re saying, “You’ll also be able to understand why our agents do what they do,” and they work with all those other flavors of agents in multi-agent situations. So if an agent that can do something is already available in another local environment, it can discover that agent and then invoke it. So they’re going to be the connective layer for agents, sitting in the center. Their argument is: you need something that’s going to bring all your agent frameworks together, and we can do that. So it’s interesting to watch.
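The “discover, then invoke” pattern Dion describes is easy to sketch in a framework-agnostic way. To be clear, this is not NVIDIA’s AgentIQ API; the class and method names below are invented purely to illustrate the idea:

```python
# Generic sketch of the "discover, then invoke" multi-agent pattern.
# NOT NVIDIA's AgentIQ API; all names here are hypothetical.
from typing import Callable, Dict

class AgentRegistry:
    """Minimal registry: frameworks register agents; callers discover and invoke them."""

    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, capability: str, agent: Callable[[str], str]) -> None:
        # Any framework (CrewAI, LangGraph, ...) could sit behind this callable.
        self._agents[capability] = agent

    def invoke(self, capability: str, task: str) -> str:
        # Discovery: look up an agent by what it can do, not by which framework built it.
        if capability not in self._agents:
            raise LookupError(f"no agent registered for {capability!r}")
        return self._agents[capability](task)

registry = AgentRegistry()
registry.register("translate", lambda task: f"[translated] {task}")
print(registry.invoke("translate", "Bonjour le monde"))
```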
Camberley Bates: Which is… Okay, so the other trend I saw is that they announced something I’m coining an AI OS, kind of trying to pull the pieces together, digging more into what it means to deploy all the pieces of this. So it sounds like the direction they’re going is not only the physical device, the GPU, and to keep pushing on that technology, but all the software management tools that go around those pieces, almost like you’re looking at a mainframe. Did I say that?
Dion Hinchcliffe: I heard that.
Camberley Bates: All the pieces.
Keith Townsend: You know, this was an interesting shot across-
Dion Hinchcliffe: Yeah, they don’t..
Keith Townsend: This is an interesting shot across the bow to Broadcom and VMware. He paid homage to them; NVIDIA is a big VMware customer and they still use VMware solutions, but this is an alternative to VMware private cloud and private AI, and Broadcom is very much interested in being that AI OS for the enterprise. He talked a lot about the AI factory and how a one-megawatt AI factory is a small AI factory.
Dion Hinchcliffe: Very small.
Keith Townsend: Yeah. If you’re thinking about a 600-kilowatt rack, which is 60% of a megawatt, easy math, then sure, that’s a small AI factory, but let’s be honest, that’s not where most of the AI is going to be done. And enterprises are going to need help with workload optimization, et cetera, et cetera. It’s, again, a fascinating start, but there are challenges. One of the obvious, or not-so-obvious, challenges is model observability. A huge challenge we’ve discussed within the technical community is how you manage drift from model to model, or even within the same model. If I’m outsourcing my AI to an OpenAI and it’s changing every day, how do I ensure that for the prompts I’m putting into OpenAI, I’m getting consistent responses out? It makes change management and observability exceptionally difficult. There aren’t really great solutions on the market yet for that problem.
Dion Hinchcliffe: Yeah, there’s another bandwidth problem, but anyway. Well, Keith, is this… How does all of this, an AI operating system and now we’re going to have these massive agent brokers, how does this all take us to the discussion on the token economy? I want to make sure we talk about that.
Keith Townsend: Yeah. So Jensen floated this idea, I think, last year, this token economy: tokens in, tokens out. A token is the base unit that a GPU inputs and outputs. So you take data that’s converted, or vectorized, into tokens. Basically, a token is, on average, about three-quarters of a word. He gave a really great breakdown of how “the” can be the start of “theory” and a bunch of other variations; a token can be the beginning of a word. You put tokens into a GPU via your model and you get a number of tokens out. So the idea is that the token economy is how enterprises and customers are going to measure the cost of AI.
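Keith’s three-quarters-of-a-word rule of thumb, and the way a fragment like “the” can start many words, is easy to see with a real tokenizer. Here is a minimal sketch using OpenAI’s open-source tiktoken library, one common tokenizer among many; nothing in the episode ties Jensen’s examples to this particular one:

```python
# Minimal tokenization demo with tiktoken (pip install tiktoken).
# Shows that a token averages roughly three-quarters of a word, and that
# a fragment like "the" can be the start of many different words.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-class models

text = "The theory of the token economy is still being worked out by the industry."
tokens = enc.encode(text)
words = text.split()

print(f"{len(words)} words -> {len(tokens)} tokens "
      f"({len(words) / len(tokens):.2f} words per token)")

# Inspect how individual words break apart (results depend on the vocabulary).
for word in ("the", "theory", "theological"):
    ids = enc.encode(word)
    print(f"{word!r} -> {[enc.decode([i]) for i in ids]}")
```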
Dion Hinchcliffe: Yeah, how much they pay for tokens versus how much they get to charge or charge back, the token consumption, that sort of thing.
Keith Townsend: Exactly. And a challenge with this theory, and your Mistral Small 3.1 is a great example, is that not all tokens are equal. I get different performance for different types of workloads and models. So if I get a billion tokens a day, well, what does that mean for genomics research versus image recognition for security cams versus putting prompts into an AI agent, et cetera? And when I talk to the hyperscalers about how they’re engaging with their audience, the Bedrocks of the world or the Geminis, customers, well, maybe less so with Gemini, but with Bedrock, which is a completely abstracted AI service, customers aren’t asking to be measured on tokens. And when you go all the way down to Inferentia, which is the inferencing chip for AWS’s accelerated compute, customers are not dealing with tokens per second as their measure. They just want raw compute. So I think it’s an interesting way to look at it. I’m not sure enterprises are ready to say they’re going to consume based on a number of tokens, because the market is just moving too fast.
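To see what a token-based chargeback would even look like, here is a minimal sketch. Every rate and workload figure below is invented for illustration; Keith’s caveat is exactly that no flat rate captures the differences between workloads:

```python
# Hypothetical token-based chargeback calculation (all rates invented).
# A billion tokens a day means very different things for different
# workloads, so a single flat rate can be misleading.

PRICE_PER_1M_INPUT = 0.50    # USD per million input tokens (hypothetical)
PRICE_PER_1M_OUTPUT = 1.50   # output tokens usually priced higher (hypothetical)

workloads = {
    # name: (input tokens/day, output tokens/day)
    "genomics batch analysis": (600_000_000, 50_000_000),
    "security-cam captioning": (300_000_000, 20_000_000),
    "employee chat assistant": (100_000_000, 80_000_000),
}

for name, (tokens_in, tokens_out) in workloads.items():
    cost = (tokens_in * PRICE_PER_1M_INPUT
            + tokens_out * PRICE_PER_1M_OUTPUT) / 1_000_000
    print(f"{name:<26} ${cost:>8,.2f}/day")
```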
Camberley Bates: Is there a direct… So is there any kind of direct relationship between tokens and the GPU processing time?
Keith Townsend: There are benchmarks. So when you say, “I’m getting a thousand tokens per second,” let’s just take an easy number, “a thousand tokens per second with an H100,” and then the Blackwell chip can do 4,000, or 40,000, tokens per second, well, that had to have been measured based on what? What was the input? You’re baselining those tokens per second on what? Mystery.
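Keith’s “baselined on what?” objection comes down to the fact that a tokens-per-second headline depends on batch size, sequence lengths, and the tokenizer used. A toy illustration with invented numbers:

```python
# Why a tokens-per-second headline needs a baseline (numbers hypothetical).
# The same hardware can report wildly different aggregate throughput
# depending on how many requests it serves concurrently.

def aggregate_tokens_per_sec(concurrent_streams: int,
                             tokens_per_stream: float) -> float:
    """Total throughput summed across all concurrently served requests."""
    return concurrent_streams * tokens_per_stream

# One interactive user might see ~50 tokens/sec...
print(aggregate_tokens_per_sec(1, 50))    # 50.0
# ...while the same chip reports 6,400 tokens/sec serving 128 batched
# streams, even though each individual user sees no better latency.
print(aggregate_tokens_per_sec(128, 50))  # 6400.0
```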
Dion Hinchcliffe: Or even what a token means in this case because the token can be an entire word or even an entire sentence if you wanted it to be, right?
Keith Townsend: Yeah, that would be-
Camberley Bates: Well, this would seem to be like charging you per transaction, and we don’t do that today either. We charge-
Keith Townsend: Yeah, SAP tried this with, what was it… they had SOPS or something. Oh, SAPS, which is basically a way to get at transactions per second, but it was a measure that really didn’t mean anything, and from industry to industry, from workload to workload, it really didn’t translate. This is a really tough problem: trying to measure outcomes and value, data in versus data out, because it’s all just uneven.
Dion Hinchcliffe: Well, and it’s this quality metric that tokens per second doesn’t seem to cover, right? Because you might run at a much lower tokens per second but give much better answers, and so how do you account for that?
Camberley Bates: Fast-
Dion Hinchcliffe: I think there’s a dimension in the triangle, in the iron triangle that’s missing here.
Keith Townsend: Yeah, and he gave a reasoning model as an example. So older models use fewer tokens but come out with worse answers, so you add more tokens. And reasoning models might be, in some cases, in the case of DeepSeek, lighter on the demands of your GPU and your infrastructure, but because it’s a reasoning model, it’s going to use more tokens, going in and out of the GPU constantly to use this reasoning technique, so you end up with more tokens. So, you know, the token economy: a little early as a concept.
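Keith’s trade-off reduces to simple arithmetic: a model that is cheaper per token can still cost more per answer if it emits many more tokens while reasoning. A toy comparison, with every figure invented for illustration:

```python
# Toy cost comparison (all figures hypothetical): a reasoning model that
# is lighter per token can still cost more per answer if it emits many
# more tokens while "thinking" its way to the result.

standard  = {"tokens_per_answer": 500,   "cost_per_1k_tokens": 0.010}
reasoning = {"tokens_per_answer": 8_000, "cost_per_1k_tokens": 0.004}

for name, model in (("standard", standard), ("reasoning", reasoning)):
    cost = model["tokens_per_answer"] / 1_000 * model["cost_per_1k_tokens"]
    print(f"{name:<9} ${cost:.4f} per answer")

# standard  $0.0050 per answer
# reasoning $0.0320 per answer -- 6.4x more, despite 2.5x cheaper tokens.
```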
Camberley Bates: Six months from now, we’ll ask what happened to the token economy. So real quick, last thing out the door here: we had a discussion on AI investments, and Dion, you did a calculation on something along these lines about what’s going on. We know it’s off the charts, and hopefully all of our…
Dion Hinchcliffe: Well, it’s been counterintuitive, and this is why I went out and did the calculation, because recently, AI investment has been falling. We’re talking venture capital investment. I’m sure there’s plenty of private equity, and we know that the big tech vendors themselves are investing hundreds of billions of dollars. But how about venture? How about the new AI pipeline, with exciting new startups, things like that? That seemed to be almost dying on the vine. How could that be if this is the biggest conversation?
I think it was a cycle situation, because what we’re seeing now is that there has been a flurry of announcements since the beginning of the year. If you project out, and I did some statistical modeling in that link that I shared with you, it looks like it’s going to be the biggest investment year ever in AI, by a large margin. That’s not just my calculation. I was revalidating what I had seen out there. So I collected my own numbers, did my own statistical modeling, and came up with a similar answer. It’s going to be the big year for AI investment.
Camberley Bates: So if you want to see that number, it’s up there. Go follow Dion on X. I’m assuming it’s on Bluesky as well.
Dion Hinchcliffe: Yep. As well, yep.
Camberley Bates: What he sent over to me was on X. And as I said earlier, he’s got this really cool thing that he does for the CIOs, which is a live chat that he starts up on X and Bluesky at different times.
Dion Hinchcliffe: 8:00 PM Eastern Time every Thursday. #CIOchat is the hashtag to find the questions.
Camberley Bates: So you can put $5 into my jar for that bit of advertising.
Dion Hinchcliffe: There you go. Thanks, Camberley.
Camberley Bates: All right, guys, thank you very much. It’s been a pleasure. I look forward to this recording every week because wow, I learn so much.
Dion Hinchcliffe: Yeah, we can catch up on a lot. Yeah, it’s great.
Camberley Bates: We really… It’s very, very educational. And even our guy who manages this entire thing listens to us; he doesn’t just sit back there and drink coffee.
Dion Hinchcliffe: Exactly.
Camberley Bates: All right. Thank you very much and have a good week. Don’t forget to follow, like, share, et cetera. Bye.
Author Information
Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm, as Managing Director.
Her career has spanned all elements of sales and marketing, and her 360-degree view of addressing challenges and delivering solutions comes from crossing the boundary between sales and channel engagement at large enterprise vendors and at her own 100-person IT services firm.
Camberley has provided Global 250 startups with go-to-market strategies, created the new market category “MAID” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division, growing it from $14 million to $500 million, and built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.
She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
Dion Hinchcliffe is a distinguished thought leader, IT expert, and enterprise architect, celebrated for his strategic advisory with Fortune 500 and Global 2000 companies. With over 25 years of experience, Dion works with the leadership teams of top enterprises, as well as leading tech companies, in bridging the gap between business and technology, focusing on enterprise AI, IT management, cloud computing, and digital business. He is a sought-after keynote speaker, industry analyst, and author, known for his insightful and in-depth contributions to digital strategy, IT topics, and digital transformation. Dion’s influence is particularly notable in the CIO community, where he engages actively with CIO roundtables and has been ranked numerous times as one of the top global influencers of Chief Information Officers. He also serves as an executive fellow at the SDA Bocconi Center for Digital Strategies.
Keith Townsend is a technology management consultant with more than 20 years of related experience in designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University. He is the President of the CTO Advisor, part of The Futurum Group.