
How Intel is Driving the Evolution of AI with Intel’s Wei Li – Futurum Tech Webcast Interview Series

On this episode of the Futurum Tech Webcast – Interview Series I am joined by Wei Li, VP, General Manager for Machine Learning Software for Intel. Wei spearheads all aspects of AI software product development for deep learning, statistical machine learning and big data analytics, as well as hardware co-design for AI acceleration on CPU, GPU, and XPU architectures.

Our discussion centered on Intel’s leadership in the AI space following the 3rd Generation Xeon Scalable Processor Launch. It was an excellent conversation and one you don’t want to miss.

The Evolution of AI

My conversation with Wei also revolved around the following:

  • An exploration into the democratization and evolution of AI in the last few years
  • How Intel is approaching AI to make it accessible to meet the shifting needs of businesses
  • The rapid proliferation beyond the data center including the growth at the edge
  • What Intel is doing to enable faster development and deployment of AI software at scale
  • How real-world organizations are leaning on Intel to drive their AI journeys

AI and machine learning are, as Wei said, where the magic happens. These ever-evolving technologies will be at the forefront of advancements for years to come. If you’d like to learn more about how Intel is moving the AI space forward, be sure to check out their website. And while you’re at it, hit the subscribe button so you never miss an episode of the podcast.

Watch my interview with Wei here:

Or listen to my interview with Wei on your favorite streaming platform here:

Disclaimer: The Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

More insights from Futurum Research:

Intel Must Execute Its New Strategy Perfectly To Win Back Investors 

Intel 3rd Generation Xeon Scalable Launch: Flexibility Meets Performance

Intel Nabs Justin Long For New Campaign In Response To Apple’s John Hodgman Spots – Futurum Tech Webcast


Daniel Newman: Hi, everybody, welcome to the Futurum Tech Podcast. I’m your host, Daniel Newman, Principal Analyst and Founding Partner at Futurum Research. I am very excited about the Futurum Tech Podcast Interview Series that we’re having today, following the 3rd Generation Xeon Scalable processor launch from Intel. We’re going to have an interview with Wei Li from Intel, and I’ll have him on in just a moment. We’re going to focus this on the launch, but we’ll also be talking a lot about AI and ML and thought leadership in that particular area, because that is where Wei spends his time. I’ll let him tell you more about that in a moment.

These interviews are always really fascinating. I hope you can take the time, we’ll spend about 20 minutes with him, it’s going to be a great conversation. Now, quick disclaimer, I do have to let everybody know that this show is for information and entertainment purposes only. And while we will be talking to executives from publicly traded companies, about their businesses, please do not take anything we say as investment advice. Alright, without further ado, Wei, welcome to the Futurum Tech Podcast Interview Series. Thanks for joining me today.

Wei Li: I’m very glad to be here. Thank you for inviting me.

Daniel Newman: Yeah, it’s great to have you. So many of your peers have graced our show. Over the past several years, we’ve had experts on AI, Edge, and 5G. And of course, a big topic is AI and ML, and I know that’s an area you spend a lot of time on. But why don’t I not spoil your introduction and give you a chance to introduce yourself? Why don’t you tell everybody a little bit about the work you do at Intel.

Wei Li: So I’m Vice President of Machine Learning Performance, and what that means is I lead an AI software team. And actually, I have to say, I lead a group of magicians, and they actually do magic for AI. They’re very innovative, they take on aggressive goals and aggressive schedules, and they hit them. So it’s amazing to work with these folks. Our role is to make AI run fast. As you know, AI is very compute intensive and uses a lot of data, so making it run fast is critical. In all my career, I’ve been in this business of making computers run fast, ever since I was a PhD student, many years ago now, when I worked on making supercomputers run fast. Since I joined Intel, I’ve worked on making data center and cloud servers run fast, as well as smartphones and tablets. And in the last five years or so, I’ve had a team working on making AI run fast, and it has been an amazing experience. I’ve been in this business for a long time, as I said, and AI has probably been one of the most challenging and also most rewarding experiences. Just look at what we’ve done with the software optimizations: if you download our software, you can get up to 100x performance gain. Who can get 100x easily, right? And we actually give it to people for free. You don’t even pay for the 100x, you just download our stuff.

Daniel Newman: No, that’s great, it just made me laugh a little bit. That is a really big gain, Wei, and these performance gains are something everybody’s paying a lot of attention to. And of course, AI performance is really subjective, and I’m going to dig into that a little bit more with you. You’ve got the training end and the inference end, you’ve got common workloads, you’ve got the GPU end of things, which Intel is getting deeper and deeper into. And then of course you have what you launched yesterday and talked about in your third generation, which had a lot of enhancements generation over generation. And against the competition, you guys showed some really great gains in some of the more common workloads.

So there’s a lot of positivity there. By the way, I checked out your LinkedIn, because that’s what people do these days, and you talked about being a PhD grad student. Everyone out there, Wei is serious business: Cornell, I believe it was, where you did your PhD. And then you’re in the Valley, and I believe you also teach part time as an adjunct over at Stanford. I’m sure a few people have heard of that school.

Wei Li: Yeah, it’s right here. Actually, you know, Stanford is biking distance for me.

Daniel Newman: Yeah, and a lot of people dream of someday attending there. If you ever get there for computer science, you might want to look up Wei and see if you can take one of his courses, if he happens to be doing his adjunct thing at the time. So let’s hop into a little bit more about AI. It’s a hot topic right now, it’s becoming more democratized, and people are experiencing it more in their everyday lives. Talk about, in the years you’ve been in this, the evolution. It has to have been pretty magnificent, for lack of a better word, from where you started to where we are today.

Wei Li: Yeah, it’s been amazing. So more than five years ago, before I was seriously doing AI software, we were working on mobile devices, and I had a team that developed a testing robot that could detect mobile application failures automatically. It used computer vision: you have a camera sitting on top of a mobile phone, and you can detect when something fails. That saved a lot of time. We used to have to hire human testers to do this, and sometimes people think being a human tester is fun, because you get paid to do game testing every day, but it was not a fun job. You end up testing the very same thing day in and day out; somebody even quit on the same day they started. So we developed this robot to do the testing, and that saved time and effort.

Now, fast forward to today, and we’ve seen things way beyond computer vision. When you and I go shopping, wherever we go, we get product recommendations, voice assistants, smart search. All of these things are happening. And what’s also happening is on the people side. As I mentioned, I had an engineering team working on this, and now it’s not only the big cloud companies; enterprises of every kind have people working on AI, and not only on the business side of AI but as consumers of AI. A teammate of mine wanted to have some fun, so he worked on something called neural style transfer, taking Intel-optimized TensorFlow and running it on Intel Xeon. You probably know neural style transfer: you take two images, and one can take on the style of the other. So imagine you take Daniel’s photo together with a Picasso painting. You can merge them and get a Picasso-style painting of Daniel. It’s a very cool thing.
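
For readers curious how the style transfer Wei mentions works under the hood: the "style" of an image is commonly captured by Gram matrices of a network's feature maps, which record correlations between channels. Below is a toy, framework-free sketch of that idea. It is illustrative only, not the Intel-optimized TensorFlow demo itself, and the feature values are made up.

```python
# Toy illustration of the "style" representation used in neural style
# transfer: style is captured by the Gram matrix of feature maps,
# i.e. channel-to-channel correlations, not pixel positions.

def gram_matrix(features):
    """features: list of C channels, each a flattened list of H*W values.
    Returns the C x C matrix of channel dot products, normalized by the
    number of spatial positions."""
    n = len(features[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in features]
            for fi in features]

def style_loss(gram_a, gram_b):
    """Mean squared difference between two Gram matrices; minimizing this
    pushes a generated image toward the style image's texture."""
    c = len(gram_a)
    return sum((gram_a[i][j] - gram_b[i][j]) ** 2
               for i in range(c) for j in range(c)) / (c * c)

# Two fake 2-channel "feature maps" over 4 spatial positions.
content_feats = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
style_feats = [[1.0, 1.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0]]

g_c = gram_matrix(content_feats)   # [[0.5, 0.0], [0.0, 0.5]]
g_s = gram_matrix(style_feats)     # [[0.5, 0.5], [0.5, 0.5]]
print(style_loss(g_c, g_s))        # 0.125
```

In a real style-transfer pipeline the features come from a convolutional network and an optimizer adjusts the generated image to reduce this loss; the sketch just shows what the loss compares.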

And more than that, I’ve seen high school students, and I won’t name names here, but a high school student created a deep learning model to predict the NCAA basketball tournament, who would be the winner. He had taken just one semester of Python in high school, learned PyTorch by himself, got data off the internet, and did the training on an Intel laptop. Before you know it, he had a model. And he was almost right in predicting this year’s winner, he was one step away: the team he predicted lost in the final, but it was very close. Things are just happening as we speak. So AI is going to be everywhere, that’s what I believe. One of the things I’m a true believer in is that AI is going to be everywhere.

Daniel Newman: Yeah, it’s funny, with all that said, there is something the algorithm probably couldn’t predict. And you, as a college professor and someone who has spent a lot of time on a campus, understand that unlike professionals, there’s a little more of an erratic or inconsistent nature to amateurs. What I’m saying is, in the final game, I don’t know if you watched it, but one team came and played extremely well, and the other team almost looked off. On another given day his model might have been right, because there are variables with college-aged humans that can’t necessarily be monitored or managed to quite the same level of perfection as something more binary.

Wei Li: Yeah, exactly.

Daniel Newman: But it was really interesting, and that’s a great example. I have a lot of ground to cover with you, so I’m going to keep moving here. But what about competition? Intel has had a really good few weeks. Not that it hasn’t had a great history, but it had a rocky few years, with some delays and things that weighed on the company. In your world, for instance, Nervana came, then Nervana went, but then Habana came, and now that seems very exciting. We’ve seen DL Boost inside of Xeon continue to do very well with certain accelerated workloads. But in comparison to how Intel has performed in server and notebook, where it has really big market share, in AI it’s had to cede a little more market share while you guys, your team and all these machine learning wizards as you call them, are working to build what’s next. Talk to me about that, because you’re typically a category leader, and right now you’re an incumbent, but you’re not the incumbent. How is Intel playing catch-up and planning to eventually lead in the AI space?

Wei Li: As you alluded to, AI is a very competitive market. There are a lot of players in it, and we are not necessarily leading everywhere. We do lead, I believe, in what I call CPU AI. The CPU is everywhere, right? When we talk about AI everywhere, it’s natural to start with something that is already everywhere, and that is the CPU. And the CPU is very unique. CPU AI, what I call it, is unlimited AI: you can run any workload, not only AI workloads but also non-AI workloads, and even within AI workloads you can run deep learning, machine learning, and data analytics. People need all kinds of different types of AI. So if that’s what you’re looking for, as an IT manager or as a developer, you need something that can do everything you want, and the CPU is the right solution for you. In the past several generations we have been adding a lot of features, you mentioned DL Boost already, and because of those features, and the software optimizations we’ve been adding over the past several years, I will say that in CPU AI we are a clear leader. Performance is much better than other CPU alternatives for AI.

Now, this reminds me, one of the benefits of the pandemic is that I got a chance to bike more frequently. I started with a hybrid bike. I can go anywhere I want: around town, to the Bay, and not far from here, around the San Francisco Bay, you have paved trails and gravel trails, and I can take the bike to a mountain. You can go anywhere you want, right? On the other hand, I also have a race bike. If I want to ride as fast as I can, I take the race bike, but the race bike can only run on certain roads. And I haven’t gotten a mountain bike yet, and I probably won’t get an electric bike for a long time. So it’s the same at Intel: we are investing not only in CPU, but we do believe CPU is the foundation, because people may have underestimated the CPU. CPU is the foundation for AI. And then we also invest in other accelerators. We have a GPU, and we also have Habana, which you mentioned already. And actually, my team works on both CPU and GPU, so our job is also to make the AI software run as fast as possible on GPU as well.

Daniel Newman: Yeah, my assessment, by the way, was that there are always going to be those who live and die by the benchmark. Floating point or OpenSSL, they’ll look at a benchmark and say this processor or that one. And I always say, if you’re really leading with ecosystem, business requirements, and digital transformation, then milliseconds on the most high-powered workloads are probably pretty insignificant. Meaning most humans can’t even discern the difference in how quickly they got a data point from an SAP query using one chip or another, even if one’s much better, because you’re talking milliseconds; in most cases it’s such a small fraction. But you’re always going to have people who focus on that. I tend to focus more on the business outcome. I always say Intel, from a general-purpose standpoint, understanding all the things the CPU can do, has a very good approach to partnering at the cloud, at the enterprise, and at the Edge with the operators, and a very good approach in terms of software partnerships. Like I mentioned, SAP and other workloads where acceleration is important, making sure those workloads have been designed to run optimally, because you still have those Pareto aspects of a business: these are the things companies are doing all the time, so let’s make sure we’ve optimized those things. And yeah, maybe there’s a certain type of HPC requirement where, say, an A100 from NVIDIA might be the right piece of hardware. And like you said, there are mountain bikes, racing bikes, hybrid bikes, motorcycles. Even when you get to motorcycles, there are Harleys and there are racing bikes that are not the same. So I think that’s a great analogy. I want to talk a little bit more about that Edge-to-cloud use case, though. How is Intel approaching that rapid proliferation?
I heard this a lot yesterday: you’re in the cloud, you’re in with the enterprise, you’re in with the Edge. Where does AI sit with all that?

Wei Li: AI is going to be everywhere, right? AI is going to be in the cloud, and AI is going to be at the Edge. And at the Edge, we do expect Edge AI will grow very significantly. There’s some data that says by 2025, three quarters of AI will be at the Edge, which is very logical, because the Edge is where the data starts, so it’s natural to move your compute to the Edge. Now, on the Edge side it will be a similar story, because the CPU has already been used at the Edge, and people are making the same sort of tradeoffs: the simplicity of the compute system, and the fact that it can run all kinds of workloads. You may want to run all kinds of workloads there too, and it’s easier to just start on a general-purpose CPU like Xeon. Then at some point you may want more, let’s say, power performance, like the race bike in my example. But quite often the CPU is probably sufficient for you, because we’re talking about milliseconds, and on Xeon you can get down to the millisecond range already and solve your problem. Now, if you want to go below a millisecond, hey, we have accelerators too: we have a vision accelerator, we have FPGAs, we have the GPU, a variety of things we can do. And I think the combination of all of these will get us to AI everywhere.

Now, the Edge is also very nascent, and it is a software play as well. That’s why we’ve been working with partners, and we have an Edge solution called OpenVINO, which is very well designed for Edge inferencing. It targets multiple hardware targets, all the way from CPU to GPU to FPGA to VPU, so people can move seamlessly. You can start on the CPU and, without code changes, move to the FPGA or to the VPU if you want. So software plays a key role in helping people make their decisions on the hardware side.

Daniel Newman: Yeah, absolutely. Let’s build on that a little bit. The company made a lot of announcements last year: oneAPI, a number of software initiatives. I think anybody tracking this space agrees that the foundational software layer is an enabler, meaning you can put all the horsepower on the planet into the silicon, but you need the software that enables development to happen quickly, and ideally it’s very open, for various environments, and people can learn it. Talk about Intel’s approach, what you’re doing to expedite development and create flexibility from a software standpoint.

Wei Li: Yeah, so, you know, this is dear to my heart, because I run a software team here. Let me go top down, starting from the developer perspective: what do you see? From a developer perspective, people take things like TensorFlow, things like PyTorch, things like scikit-learn and XGBoost. So our job is to make sure all of these are well optimized on our Intel platforms. And luckily, we actually have a very rich ecosystem. Historically, all these things run on CPU, so naturally we already have them; the question is how do we optimize better? My focus on the software side is actually two parts: one is performance, and the other is productivity. On the performance side, we want to make sure these are all optimized to what I call the hardware roofline. We should go all the way to what the hardware capabilities are, because my hardware friends always remind me: I put so much silicon in there, you cannot leave it wasted, you have to get all the benefit from it. So that’s our goal, to get the best performance out of it, and we’ve been doing quite a bit of that. You know, in the Bay Area, wherever I go, people want to start hardware AI accelerator companies, and there are probably a lot of proposals going out for VC money and all that. But we have what I call an AI software accelerator. The example I mentioned earlier: you get 100x for free.
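
The "hardware roofline" Wei refers to is a standard performance model: attainable throughput is capped by either peak compute or by memory bandwidth multiplied by the kernel's arithmetic intensity (FLOPs per byte moved). A minimal sketch, where the machine numbers are hypothetical:

```python
# Toy roofline model: a kernel cannot run faster than the machine's peak
# compute, nor faster than memory can feed it data.

def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable GFLOP/s for a kernel with the given arithmetic intensity
    on a machine with the given peak compute and memory bandwidth."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Hypothetical machine: 1000 GFLOP/s peak, 100 GB/s memory bandwidth.
assert roofline(1000, 100, 2) == 200    # low intensity: memory-bound
assert roofline(1000, 100, 50) == 1000  # high intensity: compute-bound
```

"Optimizing to the roofline" means pushing a workload's achieved performance up to whichever of those two ceilings applies, for example by vectorizing compute or improving data reuse to raise arithmetic intensity.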

You can just download Intel-optimized TensorFlow and scikit-learn and all these things, and you can get very significant, order-of-magnitude performance gains. So that’s the performance side we’re working on. Now, the productivity side is also very important, because as you know, AI is smart, but the development of AI is actually not so smart. It’s amazing. When I started looking at AI, I love math and theory and all that stuff, so I was expecting a very nice theoretical foundation for AI, but what I saw was very experimental. The scientist in me looked at AI and said this is not really science, but the engineer in me said it’s good engineering. So we do a lot of engineering of AI: you have all these parameters, what people call hyperparameters, and it’s trial and error. So how do we make it more efficient to get AI work done? That’s another thing we’re doing, building tools for this. For example, we put DL Boost 8-bit integer operations inside the hardware. So how do you go from a 32-bit floating point model to an 8-bit integer model? That process usually takes days. On our competitors’ platforms it takes on the order of a thousand lines of code and a long time. We have a tool that can reduce that from days to minutes. That’s the power of automation. And inside the tool we use AI also, so we sort of eat our own dog food: we use AI to do AI. All these things are built on top of oneAPI, which you already mentioned, and for the XPU strategy, oneAPI is critical.
You want to create a programming abstraction that unifies all these experiences for people. For people like our team, if we develop new AI software on top of this, it should be easy to move from one target to another, and that’s a great idea.
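
The FP32-to-INT8 conversion Wei describes can be illustrated with generic affine quantization: map a float range onto 8-bit integers via a scale and zero point. This is a simplified, illustrative sketch of the underlying math, not Intel's quantization tool.

```python
# Sketch of post-training quantization: floats are mapped to signed
# 8-bit integers using a scale and zero point chosen from the observed
# min/max range, trading a little accuracy for much cheaper arithmetic.

def quantize(values, num_bits=8):
    """Return (int codes, scale, zero_point) for a list of floats."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map integer codes back to approximate floats."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Round-trip error stays within half a quantization step.
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

Real tools automate the hard parts this sketch skips: choosing per-tensor or per-channel ranges from calibration data, and checking that the quantized model's accuracy stays acceptable.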

The abstraction has two components to it: one is the language side, the DPC++ language, and the other is a set of APIs for domain-specific libraries. oneDNN is Intel’s implementation for Intel platforms, but we’re also talking to other people, other companies. We have seen other companies developing oneDNN on Arm, for example. So it’s an open API; people can implement it on all kinds of targets. I think that’s good for the industry. Industry-wide, it is good to have something uniform and consistent, and it makes everyone’s life much easier, right?

Daniel Newman: Absolutely. And with your announcements and your foundry ambitions, I guess it’s a little more okay than ever before to be a little pro-Arm or pro-other-chips, right? Because you guys may very well be involved in building and helping to manufacture those chips at scale in the coming years. But to your point, it’s kind of a high tide that raises all boats: as these different software frameworks get adopted, the use grows, and with scale comes more demand, more wafers, more chips, more volume, more use cases. Everybody wins. I think the disaggregation and the XPU strategy is really great. Of course, we know 20%, sometimes even 30%, of a CPU’s resources get invested in things like network infrastructure and data center infrastructure support, rather than application processing. So as you’re able to abstract more of those things away from the CPU, the CPU becomes a bigger force in doing the things you’re talking about for accelerating workloads. There are a lot of really interesting things happening concurrently. Now, I’ve only got a few minutes left here, and I’ve really enjoyed this, Wei, you’ve been a great guest. For everybody out there listening, it’s great to get these insights, and you can definitely tell that if I let you, Wei, you could get really technical. Our audience is a good mix of technical and business, so I want to make sure we don’t go too far. But the way I always try to make things real for people is by talking about use cases. So if you’d take a minute and share a couple, I’d love to hear a couple of what you consider to be really powerful use cases where all this work you’re doing is coming to life.

Wei Li: Yeah. So I’ll just give you a few. I could spend hours talking about all kinds of use cases; as I mentioned, things have moved quite fast, and we’ve changed quite a bit in the past five years or so. I’m not sure I’ll have time for all four, but we can go all the way from healthcare, which is very interesting, everybody’s been talking about applying AI in healthcare, to finance, everybody cares about finance, to manufacturing. And the thing I care a lot about is sports. In all these areas we have use cases.

Daniel Newman: Let’s talk about two of them. And what I’ll do is, in my show notes, I’ll put some links to some other pieces people can read. But I thought you had a couple of really cool ones. I’d love to hear about SGX and financial fraud, because I think that’s a hot topic. And I think the sports one is cool, because everybody likes money and sports, right?

Wei Li: Okay, let’s go for money first. So, financial fraud, as you know, has become a way of life in modern times. There’s a company called Consilient, it’s a pretty good name, Consilient, and they use AI to detect fraud. Now, as you know, inside the financial industry, data is private. I wouldn’t want everybody to see my financial data, right? So it is a challenge to build a model: on one hand, you want to leverage data from everybody else, and on the other hand, you don’t want to share your data with the model builder. Inside Intel hardware we have a security feature called SGX, which stands for Software Guard Extensions, and using that security feature you can do something called federated learning. Federated learning means everyone contributes, but no one shares their data with the others. So you get the best of both worlds. With that, they built a good AI model for this, and we see the same capabilities used in healthcare as well; as you can imagine, healthcare data is also private. So that’s one example of using Xeon effectively. Now, the other one is sports. Intel, together with another company called EXOS, developed something called 3D Athlete Tracking. With that, you can actually track athletes, do the modeling, and get insights about velocity, acceleration, and biomechanics when they’re sprinting. All that information you can find out through this AI model.

And actually, a few weeks ago, just out of curiosity, because I’m very interested in the technology and wanted to see inside, I had someone present at our internal tech forum, and I got to see what kinds of algorithms are built inside. It’s interesting, because it’s actually not a single algorithm; it has a few. It has machine learning algorithms and deep learning algorithms, a variety of things people need in order to build a use case like this.
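
The federated learning pattern behind the financial fraud example can be sketched as federated averaging: each party fits a model on its own data and shares only model parameters, never the data itself. This toy version is illustrative only: the data is made up, the model is a one-parameter linear fit, and there is no SGX enclave, which is what a real deployment would add to protect the aggregation step.

```python
# Toy federated averaging: the coordinator sees only fitted weights,
# never the parties' raw data.

def local_fit(xs, ys):
    """Least-squares slope through the origin for one party's data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def federated_average(parties):
    """Each party contributes only its locally fitted weight; the
    coordinator averages the weights without seeing any data."""
    weights = [local_fit(xs, ys) for xs, ys in parties]
    return sum(weights) / len(weights)

# Three banks, each with private data drawn from the same y = 2x trend.
banks = [
    ([1.0, 2.0], [2.0, 4.0]),
    ([3.0, 4.0], [6.0, 8.0]),
    ([5.0], [10.0]),
]
print(federated_average(banks))  # 2.0: every local fit recovers the trend
```

Production systems iterate this exchange many times with neural networks and add privacy protections on top, but the core property is the same: the model travels, the data stays put.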

Daniel Newman: First of all, Wei Li, thank you so much for joining me here today on the Futurum Tech Podcast Interview Series. I will say that I’m a super big fan of what’s going on with confidential computing, so the financial fraud case is very interesting. Hopefully I can share some details with our community and audience here, so you can read more about SGX and potentially about this use case. The sports one, though, is really near and dear to my heart. I’m a huge soccer fan, or football depending on where you are in the world. I also really like golf, and I’m wondering if somehow AI could help me with the mechanics of my golf swing, because any time I’ve watched it on video I’ve said to myself, that is not at all what I see in my head when I’m doing it, so I know I can do better. But at the performance level, you have to imagine we’re already near perfection with these athletes. We’re going to get them to the point where we’re hitting optimum performance, down to the nanosecond, with their mechanics, their running, their jumping, and it’s just going to make sports awesome. But I think, to your point, before I say goodbye, what I really took away from what you said today is that AI will be everywhere, and there are going to be many different ways people will be able to consume it. There are frameworks and software and hardware, and it’s about bringing all these things together. And clearly, Intel is in the mix in a big way, playing in the cloud, at the data center and the enterprise, and at the Edge. At this point, though, I’ve just got to say thanks. I’m going to pop you off to the green room.

Wei Li: Okay, thank you.

Daniel Newman: Thanks, Wei Li. All right, everybody, that’s the show today. I want to thank Wei Li again from Intel. What a great bit of insight following that 3rd Generation Xeon Scalable launch from yesterday. Not so much about the launch itself, but really about the whole state of AI, where we’re going to see it in our lives, and how Intel is approaching it, from the hardware layer all the way out to the abstraction and software layers you heard him mention. Go ahead and hit subscribe, join our show, and stay part of the Futurum Tech Podcast community; we really value having you here. Go ahead and follow us on Twitter at Futurum Research, and check out Wei Li and all the things he’s doing out there. He had a great ten-minute keynote on AI; we’ll put that link in our show notes as well. For this episode of the Futurum Tech Webcast Interview Series, for Futurum Research, I’ve got to say goodbye. Thanks again for tuning in. We’ll see you later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited in publications and television appearances including CNBC, Bloomberg, the Wall Street Journal, and hundreds of other outlets around the world.

A 7x best-selling author, including his most recent book “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

