The Future of Tech with Intel’s Rich Uhlig – Six Five On the Road

On this episode of The Six Five – On The Road, sponsored by Intel, hosts Daniel Newman and Patrick Moorhead welcome Rich Uhlig, Intel Senior Fellow and Corporate VP, Director of Intel Labs, for a conversation on Intel’s vision for the future of technology, including its latest developments and what Intel Labs has been working on in the AI field.

Their discussion covers:

  • Intel Labs’ Mission and Strategic Focus Areas
  • Cutting-Edge Advancements in Neuromorphic Computing and Silicon Photonics by Intel Labs
  • Intel Labs’ Contributions to the AI Landscape
  • Intel Labs’ Robust Approach to AI Security Concerns
  • An In-Depth Look at Intel’s Quantum Computing Advancements

Be sure to subscribe to The Six Five Webcast, so you never miss an episode.


Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: Hi, this is Pat Moorhead, and Six Five is live at Intel Innovation 2023 in San Jose. Daniel, we are back on the road. We are talking about incredible tech, and maybe, just maybe, we’re going to talk a little bit of AI. What do you think?

Daniel Newman: I have not been to anything this year where we have not talked a little bit about AI. And I really haven’t been to anything where we haven’t talked a lot about AI. But yeah, Pat, you and I love to talk about how silicon will eat the world or semiconductors will eat the world.

Patrick Moorhead: Yes. I know that this group once said, “Software would eat the world.” I was thinking, “What are you going to run it on, air?” Right?

Daniel Newman: It’s not going to happen. And so, these events, for you and me, are always personal fan favorites because all the things that we’re hearing, all the apps that are being built, all the tools that we’re going to be able to use, all the productivity that we’re going to be able to gain only happens if the semiconductors are built to support that.

So, Pat, coming here, being with Intel for the next couple of days and talking to all of these folks here on the Six Five is a perfect way to spend it. You know what might be even more perfect is when we start to get out into the future and talk about what’s being built, what’s in the pipeline, talking to the people who aren’t only thinking about the stuff we’re hearing about today, but who are building the stuff for tomorrow.

Patrick Moorhead: That’s right. It’s not just the pipeline or the roadmap, but it’s the building blocks that make up the roadmap. And you know I love this riff. We talk about R&D, but there’s actually R, which is the research element of it, and then there’s the development side. You need research; it’s high risk, and it can be 10 years in advance, to build what is possible for developers to build in the future.

And what better way to kick off Intel Innovation 2023 than to talk to one of the heads of research. That is Rich. Great to see you. Welcome to the Six Five. First time here. Thank you so much.

Rich Uhlig: It’s my pleasure to be on. It’s really great to chat with you guys.

Patrick Moorhead: Yeah. Well, I love talking about research. It is just fundamental for everything that comes after it, whether it’s creating new standards, creating new technologies, or creating new implementations of technologies nobody had even thought of. That’s why I love to just soak it in and spend time. So, thanks for coming on the show.

Rich Uhlig: My pleasure.

Patrick Moorhead: Yeah.

Daniel Newman: So, I was alluding to the lab. Everything that we are going to experience in the future is generally being worked on. As you said, maybe it’s 10 years, maybe it’s five years, but there’s always that team, I call them the mad scientists, that are working behind the scenes, building out the future. And Rich, that’s your responsibility, at least part of your responsibilities at Intel. Give us the background on Intel Labs. Tell us a little bit about its history, its genesis, and what you’re focusing on today.

Rich Uhlig: Yeah. Well, I’m glad you made that distinction right at the beginning between the R&D. Intel Labs is an R organization.

Patrick Moorhead: It gets confused.

Rich Uhlig: It does get confused. We’re essentially Intel’s investment in the future, both for our company as well as for the industry. We take a long view. Sometimes it’s five years out. Sometimes it’s 10 years out. We’ve got examples of technologies that took 15 years to incubate. It’s some of the most patient money that Intel invests. I feel privileged to be able to lead the organization. I think it’s the best job at Intel, honestly.

Patrick Moorhead: Wait a second. I thought I had the best job. But I’m not at Intel, so we’re good. Okay. All right.

Daniel Newman: We’ll ask Pat about that when he’s on tomorrow. Best job at Intel.

Patrick Moorhead: Pat, he might agree. He might say, “Hey, you know what, that sounds like a lot of fun.”

Rich Uhlig: Well, actually, Pat Gelsinger should be credited with forming Intel Labs back in the day. Of course, he started his early career at Intel before he left for a few years at VMware and came back to be our CEO, but he was Intel’s first CTO. And one of the things that he did was, at the time, pulled together a lot of disparate pathfinding organizations that were sprinkled throughout the company.

He brought them together into a single organization that covered the scope of what Intel needed to be investing in, originally known as the Corporate Technology Group. It became Intel Labs. It’s probably one of the longest-lived organizations inside Intel, in a company that oftentimes reorgs, right? So, it’s fairly stable, patient money that invests for the long term.

Patrick Moorhead: Are there any specific technologies that you’re looking at right now that you can talk about?

Rich Uhlig: Yeah, sure. We could talk about neuromorphic, we could talk about quantum, we could talk about work we’re doing in silicon photonics.

Patrick Moorhead: Yeah, I’d love to hear more about silicon photonics and neuromorphic. We haven’t heard a lot about neuromorphic. And it’s funny, what I’ve noticed over time is that you shouldn’t confuse the amount of stuff that you see in the press with what actually is going on. I have a pretty good idea of what’s happening in silicon photonics, but I’d like to hear it from your point of view: what’s going on at Intel in silicon photonics and neuromorphic computing?

Rich Uhlig: Okay. Let’s start with neuromorphic; I think it’s a little nearer than some of the other technologies we’ve been working on. We’ve been on it for probably about five, six years now. Let me just start with some motivation. So, as the name suggests, what we’re trying to do is take inspiration from how biological brains work, and the rationale is simple.

It’s that if you just reflect on what a human brain can do, or what even the brain of a cockatiel parrot can do, the capabilities within a very small energy and power envelope are remarkable, right? A cockatiel parrot has maybe a 50-watt power expenditure, and yet it can navigate complex environments at high speeds. It can fashion tools out of sticks.

It can even mimic human speech. And if you look at the analog to that, the control system for a drone, it can do a lot of those same kinds of things, but the performance is far less. It’s not nearly as adaptable, and the energy expenditure is much higher. And so, biological brains are kind of an existence proof of what can happen with computing, and we’re trying to take inspiration from that.

And essentially, our methodology, the approach we take is to build working prototypes. This is true across all the research that Labs does, but we will identify a promising area of investigation, build hardware and software prototypes that demonstrate those ideas. And then, we really like to engage outside the company. So, one of the-

Patrick Moorhead: Just to get a realistic pulse outside, get other people working on it helping you?

Rich Uhlig: Yeah, exactly. Now, there’s a lot of smart people inside Intel, but we know that the world is full of smart people and we want to tap that understanding. And so, with neuromorphic, we’ve established what we call the Intel Neuromorphic Research community, and at this point, it’s up to about 700 researchers from academia, from industry, and from government agencies.
And we enable them with working neuromorphic prototypes. The latest is called Loihi 2, which is a second generation of the neuromorphic chip. And we give them the hardware and software, and then we challenge them to find applications for it. And so, we’ve been learning a lot from that. And yeah, it’s just part of our methodology.

Patrick Moorhead: Silicon photonics, right? First of all, it’s funny, old technologies rarely go away, and we’ve seen that even in networking, going from copper to light. But silicon photonics is especially exciting when you can imagine it actually connecting blocks on a chip. And I know just from our research that Intel is one of the leaders in silicon photonics. What are you doing in that area right now? Even though silicon photonics, you can argue, is in the market today.

Rich Uhlig: It is, yeah. In fact, it’s one of those great examples of a long-term investment. Silicon photonics work started in Intel Labs more than two decades ago. Mario Paniccia was the original researcher. It took about 10 years to incubate, and then it helped to form what is now a product division inside Intel, silicon photonics product division.

And the story moving forward is to take silicon photonics and optical interconnect from long-range communication to much shorter range, think within the rack, think chip to chip, and to overcome some of the barriers that we’re seeing to further improvement in compute. Feeding the beast is one of the hardest problems. We know how to build compute engines, whether they be GPUs, CPUs, FPGAs, accelerators.

But the persistent challenge is getting data to the compute engine and to scale to larger installations. And we think that optical interconnect is the way to overcome some of those barriers. In particular, overcoming the shoreline density constraints with electrical signaling, it’s getting increasingly difficult to get bandwidth density into the chip.

And we think that with optical interconnect where we can multiplex multiple wavelengths over a single optical link, we can dramatically increase the bandwidth density. And if we get the energy efficiencies, then we can also do so in a way that allows for scale. But to do that requires a bunch of new ingredients to come together.

We need to implement new ways of building silicon-based lasers that generate multiple wavelengths. We need ways of modulating those signals with what we call microring modulators. We need ways of amplifying that light with a silicon-based implementation. And then, we need to have silicon-based photodetectors, and you need that whole end-to-end solution in order to actually be able to integrate those links into the package. And those are all the problems, one by one, that we’ve been trying to knock down.
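The bandwidth-density argument behind wavelength multiplexing can be made concrete with back-of-the-envelope arithmetic. A minimal sketch follows; the lane counts and data rates are illustrative assumptions, not Intel specifications:

```python
# Toy model of why multiplexing wavelengths raises bandwidth density.
# All figures are illustrative assumptions, not Intel specifications.

def aggregate_bandwidth_gbps(wavelengths: int, rate_per_lambda_gbps: float) -> float:
    """Total bandwidth of one link carrying several wavelengths at a given per-lane rate."""
    return wavelengths * rate_per_lambda_gbps

# One electrical lane at an assumed 100 Gb/s, vs. one fiber carrying
# 8 wavelengths at the same per-lane rate through the same "shoreline" port.
electrical = aggregate_bandwidth_gbps(1, 100.0)
optical = aggregate_bandwidth_gbps(8, 100.0)

print(electrical)            # 100.0
print(optical)               # 800.0
print(optical / electrical)  # 8.0
```

The multiplier is simply the number of wavelengths the link carries, which is why adding wavelengths raises bandwidth density without adding physical ports at the chip edge.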

Patrick Moorhead: It also seems like a benefit for something like stacking very dense logic on top of itself, which could get hot. You might be able to spread that out, and you’re still operating at the speed of light.

Rich Uhlig: Yes. Yeah. That’s one of the big advantages of optical: you get more reach out of the interconnect, and that’s the challenge that electrical signaling has run into. As it runs at higher data rates, the reach goes from feet to inches to millimeters, and you run out of steam at that point.

Daniel Newman: It seems that the photonics opportunity, silicon photonics opportunity right now is very timely with where we’re heading directionally with AI. But I do want to go back just really quickly. I was geeking out when you were talking about neuromorphic, and I know geeking out is cool at Intel, so I’m allowed to do that.

Patrick Moorhead: Well, at least I’ve heard it the last couple of years, three years.

Daniel Newman: The geek is back?

Patrick Moorhead: Yes. I’ve heard that from Pat Gelsinger before.

Rich Uhlig: From the people.

Daniel Newman: Yeah. So, I’m just making sure that we keep that in spirit here. But I did want to ask you, because you got into the technicalities, but you have this community of 700 or so, and you’re working on this problem. Can you share whether there are some applications that you guys have found? Because I am interested in where this is going. I see the future, but I’m just curious: where are you at? What are some of the applications that are surfacing now?

Rich Uhlig: Yeah. This is one of the great things with the community is there’s surprises at every turn. We’re learning new things working with them. And I’ll just organize them into two basic categories of things. There’s the applications that you would expect to be able to do well with a neuromorphic solution where biological brains excel.

And so, it’s things like simulating what the nose does. A nose is able to detect scents with chemical sensors and learn from a few examples. We’ve built an electronic nose with a neuromorphic implementation.

Patrick Moorhead: So many funny applications, but yeah, I can think of some very valuable ones too.

Daniel Newman: Some recent ones.

Rich Uhlig: Yeah. Well, it may seem, what would that be good for? Well, it could be good for detecting dangerous odors in environments that could be hazardous, automating that, and having continuous ongoing protection. Those kinds of things could be good applications.

Daniel Newman: I think like a canine for detecting things at an airport, that would be-

Rich Uhlig: Yeah, a drug-sniffing dog or any other kind of hazardous-scent detection. And the key is that it learns very quickly and is super energy efficient. That’s another thing. So, there’re examples like that. A lot of the perception control problems that we see in robotics seem to be a good match for neuromorphic, again, because they’re energy efficient; with robotic systems, especially small drones or things like that that have energy constraints, you want to economize in the control system.

So, there’re a lot of applications like that. But then the thing that’s surprised us is that neuromorphic systems seem to be good at problems that biological brains aren’t necessarily good at, namely optimization problems. So, think about a constraint satisfaction problem, where you’re trying to find an optimal pathway, like the logistics planning of trucks moving and making deliveries under constantly changing, dynamic conditions. That is an important business problem to solve.

If you optimize well, you get efficiency of operations, and yet it’s hard to find an optimal solution. They’re classically difficult computational problems. And actually, I have a demo at Innovation on this, on a really complex satellite communications network. You’ve got hundreds of satellites orbiting the earth, and you want to optimize the scheduling of the communications between them on a continuous basis.

Neuromorphic systems seem to be good at that, and as we scale them to larger installations, because we can cluster lots of Loihis together to build large systems, they can take on bigger and bigger problems. And so, we’re really excited about that because we didn’t expect that at the beginning. We don’t see neuromorphic as replacing other kinds of compute. It’s going to be a complement, but we thought it was going to be good for those energy efficient applications.

Daniel Newman: Interesting entanglement, because when you start talking about solving things like that, it also makes me think of quantum, which is another thing, I’m sure, that has run through your lab. But I want to spend the time we have here on a topic where we started, and that’s artificial intelligence. I imagine we’re starting to see a lot of the commercialization.

We’re seeing it enter the mainstream. And as those solutions come to market, you are probably looking from a research standpoint further out into the future. So, talk a little bit about where this moment in AI sits and how Intel Labs is thinking about the opportunities for artificial intelligence.

Rich Uhlig: Yeah. You can’t swing a cat without hitting somebody working on AI.

Daniel Newman: Pretty much, out here.

Rich Uhlig: So, really, we almost don’t think of AI as a separate research area. We see it as something that’s infused across everything that we do. We look at AI methods to improve design efficiency for our chips. We’re looking at AI methods to apply it internally to Intel to increase yields for our fabrication facilities. We, of course, do research in Labs that will help to make Intel platforms better at running AI algorithms.

It’s a lot about how you do linear algebra, matrix, and tensor operations efficiently. A lot of the stuff that’s in Sapphire Rapids came from Labs. We worked on accelerator architectures, as well as other things that just assist in running AI algorithms more efficiently. But your question was going more to where are things headed?

And I think that one thing that we have our eye on is that if you look at… everybody’s talking about large language models now, and how they’re getting very big, first billions, and then tens of billions, and now trillions of model parameters, and they’re doing remarkable things. But you also have to ask the question, when do you run out of compute resources?

When does it become not economical anymore? It’s part of the motivation actually for neuromorphic because we’re always looking for that energy efficiency. But you can imagine, I think one of the things that’s emerging in LLMs, in large language models, and just training AI systems in general is that if you specialize for a particular application area, you can get close to the same accuracy, but with a much smaller model.

And that is a very interesting property because if you think about where this is going to go, are we going to have just one model that eats the whole world and solves every problem, or are we going to see foundational models that get tuned to different application areas? And if it’s that second path, then what kind of ecosystem infrastructure do we need so that we can proliferate lots of different AI solutions?

And we think that’s a good match for the kinds of capabilities that Intel offers. We think it’s still important to train the large models, of course, but being able to tune to specific domains is something that we’re looking at. Now, I’ll put in a plug for Gadi Singer. He’s going to give a talk at Innovation that’s going to get into details on this, and it’s worth checking out.

Patrick Moorhead: Wait, Gadi as in what the product is named after?

Rich Uhlig: Gadi, yeah. No.

Patrick Moorhead: Okay.

Rich Uhlig: Gaudi is our product, as in the Spanish architect, but Gadi Singer is the speaker who’ll be talking about that.

Daniel Newman: Just a little play on words.

Rich Uhlig: Yes.

Daniel Newman: Exactly. A lot of AI in that Gaudi there.

Rich Uhlig: Yeah.

Daniel Newman: There’s even an AI in the name.

Patrick Moorhead: I know. There we go. Gosh, you’re just-

Daniel Newman: Witty today.

Patrick Moorhead: You must have a bigger brain than the parakeet you were talking about on neuromorphic.

Daniel Newman: That’s where it sits. Let’s let him go.

Patrick Moorhead: So, I wanted to shift gears just slightly, still keeping the AI realm, but there’s been a lot of discussion, particularly under responsible AI. It’s funny, under generative, I’m sure there was for ML and DL too, but this notion of generative AI creating a massive security hole in the way it thinks and the way it operates.

And I like to look at that as… and this, I don’t think, has changed much in security over 30 years: it’s spy versus spy, where it used to be a person versus a team of people, or a team of people against another team of people. But now it’s likely that it will be the machines versus the machines. I’m curious what you’re doing in Labs in researching AI security.

Rich Uhlig: Yeah, we’ve got a lot of things going on in there. One, you talk about generative AI, and of course one of the concerns well-founded in generative AI is creating fake material. Deepfakes are starting to emerge, and they have all kinds of consequences to society. I think we can appreciate that. But you can turn that into a technical problem, which is how do you detect a Deepfake?

How do you detect it automatically and at scale? And so, we are doing research in this area. We have something called FakeCatcher that’s able to look at video content and determine if it’s likely generated fake content or if it’s authentic. And it uses lots of different techniques, different modalities of sensing. For example, a video camera can actually detect the pulse in your face based on your heart beating.

And that can be a signal of authenticity that isn’t present in a fake, and you can build detection mechanisms around that. That’s just one example; there’re many other modalities. That’s actually what the brain does too: many times it will also look at how light comes off the bone structure in the face, and that’s how we determine that something might look 99.9% right, but I know that’s not real. So, that’s really interesting.

So, we’re looking at how to do that with accuracy. We’re looking at how to do it with efficiency because it’ll be kind of an arms race. There’ll be new fake methods, and then we need to find new ways of detecting them. And then, we need to be able to do them at scale so that you can attest to the authenticity of material. So, we think that’s an important sort of ethical AI, responsible AI technology.
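The pulse-based authenticity signal Rich describes can be illustrated with a toy periodicity check: a real face carries a periodic heartbeat signal in its color trace, and a crude autocorrelation can surface that period. Everything below (the autocorrelation scoring, the synthetic sine-wave “pulse” trace) is an illustrative assumption, not FakeCatcher’s actual pipeline:

```python
import math

def dominant_period(signal, min_lag=2):
    """Return the lag with the highest autocorrelation (a crude periodicity test)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # remove the DC offset first
    best_lag, best_score = None, float("-inf")
    # Score each candidate lag by how well the trace lines up with itself.
    for lag in range(min_lag, n // 2):
        score = sum(centered[i] * centered[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic "pulse" trace: a sine wave repeating every 20 samples,
# standing in for a per-frame skin-color measurement from video.
trace = [math.sin(2 * math.pi * i / 20) for i in range(200)]
print(dominant_period(trace))  # 20
```

A genuinely fake face would tend to lack a clean dominant period in such a trace, which is the kind of cue a detector can build on, alongside the other modalities Rich mentions.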

Daniel Newman: I like that you’re pursuing that too. The potential damage in an era where people basically judge first means the stakes are really high, whether it’s politicians, enterprise business leaders, or community leaders being wronged, because you don’t see a lot of positive use cases for Deepfakes. It feels like it’s mostly nefarious. I suppose there could be some positive use cases, like White Hat

Patrick Moorhead: Intertalking or …

Daniel Newman: … Entertainment for content.

Patrick Moorhead: We’ve talked about the Pat Bot and the Dan bot.

Daniel Newman: Yeah, I wish that I could just sit at home and these videos could be created. That’d be great. But largely, to your point though, there’s such a small window of time to build a technology that could help a viewer instantly know that this is likely not real. Because right now, it seems a lot of bad… We’ve seen over the past few years, societally speaking, how unable people are to separate fact from fiction. And as it proliferates, Rich, it gets scary. It gets scary. And this is something I hope Intel can play a really significant part in helping fix.

Rich Uhlig: We’re looking out for you guys because your podcasts, if they aren’t authentic, then how-

Patrick Moorhead: I appreciate it. Well, worst case, we say something which just makes no sense, we can just say it’s a Pat bot.

Daniel Newman: Yeah.

Patrick Moorhead: Right?

Rich Uhlig: Yes.

Daniel Newman: There are large language models now that hallucinate, and people just blame it on… I don’t know what they blame it on. Yeah, it drifts a little bit.

Rich Uhlig: Exactly.

Daniel Newman: Yeah. So, by the way, we’re testing a lot of those and there is a lot of drift, but it is really fascinating. But what about the security side, as Pat had sort of asked you? You hit on responsibility.

Rich Uhlig: Yeah. So, let’s talk a little bit about an area called federated learning, which is really interesting. It does touch on security as well, because we all know that AI is driven by data, and the more data you have, the better the models get. But data doesn’t always move freely for good reasons. If you think of medical records, for example, there’s both privacy concerns around it. There’s regulatory requirements around that kind of data. And yet, that can limit the ability to apply AI methods. So, I’ll give an example.

Take brain tumor detection: if you have, let’s say, fMRI data from lots of patients, and you would like to automate the process of segmenting brain tumors in those fMRI images, the more data you have to train models to do that, the higher the accuracy will be, except that that kind of data is siloed in different medical organizations all over the globe.

Again, for good reason. So, federated learning is an approach to learning over those siloed data sets while still preserving the ownership, and the integrity of that data and the privacy of the data. It’s an approach to moving the model training around between the federation of data owners while they are permitted to maintain control of the data.

And there’s tech underlying that. You need to have things like trusted execution environments. Intel has SGX, TDX, these are both architectural support we have in our platforms to make sure that if you run these federated learning algorithms over these data silos, the data does remain protected. And so, I think that’s a good example of that intersection between security and AI.
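The federated learning loop Rich describes can be sketched in miniature: each silo trains locally and shares only model weights, never raw data, and a coordinator averages the contributions. The local update rule, the two “silos,” and the weight vectors below are all illustrative assumptions, not Intel’s actual framework:

```python
# Minimal federated-averaging sketch (FedAvg-style). The raw data in
# `silos` never leaves its owner; only weight vectors are exchanged.
# The "training" step is a hypothetical stand-in, not a real learner.

def local_update(weights, local_data):
    """Hypothetical local step: nudge each weight toward the local data mean."""
    mean = sum(local_data) / len(local_data)
    return [w + 0.1 * (mean - w) for w in weights]

def federated_average(weight_sets):
    """Average the weight vectors contributed by each data silo."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n for i in range(len(weight_sets[0]))]

global_model = [0.0, 0.0]
silos = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # private data, stays local

updates = [local_update(global_model, data) for data in silos]
global_model = federated_average(updates)
print(global_model)  # [0.35, 0.35]
```

In a real deployment, the local update would run inside a trusted execution environment (like the SGX/TDX support Rich mentions) so that even the party hosting the computation cannot inspect the data or the intermediate model state.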

Patrick Moorhead: No, that’s great. Sometimes it’s difficult to keep caught up, probably less with the research than with what everybody’s talking about on the outside, which is right now AI, AI, AI. I guess ML and DL never went away. It’s AI too, it’s a different kind of AI. But one topic that, in my opinion, seems to have a consistent burn is quantum computing. Nobody can agree on when we’re going to get to something usable. Some people say 5, some people say 10, some people say 15 years. Intel is very much in the game in researching quantum, and I was wondering if you’d give us an update on this.

Rich Uhlig: Yeah. We like to think that we’re realistic about where quantum is. We’re trying to be the adults in the room.

Patrick Moorhead: Well, you also have every area covered.

Rich Uhlig: Yeah.

Patrick Moorhead: You have neuromorphic, you have ML, DL, generative AI, and everything kind of in between. And silicon photonics, how it plays a role in about 27 other core technologies we’re not going to be able to hit.

Rich Uhlig: Yeah, and that’s part of the methodology because we’re not sure ourselves what’s going to actually work. So, we have to place… We don’t invest in everything, but we do invest in a pretty diverse way. Before I get into quantum, I’ll just make it tied to neuromorphic. You asked about neuromorphic algorithms. There’s a class of algorithms called QUBO algorithms, quadratic unconstrained binary optimization algorithms, which are actually-

Daniel Newman: Say that three times fast.

Rich Uhlig: Yeah, just say QUBO, QUBO.

Patrick Moorhead: I prefer QUBO. Thank you.

Rich Uhlig: Actually, that’s also an algorithm that’s explored in quantum, but you can implement it with neuromorphic. And so, one of the things that we’re interested in is seeing if QUBO algorithms might actually be better implemented in a neuromorphic system. It’s a good example of how two bets allow us to win either way, depending on how things unfold. But the way we think about quantum is that you have to have a story for how you’ll scale over time.
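To make the QUBO idea concrete: a QUBO problem asks for the binary vector x that minimizes x^T Q x. A tiny instance can be solved by brute force to show the form of the problem; the Q matrix below is a made-up toy, and the point of neuromorphic or quantum hardware is to tackle instances far too large for this enumeration:

```python
from itertools import product

# Tiny brute-force QUBO solver: minimize x^T Q x over binary vectors x.
# Q is a hypothetical 3-variable instance, chosen only to show the form
# of problem that neuromorphic (or quantum) hardware aims to accelerate.

def qubo_energy(Q, x):
    """Evaluate x^T Q x for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def solve_qubo(Q):
    """Enumerate all 2^n binary assignments and return the lowest-energy one."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Diagonal terms reward selecting a variable; the off-diagonal penalties
# discourage selecting adjacent pairs (think: conflicting schedule slots).
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
print(solve_qubo(Q))  # (1, 0, 1)
```

Brute force is exponential in the number of variables, which is exactly why specialized hardware that relaxes toward low-energy states, rather than enumerating them, is attractive for this problem class.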

Because most of the interesting applications for quantum are going to need millions of qubits, these quantum bits. And most systems today are in the hundreds of qubits. And in many cases, there isn’t a story for how you’ll scale to the millions. And so, how are we doing this? So, the first is that our selection of qubits is key. We are doing spin qubits that essentially trap a single electron in a transistor-like structure.

And these spin qubits are about a million times smaller than most qubits that you’ll see from some of the other players. And to me, I liken it to a vacuum tube versus a transistor. You can build a computer out of vacuum tubes, absolutely, but can you scale it over time, over decades? No. History showed us that that didn’t happen. And so, we’re starting with a qubit that we believe has an ability to scale over time. We’re building it in our 300-millimeter fabrication facilities.

Patrick Moorhead: High quality. It lasts a long time. You can do a bunch with it.

Rich Uhlig: Exactly. So, that’s the first thing. We picked a qubit that’s small, simply stated. So, it can scale to millions and beyond. The second thing is we think it’s important to have a program where the learning rate is high. And so, we’ve invested in things like cryo probers where we can test an entire wafer of qubit devices at very low temperature because that’s where these devices operate.

They’re all operating close to zero kelvin. If there’s any thermal noise, the qubits decohere and they just don’t work. And so, it’s actually a real problem for debug. If you have to put these devices into dilution fridges and it takes hours, if not days, to bring them down to the low temperatures, then you can do a test and then you bring them back out again.

The learning cycle is very slow if you don’t have that investment. So, we can test entire wafers at these low temperatures and learn much more quickly. Another thing that we’re doing is cryo-CMOS. Qubits need to be controlled, they have to be configured, and they have to be read out in order to do compute. And that means that you need to have the control electronics running at low temperatures just like the qubits do.

And the problem with a lot of solutions today is that if you look at some of the pictures of quantum systems, there are these coax cables, hundreds of them, that are controlling the qubits. And it stays at hundreds because you can’t fit more coax cables than that, and you can see the scale problem. So, by being able to run CMOS at low temperatures, really close to the qubits, we think we’ve got a pathway to scale.

So, these are just examples of how we’re doing this. We didn’t start with only spin qubits. We were doing transmon or superconducting qubits, and we came to the conclusion that they weren’t going to scale. And so, now we’re placing all our bets on the spin qubits. And we got a new device that we call Tunnel Falls. It’s a 12-qubit implementation.

In the same way that we collaborate outside Intel with neuromorphic, we’re doing so in quantum. So, we’re sharing these devices with academic institutions so they can test them, so they can learn from them. We can learn together.

Daniel Newman: Rich, there’s so much here.

Patrick Moorhead: Exactly.

Daniel Newman: We tried to do it in half an hour, and we could easily go an hour, a day.

Rich Uhlig: This will be the Sixty Fifty, not the Six Five.

Daniel Newman: No, I love it. Listen, there’s some podcasts where you hear things you know, and there’s some where you walk away and you’re like, “I learned a lot.” And talking to Rich here, there’re a lot of things here that I truly could say I did not know. So, I appreciate very much you joining us here on the Six Five.

Rich Uhlig: Yeah. It was a great chat. I enjoyed it.

Daniel Newman: Let’s have a great innovation event and let’s have you back soon.

Rich Uhlig: Yeah, I would love to. Appreciate it. Thanks guys.

Daniel Newman: All right. Check out all the conversations that Patrick and I are having here at the event. Subscribe. Join us for all of the Six Five episodes. There’s so much here. For now, we got to say goodbye. We’ll see you all later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
