Qualcomm Introduces Cloud AI 100 Powering Cloud Inference

This week at the AI Hardware Summit, Qualcomm VP of Product Management Ziad Asghar presented the company’s new AI inference chip, the Cloud AI 100, placing the company squarely in the middle of the AI inference conversation.

While the Cloud AI 100 demonstrations at the event ran on a prototype, Qualcomm expects to have the solution in market in 2020. I expect this to elevate Qualcomm’s AI profile, particularly in the AI inference discussion. I believe AI inference will be the next frontier for AI, as the mass of training data and trained models drives demand for inference, especially as mobile continues to proliferate rapidly.

The Mobile AI Challenge

We have entered not only the 5G era but also the AI era. This means billions of connected devices, massive volumes of data being created, and a race for new services at the edge.

For mobile AI to work, the technology needs to handle complex concurrencies, large and complicated neural networks, real-time demand, and low power consumption. Low power is an area where Qualcomm truly excels: the company has built IP that lets it operate at a baseline of milliwatts rather than tens of watts. While this may sound benign, it will be extremely important, as the exponential increase in cloud inferencing is rapidly driving up power demand in the cloud. To counter that trend, the companies that can do the most AI compute at the lowest possible power are best positioned.
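To see why performance per watt, not raw throughput, is the metric that matters here, a back-of-envelope comparison helps. All figures below are illustrative assumptions, not published specifications for any product:

```python
# Hedged back-of-envelope: energy efficiency (inferences per joule) of a
# low-power edge accelerator vs. a datacenter GPU. The throughput and
# power numbers are illustrative assumptions, not real product specs.
def inferences_per_joule(inferences_per_sec: float, watts: float) -> float:
    """Watts are joules per second, so throughput / power = work per joule."""
    return inferences_per_sec / watts

edge = inferences_per_joule(1_000, 15)   # assumed: 1,000 inf/s at 15 W
gpu = inferences_per_joule(5_000, 250)   # assumed: 5,000 inf/s at 250 W
print(f"edge: {edge:.1f} inf/J, gpu: {gpu:.1f} inf/J")
```

Under these assumed numbers the edge part delivers several times more inferences per joule despite far lower peak throughput, which is the dynamic favoring low-power designs as inference volume scales.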

To put all of this into perspective, consider the sheer scale of mobile growth that lies ahead. With more than 7.3 billion units expected to ship between 2018 and 2023, mobile will be the most diverse challenge in the near future.

The Cloud AI 100 appears to be designed with these specific challenges in mind.

What Was On Display?

During the AI Hardware Summit, Qualcomm demonstrated the Cloud AI 100 for the first time, running ResNet-50 and performing real-time inferencing on an FPGA platform. Built on a 7nm process, Qualcomm’s Cloud AI 100 currently delivers more than 350 TOPS, which enables it to provide best-in-class inferencing.
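For context on what a TOPS figure means, here is a hedged sketch of how such a peak number is typically derived. The Cloud AI 100’s internal organization is not public, so the MAC count and clock below are purely hypothetical, chosen only so the arithmetic lands near the cited 350 TOPS:

```python
# Hedged sketch: deriving a peak-TOPS figure from an accelerator's
# multiply-accumulate (MAC) count and clock speed. The hardware
# parameters below are hypothetical, not Qualcomm specifications.
def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
    # Each MAC counts as two operations (multiply + add);
    # TOPS = tera (1e12) operations per second.
    ops_per_sec = macs_per_cycle * 2 * clock_ghz * 1e9
    return ops_per_sec / 1e12

print(peak_tops(125_000, 1.4))  # a hypothetical config reaching 350 TOPS
```

Peak TOPS is a theoretical ceiling; real inference throughput on a network like ResNet-50 also depends on memory bandwidth, precision, and how fully the MAC array can be kept busy.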

The Cloud AI 100 will support the full software and application stack, including PyTorch, TensorFlow, and more.

Target Markets for the Cloud AI 100?

Qualcomm’s new inference chip is designed to handle inferencing workloads across multiple markets; the company has identified the following:

Datacenter: A competitive offering against current AI inference chips available in the market focused on traditional datacenter workloads.

Autonomous Vehicles: As ADAS levels continue to rise toward full autonomy, enhanced inferencing capability will be critical.

5G Infrastructure: The ability to handle complex load balancing tasks will be necessary as 5G infrastructure expansions rapidly grow in the coming years.

5G Edge: Applications to deliver on the promise of smart cities and future retail environments.

Qualcomm: A Sure Competitor With the Cloud AI 100

While Qualcomm has been quiet in the AI chip discussion, the company is making its intentions clear with the launch of the Cloud AI 100. It shouldn’t be surprising to see the company enter this market given its role in 5G and mobile technology, which gives Qualcomm deep knowledge of signal processing, low-power computing, and global scale, all of which it can build on using the latest process nodes.

As far as I can see, adding another company with a strong track record of innovation to the AI inference chip market will only accelerate competitive development in the space.

Read more Analysis from Futurum Research:

Qualcomm Wins Partial Stay In FTC Ruling, Overturn Likely To Follow

VMware and NVIDIA Partnership Accelerates AI From On-Prem To The Cloud

IBM Wisely Goes Open Source With Its Power CPU Architecture

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
