Intel Axes Nervana AI Chips Making Habana The Path Forward

The News: Intel said it is ending work on its Nervana neural network processors in favor of the artificial intelligence chips it gained through its recent $2 billion acquisition of Habana Labs.

The Santa Clara, Calif.-based company said Friday it has ended development of its Nervana NNP-T training chips and will deliver on current customer commitments for its Nervana NNP-I inference chips, so that it can move forward with Habana Labs’ Gaudi and Goya processors in their place. Read the full news item on CRN.

Analyst Take: After the acquisition, I was outspoken that this would likely mean the end of the Nervana experiment for Intel, despite the fact that the company had finally started shipping the long-awaited inference and training chips from the 2016 acquisition, which was made for upwards of $350 million. This has now been confirmed, and I think it was a positive step for Intel.

Why Intel Will End its Commitment to Nervana

The decision was actually simpler than it may seem. The big money invested in Nervana would typically have given Intel pause, but Intel knows the importance of the AI space, and Habana and Nervana in many ways tackle the same problems; trying to take both to market would likely have ended in disaster for the company. Sharper, more immediate focus is a sound approach going forward.

Long story short, the Nervana chips weren't as capable as the Habana chips. Recent MLPerf benchmark tests comparing the two found that a pair of Nervana NNP-I chips reached 10,567 inputs per second on ResNet-50, while a single Habana Goya chip hit 14,451 inputs per second on the same test. It also helps that Goya has been shipping for two years, while the NNP-I hasn't yet become generally available.
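Normalizing those figures per chip makes the gap even starker. The snippet below is a back-of-envelope sketch using only the numbers cited above; the per-chip normalization is my own arithmetic, not part of the MLPerf report:

```python
# Per-chip throughput comparison from the MLPerf ResNet-50 inference
# figures cited above (sketch; assumes throughput splits evenly
# across the two NNP-I chips).

nervana_nnp_i_total = 10_567   # inputs/sec across TWO NNP-I chips
nervana_chips = 2
habana_goya = 14_451           # inputs/sec on a SINGLE Goya chip

per_chip_nnp_i = nervana_nnp_i_total / nervana_chips
advantage = habana_goya / per_chip_nnp_i

print(f"NNP-I per chip: {per_chip_nnp_i:.0f} inputs/sec")
print(f"Goya per-chip advantage: {advantage:.1f}x")
```

On a per-chip basis, Goya comes out roughly 2.7x ahead, which helps explain why carrying both product lines made little sense.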

The Gaudi fabric is also highly important and differentiated. Remote Direct Memory Access (RDMA), which allows memory to be shared across nodes without taxing the CPU, enables much greater scale. Habana's use of standard Ethernet also makes its chips more affordable and faster than other offerings in the market. From the time the acquisition was announced, my thought was that Habana provides greater scale than Nervana; Intel seemingly agreed.

Habana is also designed for a more open, ubiquitous ecosystem, which will be important to the product's success in the long run. This fits into Intel's XPU strategy of bringing AI across different chip architectures together. This isn't a minor detail; it's critical for long-term success.

Overall Impressions on the Decision to Commit to Habana

As I have been saying, it is highly important for Intel that this next wave of AI investments and products succeed. This isn't a trend Intel wants to lag on, and while some of the company's recent challenges in process technology and supply have dominated the headlines, AI is still a huge market opportunity in which Intel will be fighting for a greater presence.

With Habana having the more capable chipset and the two product sets largely duplicating each other, I believe Intel made a sound decision to focus on the solution with the most potential, something that the Intel of yesteryear didn't necessarily always do. This is a fail-fast situation from which the company should benefit in the long run, despite some hard-to-swallow costs that will be absorbed from the $350 million+ spent on Nervana.

This is a good move for Intel, but one that will ultimately be judged by Intel's future performance in the AI space, which will be much clearer in the next 12-24 months than it is today.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Other insights from the Futurum Research team:

Honeywell and Verizon Partner to Accelerate Smart Grid

IBM Announces Change at the Top: Driving The Company Into The Future

Cloud Momentum Powers Another Huge Quarter for Microsoft

Image Credit: Intel

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author whose most recent book is “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA holder and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking engagements take him around the world each year as he shares his vision of the role technology will play in our future.
