Is ChatGPT’s OpenAI Looking to Make Its Own AI Chips?

Of all the AI chip strategies from all the generative AI startups in the world, ChatGPT’s OpenAI is reportedly contemplating a move into the AI chip-making business. According to a recent report by Reuters, OpenAI, the creator of ChatGPT, is exploring the idea of building its own chips, acquiring an existing chip maker, or expanding its pool of chip suppliers beyond NVIDIA, its lone supplier today. The Reuters report said the company has been discussing various options since 2022 due to shortages of the kinds of AI chips it needs for its work. No decision has been made by OpenAI on the matter, according to Reuters, but the possibility certainly raises some interesting issues.

I get that ChatGPT’s OpenAI might want more control over the availability and pricing of the AI chips that are critical to its operations and business. That makes perfect sense. So, the Reuters story is right on point, relating an intriguing storyline that is apparently unfolding in the executive offices and hallways of OpenAI.

At first glance, creating its own AI chips could look like a good idea for OpenAI. Sure, build your own chips so you do not have to rely on anyone else; you can produce and secure the chip supplies you need and keep your company ahead of its competitors.

On second glance, though, you must consider the significant ramifications of such a move, and they are not small. To start, there are the immense costs of taking on the manufacturing of AI chips. You think paying someone else for their chips is expensive? Then look at what it will cost you to design your own chips and then build your own chip-making facilities, or line up a fab that has the capacity to make them for you. And what will it cost you to develop a roadmap of new and better chips on a never-ending schedule into the future? And as if that were not enough, what about your own supply chain worries about keeping the chips flowing? There are an awful lot of zeroes in the price tags for such operations, even if you decide to acquire an existing chip maker or hire a fab to make the chips for you.

Meanwhile, let us say that even with all these challenges you still decide to pursue the idea. Where does that leave you?

Well, none of these complex processes happens quickly, so your new AI chips will only begin reaching the market years after you start the work. That means your competitors will have been advancing their core technologies all that time, upgrading and replacing their products with faster chips while you were just getting things off the ground. It seems to me that you might need a very long time to catch up and make it worthwhile, if you could catch up at all.

You know, all of this makes my head spin. It reminds me of carmakers and the similar decisions they must make each year to introduce new car and truck models: stamping machines that make body panels, engine production lines and casting systems that must be changed out, and a million other decisions and production steps are affected. And sometimes, by the time the new vehicle models arrive a few years later, the market has changed. Oops. OpenAI might not want to get into that situation at all.

My Bottom Line on the ChatGPT OpenAI Chip-Making Rumor

First, let us remember that so far, this story is just that: a rumor reported by Reuters. Maybe it will not come to fruition. Maybe it is just the company floating a trial balloon in the marketplace.

However, maybe the idea of OpenAI producing its own chips is not so crazy.

Maybe OpenAI could pull it off with the continuing help of Microsoft and other financial backers that envision real benefits from such an uphill battle. Yes, at this point I see it as more posturing and ego than smart strategy, but maybe OpenAI foresees something missing from my own crystal ball.

Certainly, AI chips will never be inexpensive to develop. There must be a very attractive business case for jumping into that market, given all its risks, and maybe OpenAI has such a reason on its corporate radar that none of us yet understands.

Or perhaps all this talk is just to stir the field and make some noise. OpenAI is not shy about making noise in the AI marketplace, so that is also possible.

For now, we will have to wait and see what happens. As a technology watcher who has followed the swift global expansion of generative AI over the past few years, I would not have guessed that producing one’s own AI chips would become part of the strategy.

But as I think about it, OpenAI has never looked at things the way other companies do. With new ideas, visions, technologies, and directions, ChatGPT’s OpenAI could well be looking to solve its own IT challenges with yet another all-new approach: producing its own AI chips. It will be fascinating to watch how this goes and to learn whether the rumors are true.

Other insights from The Futurum Group:

The Ramifications of ChatGPT Going Realtime Web

OpenAI ChatGPT Enterprise: A Tall Order

Google Cloud’s TPU v5e Accelerates the AI Compute War
