Is ChatGPT’s OpenAI Looking to Make Its Own AI Chips?

Of all the AI chip strategies from all the generative AI startups in the world, ChatGPT’s OpenAI is reportedly contemplating a move into the AI chip-making business. According to a recent report by Reuters, OpenAI, the creator of ChatGPT, is exploring the idea of building its own chips, acquiring an existing chip maker, or expanding its pool of chip suppliers beyond NVIDIA, its lone supplier today. The Reuters report said the company has been discussing various options since 2022 due to shortages of the AI chips its work requires. No decision has been made, according to Reuters, but the possibility certainly raises some interesting issues.

I get that ChatGPT’s OpenAI might want more control over the availability and pricing of AI chips, which are critical to the company’s operations and business. That makes perfect sense. So the Reuters story is right on point, relating an intriguing storyline that is apparently unfolding in the executive offices and hallways of OpenAI.

At first glance, it looks like creating its own AI chips could be a good idea for OpenAI to consider. Sure, build your own chips so you do not have to rely on anyone else; you can produce and secure the chip supplies that you need to serve yourself and keep your company ahead of your competitors.

On second glance, though, you must consider the significant ramifications of such a move, and they are not small. To start, there are the immense costs of taking on AI chip manufacturing. You think paying someone else for their chips is expensive? Then look at what it will cost to design your own chips and either build your own chip-making facilities or line up a fab with the capacity to make them for you. What will it cost to develop a roadmap of new and better chips on a never-ending schedule into the future? And as if that were not enough, you now have your own supply chain to worry about to keep the chips flowing. There are an awful lot of zeroes in the price tags for such operations, even if you decide to acquire an existing chip maker or hire a fab to do the manufacturing for you.

Meanwhile, let us say that even with all these challenges you still decide to pursue the idea. Where does that leave you?

Well, none of these complex processes happens quickly, so your new AI chips would only start coming to market years after you begin the work. That means your competitors will have been advancing their core technologies all that time while you were just getting things off the ground. In that same span, those competitors will be upgrading and replacing their products with faster chips rather than spending their time and money getting started. It seems to me that you would need a very long time to catch up and make it worthwhile, if you could catch up at all.

You know, all of this makes my head spin. It makes me think of carmakers and the similar decisions they must make each year to introduce new car and truck models. Stamping machines that make body panels, engine production lines, casting systems that must be changed out, and a million other decisions and production steps are affected. And sometimes, by the time the new vehicle models arrive a few years later, the market has changed. Oops. OpenAI might not want to get into that situation at all.

My Bottom Line on the ChatGPT OpenAI Chip-Making Rumor

First, let us remember that so far, this story is just that: a rumor from a Reuters article. Maybe it will not come to fruition. Maybe it is just the company venting in the marketplace.

However, maybe the idea of OpenAI producing its own chips is not so crazy.

Maybe OpenAI could pull off this idea with the continuing help of Microsoft and other financial backers that might envision real benefits from such an uphill battle. Yes, I might see it as more posturing and ego than as a smart strategy at this point, but maybe OpenAI foresees something I am missing in my own crystal ball.

Certainly, AI chips are never going to be inexpensive to develop. There must be a very attractive business reason to jump into that market, given all its risks. And maybe OpenAI has such a reason in mind that is on the company’s corporate radar and that none of us yet understand.

Or perhaps all this talk is just to stir the field and make some noise. OpenAI is not shy about making noise in the AI marketplace, so that is also possible.

For now, we will have to wait and see what happens. Having followed the swift global expansion of generative AI in the marketplace over the past few years, as technology watchers like me have, I would not have guessed that producing one’s own AI chips would become part of the strategy.

But as I think about it, OpenAI has never looked at things the way other companies do. With new ideas, visions, technologies, and directions, ChatGPT’s OpenAI may well be trying to solve its own infrastructure challenges with yet another all-new approach: producing its own AI chips. It will be fascinating to watch how this goes and to learn whether the rumors are true.

Other insights from The Futurum Group:

The Ramifications of ChatGPT Going Realtime Web

OpenAI ChatGPT Enterprise: A Tall Order

Google Cloud’s TPU v5e Accelerates the AI Compute War
