Is ChatGPT’s OpenAI Looking to Make Its Own AI Chips?

Of all the AI chip strategies from all the generative AI startups in the world, ChatGPT creator OpenAI is reportedly contemplating a move into the AI chip-making business. According to a recent Reuters report, OpenAI is exploring the idea of building its own chips, acquiring an existing chip maker, or expanding its pool of chip suppliers beyond NVIDIA, its lone supplier today. Reuters said the company has been discussing these options since 2022 because of shortages of the AI chips needed for its work. OpenAI has made no decision on the matter, according to Reuters, but the possibility certainly raises some interesting issues.

I understand why OpenAI might want more control over the availability and pricing of the AI chips that are critical to the company's operations and business. That makes perfect sense. So the Reuters story is right on point, relating an intriguing storyline that is apparently unfolding in the executive offices and hallways of OpenAI.

At first glance, it looks like creating its own AI chips could be a good idea for OpenAI to consider. Sure, build your own chips so you do not have to rely on anyone else; you can produce and secure the chip supplies that you need to serve yourself and keep your company ahead of your competitors.

On second glance, though, you must consider the significant ramifications of such a move, and they are not small. To start, there are the immense costs of taking on AI chip manufacturing. You think paying someone else for their chips is expensive? Look at what it will cost to design your own chips and then build your own chip-making facilities, or line up a fab with the capacity to make them for you. What will it cost to develop a roadmap of new and better chips on a never-ending schedule into the future? And as if that were not enough, you now have your own supply chain worries about keeping the chips flowing. There are an awful lot of zeroes in the price tags for such operations, even if you decide to acquire an existing chip maker or hire a fab to do the manufacturing.

Meanwhile, let us say that even with all these challenges you still decide to pursue the idea. Where does that leave you?

Well, none of these complex processes happens quickly, so your new AI chips would come to market years after you begin the work. That means your competitors will have spent all that time advancing their core technologies while you were just getting things off the ground. In that span, they will be upgrading and replacing their products with faster chips rather than spending their time and money just getting started. It seems to me that you might need a very long time to catch up and make the investment worthwhile, assuming you could catch up at all.

You know, all of this makes my head spin. It reminds me of carmakers and the similar decisions they must make each year when introducing new car and truck models. Stamping machines that make body panels, engine production lines, casting systems that must be changed out, and a million other decisions and production steps are affected. And sometimes, by the time the new vehicle models arrive a few years later, the market has changed. Oops. OpenAI might not want to get into that situation at all.

My Bottom Line on the ChatGPT OpenAI Chip-Making Rumor

First, let us remember that so far, this story is just that: a rumor from a Reuters article. Maybe it will not come to fruition. Maybe the company is just floating the idea in the marketplace.

However, maybe the idea of OpenAI producing its own chips is not so crazy.

Maybe OpenAI could pull off this idea with the continuing help of Microsoft and other financial backers that might envision real benefits from such an uphill battle. Yes, I might see it as more posturing and ego than as a smart strategy at this point, but maybe OpenAI foresees something I am missing in my own crystal ball.

Certainly, AI chips are never going to be inexpensive to develop. There must be a very attractive business reason to jump into that market, given all its risks. And maybe OpenAI has such a reason on its corporate radar that none of us yet understands.

Or perhaps all this talk is just to stir the field and make some noise. OpenAI is not shy about making noise in the AI marketplace, so that is also possible.

For now, we will have to wait and see what happens. As a technology watcher who has followed the swift global expansion of generative AI in the marketplace over the past few years, I would not have guessed that producing one's own AI chips would become part of the strategy.

But as I think about it, OpenAI has never looked at things the way other companies look at things. With new ideas, visions, technologies, and directions, ChatGPT’s OpenAI could well be looking at solving its own IT challenges by using what could be yet another all-new approach, by producing its own AI chips. It will be fascinating to watch how this goes and to learn whether the rumors are true or not.

Other insights from The Futurum Group:

The Ramifications of ChatGPT Going Realtime Web

OpenAI ChatGPT Enterprise: A Tall Order

Google Cloud’s TPU v5e Accelerates the AI Compute War
