OpenAI ChatGPT Enterprise: A Tall Order

The News: On August 28, OpenAI announced the general availability of ChatGPT Enterprise, which the company says will deliver enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows, and advanced data analysis capabilities.

The key elements of the solution at launch include:

  • Protects enterprise data
  • Does not train on a customer’s business data or conversations, and its models do not learn from customer usage
  • Is SOC 2 compliant, and all conversations are encrypted
  • Features a new administrator console that enables customers to manage team members and offers domain verification and single sign-on (SSO)
  • Leverages GPT-4 with no usage caps and performs up to two times faster than the previous version
  • Includes 32K-token context windows, enabling users to process inputs or files four times longer
  • Provides unlimited access to Code Interpreter, ChatGPT’s data analysis tool
  • Enables feature customization through APIs (see the brief sketch after this list)
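
To ground the long-context and API points above, here is a minimal sketch of calling a long-context GPT-4 model through the OpenAI Python SDK. It is illustrative only, not OpenAI’s enterprise tooling: the model name "gpt-4-32k", the input file, and the prompt are assumptions, and actual model availability depends on the account and plan.

```python
# Minimal illustrative sketch: send a long document to a long-context GPT-4
# model via the OpenAI Python SDK (v1.x style). Model name and file are
# assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A document long enough to benefit from a 32K-token context window.
with open("quarterly_report.txt") as f:
    long_document = f.read()

response = client.chat.completions.create(
    model="gpt-4-32k",  # illustrative long-context model name
    messages=[
        {"role": "system", "content": "You are an enterprise analyst assistant."},
        {"role": "user", "content": f"Summarize the key risks in this report:\n\n{long_document}"},
    ],
)
print(response.choices[0].message.content)
```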

OpenAI says it is working on more features and will be sharing a more detailed roadmap. Read the full blog post on the launch of ChatGPT Enterprise on the OpenAI website.

Analyst Take: Let’s be frank—the generative AI moment is upon us because of OpenAI’s ChatGPT. No other company or product has had anywhere close to the impact. But OpenAI was launched as a research-focused startup and was not purpose-built as a business to deliver enterprise-grade products. The pressure is on OpenAI to evolve its business model, and ChatGPT Enterprise is clearly part of that evolution. What impact will ChatGPT Enterprise have? Here is what we think.

Does ChatGPT Enterprise Mitigate LLM Challenges?

In enterprise settings, ChatGPT has been handled delicately to mitigate the inherent challenges of a large language model (LLM) trained on enormous amounts of public data (inaccuracy, bias, hallucinations, explainability, and so on). A few examples come to mind: Microsoft added layers of enterprise security and customization for ChatGPT, and Salesforce customizes ChatGPT by pointing models at curated private domains, thereby sidestepping data quality concerns.
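
As a stripped-down illustration of that curated-private-domain approach (a sketch under stated assumptions, not Salesforce’s or Microsoft’s actual implementation), the snippet below restricts the model to pre-approved internal passages so answers are grounded in vetted data rather than the open web; the helper function and model name are hypothetical.

```python
# Illustrative sketch only: ground answers in curated, pre-approved passages.
# The helper function and model name are hypothetical, not a vendor API.
from openai import OpenAI

client = OpenAI()

def answer_from_curated_sources(question: str, passages: list[str]) -> str:
    """Answer a question using only the curated passages supplied."""
    context = "\n\n".join(f"[Source {i + 1}]\n{p}" for i, p in enumerate(passages))
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer strictly from the sources provided. "
                    "If the sources do not contain the answer, say you do not know."
                ),
            },
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```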

Do these new features for ChatGPT Enterprise address enough concerns for businesses to fully embrace it? Of the features listed, the most important in that regard are data protection and customization. Some enterprises will certainly experiment with ChatGPT Enterprise to see whether the data privacy and customization elements can mitigate ChatGPT’s challenges at a cost that makes sense.

Pressure to Monetize

Inspiring ideas and technologies do not always make sellable, affordable products. Even acknowledging that reality, OpenAI’s valuation is mind-blowing at more than $27 billion; however, that figure is not necessarily tied to OpenAI’s prospects as a profit-making entity. At this point, the company’s value is more logically that of an acquisition target.

But today, OpenAI is a standalone company with reported revenue of $30 million in 2022 and an estimated target of $200 million in 2023. Costs for OpenAI are estimated to be substantial. There are reports that the company spent more than $540 million in 2022 to develop ChatGPT, and that it costs more than $700,000 a day to run it. The business case for profitability will not run through the $20-per-month individual ChatGPT licenses the company currently offers; the logical place to find that success is the enterprise. OpenAI did not publish enterprise license pricing.
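
The back-of-envelope arithmetic behind that conclusion, using the reported estimates above (third-party estimates, not OpenAI disclosures), looks roughly like this:

```python
# Back-of-envelope arithmetic using the reported estimates cited above.
daily_run_cost = 700_000             # estimated daily cost to run ChatGPT, USD
annual_run_cost = daily_run_cost * 365
revenue_target_2023 = 200_000_000    # estimated 2023 revenue target, USD

print(f"Estimated annual running cost: ${annual_run_cost:,}")   # $255,500,000
print(f"Gap vs. 2023 revenue target:   ${annual_run_cost - revenue_target_2023:,}")  # $55,500,000
```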

Path Through Partners

Perhaps OpenAI has already found its enterprise strategy with Microsoft as its go-to-market partner. OpenAI is clearly a technology innovator, but the company was not designed as a profit-making business. Why not simply focus on being the best technology partner you can be to those who do have business chops? In Microsoft, OpenAI has more than that. Not only does Microsoft understand and execute supremely well with enterprise-grade software and have unparalleled enterprise sales channels and relationships, the company is also one of the world leaders in AI innovation and productization.

Perhaps the best move for OpenAI is not to sell anything directly but rather to focus on being a technology partner to enterprise-focused organizations.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Google, Microsoft, OpenAI, and Anthropic Form AI Industry Group

Generative AI War? ChatGPT Rival Anthropic Gains Allies, Investors

Tech Giants and White House Join Forces on Safe AI Usage

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business and holds a Bachelor of Science from the University of Florida.
