CAIDP Files Complaint with FTC Against OpenAI’s GPT-4 for Violating Consumer Protection Rules

The News: On March 30, the Center for AI and Digital Policy (CAIDP), an artificial intelligence-focused tech ethics group, filed a complaint with the Federal Trade Commission (FTC) asking the FTC to investigate OpenAI for violating consumer protection rules, arguing that the organization’s rollout of AI text generation tools has been “biased, deceptive,” and a risk to public safety. Read more from Engadget on the CAIDP complaint.

Analyst Take: While the public has largely embraced OpenAI’s ChatGPT, AI researchers and others have expressed trepidation about the speed at which AI technologies are being developed. In a high-profile open letter, tech leaders and prominent AI researchers called for AI labs and companies to “immediately pause” their work. Steve Wozniak, OpenAI co-founder Elon Musk, and the letter’s other signatories believe the risks of AI technology warrant a minimum six-month break from producing technology more powerful than GPT-4. The letter adds that care and forethought are necessary to ensure the safety of AI systems, but expresses concern that both are being ignored in the race to deploy the most advanced AI technology first. I agree. Moving fast, especially in the tech space, can be a huge advantage. Moving too quickly with technology that can be inherently biased and inaccurate, and that raises massive ethical concerns, can be incredibly dangerous. Exciting and transformative? Absolutely. But excitement alone can’t, and shouldn’t, assuage legitimate concerns.

The CAIDP’s FTC Complaint

Following the letter’s publication, CAIDP filed its complaint with the FTC, asking the agency to investigate OpenAI for violating consumer protection rules. CAIDP argues that OpenAI is violating the FTC Act through its releases of large language AI models like GPT-4. According to the complaint, the OpenAI model is “biased, deceptive,” and threatens both privacy and public safety. CAIDP president Marc Rotenberg was one of the open letter’s signatories, and like the letter, the complaint calls for slowing the development of generative AI models and for stricter government oversight.

Image Source: OpenAI

The CAIDP complaint claims GPT-4, which was released earlier this month, was launched without any independent assessment and without any way for outsiders to replicate OpenAI’s results. According to CAIDP, the GPT-4 system could be used to spread disinformation, contribute to cybersecurity threats, and potentially worsen or entrench many of the biases already well documented in AI models.

The complaint goes on to present several scenarios, including ones in which AI models failed to recognize or act on potential hazards to children, facilitated corporate espionage, allowed cybercriminals with limited technical skills to develop ransomware and other malicious code, and lowered the knowledge barriers needed to mount successful cyberattacks.

The CAIDP complaint against OpenAI’s GPT-4 requests that the FTC:

  • Halt further commercial deployment of any GPT by OpenAI
  • Establish independent assessment of GPT products prior to future deployment
  • Ensure that future deployment of GPT is in alignment with FTC AI guidance
  • Require constant independent assessment throughout the GPT AI’s lifecycle
  • Establish a publicly accessible reporting mechanism for incidents
  • Initiate rule-making that would establish baseline standards for products in the AI market sector

A full copy of the CAIDP complaint is available here: https://www.caidp.org/cases/openai/.

While the public has eagerly embraced ChatGPT and GPT-4, and companies remain locked in a race to deploy these technologies, tech leaders and AI researchers want to put the brakes on further development to allow time to better understand the potential impact and to establish government oversight. And make no mistake: I’m a gen AI fan and see broad implications for its use. It’s exciting, and we are at the beginning stages of seeing what’s possible. The technology’s potential for good is, without question, significant, but that doesn’t mean there shouldn’t be awareness of, and concern about, the negative impacts it can have on the public without sufficient guardrails in place.

Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of Futurum Research as a whole.

Other insights from Futurum Research:

Italy DPA Announces Ban of OpenAI’s ChatGPT, Will Other EU Countries Follow Suit?

Google Invests $300mn in Artificial Intelligence Start-Up Anthropic, Taking on ChatGPT

Google Bard Takes on Microsoft’s Bing ChatGPT Integration

Image Credit: CNBC

Author Information

Shelly Kramer is a serial entrepreneur with a technology-centric focus. She has worked alongside some of the world’s largest brands to embrace disruption and spur innovation, understand and address the realities of the connected customer, and help navigate the process of digital transformation.
