CAIDP Files Complaint with FTC Against OpenAI’s GPT-4 for Violating Consumer Protection Rules

The News: On March 30, the Center for AI and Digital Policy (CAIDP), an artificial intelligence-focused tech ethics group, filed a complaint with the Federal Trade Commission (FTC) asking the FTC to investigate OpenAI for violating consumer protection rules, arguing that the organization’s rollout of AI text generation tools has been “biased, deceptive,” and a risk to public safety. Read more from Engadget on the CAIDP complaint.
Analyst Take: While the public has largely embraced OpenAI’s ChatGPT, AI researchers and others have expressed trepidation about the speed at which AI technologies are being developed. In a high-profile open letter, tech leaders and prominent AI researchers called on AI labs and companies to “immediately pause” their work. Steve Wozniak, OpenAI co-founder Elon Musk, and the other experts who signed the letter believe the risks of AI technology warrant a break of at least six months from producing technology beyond GPT-4. The letter argues that care and forethought are necessary to ensure the safety of AI systems, and expresses concern that both are being ignored in the race to be first to deploy the most advanced AI technology. I agree. Moving fast, especially in the tech space, can be a huge advantage. But moving too quickly with technology that can be biased and inaccurate, and that raises serious ethical concerns, can be incredibly dangerous. Exciting and transformative? Absolutely. But excitement alone can’t, and shouldn’t, assuage legitimate concerns.

The CAIDP’s FTC Complaint

Following the letter’s publication, the Center for AI and Digital Policy (CAIDP), an artificial intelligence-focused tech ethics group, filed a complaint with the Federal Trade Commission (FTC) asking the agency to investigate OpenAI for violating consumer protection rules. CAIDP argues that OpenAI is violating the FTC Act through its releases of large language AI models like GPT-4. According to the complaint, the OpenAI model is “biased, deceptive,” and threatens both privacy and public safety. CAIDP president Marc Rotenberg was one of the letter’s signatories, and like the letter, the complaint calls for slowing the development of generative AI models and for stricter government oversight.

Image Source: OpenAI

The CAIDP complaint claims GPT-4, which was released earlier this month, was launched without any independent assessment and without any way for outsiders to replicate OpenAI’s results. According to CAIDP, the GPT-4 system could be used to spread disinformation, contribute to cybersecurity threats, and potentially worsen or lock in many of the biases already well documented in AI models.

The complaint goes on to present several scenarios, including ones in which AI models failed to recognize or act on potential hazards to children, facilitated corporate espionage, allowed cybercriminals with limited technical skills to develop malware such as ransomware and other malicious code, and lowered the knowledge barriers needed to mount successful cyberattacks.

The CAIDP complaint against OpenAI’s GPT-4 requests the FTC:

  • Halt further commercial deployment of any GPT by OpenAI
  • Establish independent assessment of GPT products prior to future deployment
  • Ensure that future deployment of GPT is in alignment with FTC AI guidance
  • Require constant independent assessment throughout the GPT AI’s lifecycle
  • Establish a publicly accessible reporting mechanism for incidents
  • Initiate rule-making that would establish baseline standards for products in the AI market sector

A full copy of the CAIDP complaint is available here.

While the public has eagerly embraced ChatGPT and GPT-4, and companies remain locked in a race to deploy the technologies, tech leaders and AI researchers want to put the brakes on further development to allow time to better understand the potential impact and to put government oversight in place. And make no mistake: I’m a gen AI fan and see broad implications for its use. It’s exciting, and we are at the beginning stages of seeing what’s possible. The technology’s potential for good is, without question, significant, but that doesn’t mean there shouldn’t be awareness of, and concern about, the negative impacts it can have on the public without sufficient guardrails in place.

Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum Research as a whole.

Other insights from Futurum Research:

Italy DPA Announces Ban of OpenAI’s ChatGPT, Will Other EU Countries Follow Suit?

Google Invests $300mn in Artificial Intelligence Start-Up Anthropic, Taking on ChatGPT

Google Bard Takes on Microsoft’s Bing ChatGPT Integration

Image Credit: CNBC

Author Information

Shelly Kramer is a Principal Analyst and Founding Partner at Futurum Research. A serial entrepreneur with a technology-centric focus, she has worked alongside some of the world’s largest brands to embrace disruption and spur innovation, understand and address the realities of the connected customer, and help navigate the process of digital transformation. She brings 20 years’ experience as a brand strategist to her work at Futurum, and has deep experience helping global companies with marketing challenges, GTM strategies, messaging development, and driving strategy and digital transformation for B2B brands across multiple verticals. Shelly’s coverage areas include Collaboration/CX/SaaS, platforms, ESG, and Cybersecurity, as well as topics and trends related to the Future of Work, the transformation of the workplace, and how people and technology are driving that transformation. A transplanted New Yorker, she has learned to love life in the Midwest, and has firsthand experience that some of the most innovative minds and most successful companies in the world also happen to live in “flyover country.”
