AI in Context: Remarks on the NVIDIA GTC 2024 GenAI and Ethics Panel

The News: On Monday, March 18, I virtually attended the "Beyond the Output: Navigating the Ethical Challenges of Generative AI" panel at the NVIDIA GTC 2024 conference. The panel discussed ethics, bias, intellectual property (IP) protection, indemnification, open source, and the role of developers.

The panel:

  • Iain Cunningham, Vice President & Deputy General Counsel, NVIDIA
  • Brian Green, Director of Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University
  • Gina Moape, Lead, Community Engagement, Mozilla Foundation
  • Nikki Pope, Senior Director, AI and Legal Ethics, NVIDIA

Visit the NVIDIA website for a replay of the panel session.

Analyst Take: There is no question that the output of generative AI products can amaze and surprise us, though the latter is not always in a good way. Bias is a well-known concern for machine learning (ML) in general, and other ethical and legal concerns have come to the forefront since the introduction of closed- and open-source generative AI models and delivery platforms. This panel offered good guidance for users and was rightly specific about the need for developers to become educated on the ethical issues, incorporate safeguards into code, and advise those seeking to regulate the technology. Here, I add context to and expand upon several points the panelists made.

Security and Ethics First for Generative AI

Tackling bias, privacy, and safety in AI goes by several names, such as Responsible AI and Trustworthy AI. Major vendors, including NVIDIA, IBM, and Microsoft, provide education and guidance on the topic, and all of it is worth reading.
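
To make one of these concerns concrete, consider measuring bias in a model's decisions. The sketch below computes a demographic parity difference, one common fairness metric: the gap between the positive-outcome rates of the most- and least-favored groups. It is a minimal illustration with hypothetical data, not any vendor's method.

    def demographic_parity_difference(predictions, groups):
        """Gap between the highest and lowest positive-outcome rates
        across groups. 0.0 means every group is selected at the same rate."""
        rates = {}
        for g in set(groups):
            selected = [p for p, grp in zip(predictions, groups) if grp == g]
            rates[g] = sum(selected) / len(selected)
        return max(rates.values()) - min(rates.values())

    # Hypothetical loan-approval decisions (1 = approve) for two groups.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(demographic_parity_difference(preds, groups))  # 0.5: group A approved far more often

A large gap does not prove unfairness on its own, but it flags where a team should look harder.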

Fifteen or 20 years ago, we taught software developers that security could not be an afterthought in writing code. It was not someone else's problem but theirs. After years of education, training, and campaigning, this idea is well ingrained in the best software teams. We must do the same for AI regarding ethics and legal issues.
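
In code, "the same" often means wrapping every model call in input and output checks, just as input validation became a security reflex. The sketch below is a hypothetical, minimal version of that pattern: `violates_policy` stands in for a real moderation classifier, and the model client is passed in as a plain function. NVIDIA's open-source NeMo Guardrails toolkit is one production-grade implementation of this idea.

    BLOCKED_MESSAGE = "I can't help with that request."

    def violates_policy(text: str) -> bool:
        # Placeholder: in practice, call a trained moderation classifier
        # or a rules engine, not a keyword list.
        terms = ("credit card number", "home address")
        return any(term in text.lower() for term in terms)

    def guarded_generate(prompt: str, generate) -> str:
        """Wrap a model call with pre- and post-generation safety checks."""
        if violates_policy(prompt):        # screen the input
            return BLOCKED_MESSAGE
        response = generate(prompt)        # call the underlying model
        if violates_policy(response):      # screen the output, too
            return BLOCKED_MESSAGE
        return response

The point is architectural: the checks live in the calling path, so no one can ship a feature that reaches the model without them.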

Legislation and Regulation

On March 13, the European Parliament adopted the Artificial Intelligence Act. The law bans several kinds of applications, including any that "manipulates human behavior or exploits people’s vulnerabilities." It includes several exemptions for law enforcement, often a contentious topic, and is explicit about the rules for using AI in high-risk areas, including health, privacy, infrastructure, and democratic processes such as elections. The legislation also imposes transparency requirements, including marking content as having been generated or manipulated by AI.
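
The Act does not prescribe a technical format for such marking (the C2PA content-provenance standard is one real-world candidate), but the basic idea is simple: generated content ships with a machine-readable disclosure. Here is a minimal, hypothetical sketch; the field names are illustrative, not a standard.

    import json
    from datetime import datetime, timezone

    def label_ai_content(content: str, model_name: str) -> dict:
        """Attach a simple provenance record to AI-generated content."""
        return {
            "content": content,
            "provenance": {
                "ai_generated": True,               # the disclosure itself
                "model": model_name,                # which system produced it
                "generated_at": datetime.now(timezone.utc).isoformat(),
            },
        }

    record = label_ai_content("A synthetic product description...", "example-llm-7b")
    print(json.dumps(record, indent=2))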

Just as the General Data Protection Regulation (GDPR) had global ramifications for collecting and using personal data, the Artificial Intelligence Act will strongly affect those using closed- or open-source AI software in the EU or with data gathered in the EU. I advise generative AI developers and vendors to get ahead of this now and distinguish themselves in the market by being among the first to comply. Expect similar laws to be enacted in other regions and countries.

A discussion within the panel highlighted the role of developers in providing mitigation and protection options to legislators and regulators. Though the speakers phrased this in a way that implied those making the rules are not always technically savvy, in my experience, they and their staff are well versed in the issues. Nevertheless, the point is a good one. People see the software’s results and output but are unaware of the underlying architecture and design choices. Developers must use their organizations’ channels to legislators and regulators to provide thoughtful, articulate alternatives for ameliorating ethical, privacy, and safety problems.

To Open Source or Not for Generative AI

On March 17, Elon Musk’s xAI released several components of its Grok-1 large language model (LLM) on GitHub under the open-source Apache License 2.0. Whether he did this for competitive or other reasons, Grok-1 joins Meta’s Llama 2, Salesforce’s XGen-7B, Mistral’s 7B and 8x7B, and other open-source LLMs available for research and further development.
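
In practice, "available for research and further development" means the weights can be pulled and run locally. Here is a hedged sketch using the Hugging Face transformers library, one common way to do this; the model ID points at Mistral's openly licensed 7B weights, and the precision and device settings assume a GPU with roughly 16 GB of memory.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-v0.1"  # openly licensed weights on the Hugging Face Hub

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to fit in GPU memory
        device_map="auto",          # let the accelerate library place the layers
    )

    prompt = "Open-source AI models let researchers"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

That such a sketch fits in 15 lines is exactly why the questions below matter: the barrier to running these models is low.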

Any time someone open sources a new class of software, the usual questions come up. Is it more or less secure than closed source? Can I get support for it, or will my use of it be DIY? What happens if bad actors get the source?

These are all pertinent questions for open-source generative AI models, and we have years of code and experience to help answer them, especially regarding security. We must now build the same body of knowledge for the ethical, privacy, and legal concerns of open-source AI code. We will learn more if researchers and developers can access good code, find solutions, and share them. That sounds simplistic, but it is true: code talks, and we learn by doing. History shows that a good mix of open and closed source strikes a healthy balance for innovation.

Key Takeaway: Put Ethics, Privacy, and Responsibility First While Developing and Using Generative AI Models

The panel was a good and thoughtful discussion. It was not the first on ethics and AI, and it will not be the last. Collaboration among all stakeholders in the generative AI ecosystem is mandatory to find workable and sufficient solutions to the speakers’ concerns.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author is a former IBM employee and holds an equity position in the company. The author does not hold any equity positions with any other company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Quantinuum Announces Breakthroughs for Quantum Computing Scale-Up

Quantum in Context: Pasqal Is the Latest to Publish a Roadmap

Quantum in Context: Rigetti Q4 2023 Earnings and Other Numbers

Author Information

Dr. Bob Sutor

Dr. Bob Sutor is an expert in quantum technologies with 40+ years of experience. He is the accomplished author of the quantum computing book Dancing with Qubits, Second Edition. Bob is dedicated to evolving quantum to help solve society's critical computational problems.
