AI in Context: Remarks on the NVIDIA GTC 2024 GenAI and Ethics Panel

The News: On Monday, March 18, I virtually attended the Beyond the Output: Navigating the Ethical Challenges of Generative AI panel at the NVIDIA GTC 2024 conference. The panel discussed ethics, bias, intellectual property (IP) protection, indemnification, open source, and the role of developers.

The panel:

  • Iain Cunningham, Vice President & Deputy General Counsel, NVIDIA
  • Brian Green, Director of Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University
  • Gina Moape, Lead, Community Engagement, Mozilla Foundation
  • Nikki Pope, Senior Director, AI and Legal Ethics, NVIDIA

Visit the NVIDIA website for a replay of the panel session.

Analyst Take: There is no question that the output of generative AI products can amaze and surprise us, though the latter might not always be in a good way. Bias is a well-known concern for machine learning (ML) in general. Other ethical and legal concerns have come to the forefront since the introduction of closed- and open-source generative AI models and delivery platforms. This panel offered good guidance for users and was rightfully specific about developers becoming educated on the ethical issues, incorporating safeguards into code, and advising those seeking to regulate the technology. Here, I add context to and expand upon several points they made.

Security and Ethics First for Generative AI

Tackling bias, privacy, and safety in AI goes under several names, such as Responsible AI and Trustworthy AI. Major vendors, including NVIDIA, IBM, and Microsoft, publish education and guidance on the topic, all of it worth reading.

Fifteen or twenty years ago, we taught software developers that security could not be an afterthought in writing code. It was not someone else's problem but their own. After years of education, training, and campaigning, this idea is well ingrained in the best software teams. We must do the same for AI regarding ethics and legal issues.

Legislation and Regulation

On March 13, the European Parliament adopted the Artificial Intelligence Act. This law bans several kinds of applications, including any that "manipulates human behavior or exploits people's vulnerabilities." It contains several exemptions for law enforcement, often a contentious topic, and is explicit about the rules for using AI in high-risk areas, including health, privacy, infrastructure, and democratic processes such as elections. The legislation also imposes transparency requirements, including the marking of content that has been manipulated by AI.

Just as the General Data Protection Regulation (GDPR) had global ramifications regarding collecting and using personal data, the Artificial Intelligence Act will strongly affect those using closed- or open-source AI software in the EU or with data gathered in the EU. I advise generative AI developers and vendors to get ahead of this now and distinguish yourselves in the market by being among the first to comply. Expect to see similar laws enacted in other regions and countries.

A discussion within the panel highlighted the role of developers in providing mitigation and protection options to legislators and regulators. Though the speakers phrased this in a way that implied that those making the rules are not always technically savvy, in my experience, they and their staff are well versed in the issues. Nevertheless, the point is a good one. People see the software's results and output but are unaware of the underlying architecture and design choices. Developers must use their organizations' channels to legislators and regulators to provide thoughtful and articulate alternatives for ameliorating ethical, privacy, and safety problems.

To Open Source or Not for Generative AI

On March 17, Elon Musk’s xAI released several components of its Grok-1 large language model (LLM) on GitHub under the open-source Apache License 2.0. Whether he did this for competitive or other reasons, Grok-1 joins Meta’s Llama 2, Salesforce’s XGen-7B, Mistral’s 7B and 8x7B, and other open-source LLMs available for research and further development.

Any time someone open sources a new class of software, the usual questions come up. Is it more or less secure than closed source? Can I get support for it, or will my use of it be DIY? What happens if bad actors get the source?

These are all pertinent questions for open-source generative AI models. They are also questions for which years of code and experience help provide answers, especially regarding security. We must now do the same for ethical, privacy, and legal concerns with open-source AI code. We will know more if researchers and developers can access good code, find solutions, and share them. That sounds simplistic, but it is true. Code talks, and we need to learn from doing. History shows that a mix of open- and closed-source software strikes a healthy balance for innovation.

Key Takeaway: Put Ethics, Privacy, and Responsibility First While Developing and Using Generative AI Models

The panel was a good and thoughtful discussion. It was not the first on ethics and AI, and it will not be the last. Collaboration among all stakeholders in the generative AI ecosystem is mandatory to find workable and sufficient solutions to the speakers' concerns.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author is a former IBM employee and holds an equity position in the company. The author does not hold any equity positions with any other company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Quantinuum Announces Breakthroughs for Quantum Computing Scale-Up

Quantum in Context: Pasqal Is the Latest to Publish a Roadmap

Quantum in Context: Rigetti Q4 2023 Earnings and Other Numbers

Author Information

Dr. Bob Sutor

Dr. Bob Sutor is an expert in quantum technologies with 40+ years of experience. He is the accomplished author of the quantum computing book Dancing with Qubits, Second Edition. Bob is dedicated to evolving quantum to help solve society's critical computational problems.
