AI in Context: Remarks on the NVIDIA GTC 2024 GenAI and Ethics Panel

The News: On Monday, March 18, I virtually attended the Beyond the Output: Navigating the Ethical Challenges of Generative AI panel at the NVIDIA GTC 2024 conference. The panel discussed ethics, bias, intellectual property (IP) protection, indemnification, open source, and the role of developers.

The panel:

  • Iain Cunningham, Vice President & Deputy General Counsel, NVIDIA
  • Brian Green, Director of Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University
  • Gina Moape, Lead, Community Engagement, Mozilla Foundation
  • Nikki Pope, Senior Director, AI and Legal Ethics, NVIDIA

Visit the NVIDIA website for a replay of the panel session.

Analyst Take: There is no question that the output of generative AI products can amaze and surprise us, though the latter might not always be in a good way. Bias is a well-known concern for machine learning (ML) in general. Other ethical and legal concerns have come to the forefront since the introduction of closed- and open-source generative AI models and delivery platforms. This panel offered good guidance for users and was rightfully specific about developers becoming educated on the ethical issues, incorporating safeguards into code, and advising those seeking to regulate the technology. Here, I add context to and expand upon several points they made.

Security and Ethics First for Generative AI

Tackling areas of bias, privacy, and safety for AI goes under several names, such as Responsible AI and Trustworthy AI. Major vendors, including NVIDIA, IBM, and Microsoft, provide education and guidance on the topic, all of it worth reading.

Fifteen or 20 years ago, we taught software developers that security could not be an afterthought in writing code: it was not someone else’s problem but theirs. After years of education, training, and campaigning, this idea is well-ingrained in the best software teams. We must do the same for AI regarding ethics and legal issues.

Legislation and Regulation

On March 13, the European Parliament adopted the Artificial Intelligence Act. This law bans several kinds of applications, including any that “manipulates human behavior or exploits people’s vulnerabilities.” It has several exemptions for law enforcement, often a contentious topic, and is explicit regarding the rules for using AI in high-risk areas, including health, privacy, infrastructure, and democratic processes such as elections. The legislation also imposes transparency requirements, including marking content that has been generated or manipulated by AI.
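The transparency requirement is concrete enough to sketch in code. The following is a minimal, hypothetical illustration of attaching a machine-readable AI disclosure to generated content. It is not a compliance implementation, and the field names are assumptions for illustration rather than part of any standard; real-world provenance metadata would follow an emerging specification.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIDisclosure:
    """Hypothetical disclosure record; field names are illustrative only."""
    generated_by_ai: bool
    model_name: str
    modified_by_ai: bool

def wrap_output(text: str, model_name: str, modified: bool = False) -> str:
    """Bundle generated text with a machine-readable AI disclosure."""
    disclosure = AIDisclosure(
        generated_by_ai=True,
        model_name=model_name,
        modified_by_ai=modified,
    )
    # Serialize content and disclosure together so downstream consumers
    # can detect and display the AI-generated marking.
    return json.dumps({"content": text, "disclosure": asdict(disclosure)})

record = json.loads(wrap_output("Sample summary.", model_name="example-llm"))
assert record["disclosure"]["generated_by_ai"] is True
```

The design point, not the specific fields, is what matters: disclosure travels with the content itself, so the marking survives whatever pipeline the content later passes through.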

Just as the General Data Protection Regulation (GDPR) had global ramifications regarding collecting and using personal data, the Artificial Intelligence Act will strongly affect those using closed- or open-source AI software in the EU or with data gathered in the EU. I advise generative AI developers and vendors to get ahead of this now and distinguish yourselves in the market by being among the first to comply. Expect to see similar laws enacted in other regions and countries.

A discussion within the panel highlighted the role of developers in providing mitigation and protection options to legislators and regulators. Though the speakers phrased this in a way that implied that those making the rules are not always technically savvy, in my experience, they and their staff are well-versed in the issues. Nevertheless, the point is a good one. People see the software’s output but are unaware of the architecture and design choices behind it. Developers must use their organizations’ channels to legislators and regulators to provide thoughtful and articulate alternatives for ameliorating ethical, privacy, and safety problems.

To Open Source or Not for Generative AI

On March 17, Elon Musk’s xAI released several components of its Grok-1 large language model (LLM) on GitHub under the open-source Apache License 2.0. Whether he did this for competitive or other reasons, Grok-1 joins Meta’s Llama 2, Salesforce’s XGen-7B, Mistral’s 7B and 8x7B, and other openly available LLMs for research and further development.

Any time someone open sources a new class of software, the usual questions come up. Is it more or less secure than closed source? Can I get support for it or will my use of it be DIY? What happens if bad people get the source?

These are all pertinent questions for open-source generative AI models. They are also questions that years of code and experience help answer, especially regarding security. We must now build the same body of experience for the ethical, privacy, and legal concerns of open-source AI code. We will learn more if researchers and developers can access good code, find solutions, and share them. That sounds simplistic, but it is true. Code talks, and we learn by doing. History shows that a mix of open- and closed-source software strikes a good balance for innovation.

Key Takeaway: Put Ethics, Privacy, and Responsibility First While Developing and Using Generative AI Models

The panel was a good and thoughtful discussion. It was not the first on ethics and AI, and it will not be the last. Collaboration among all stakeholders in the generative AI ecosystem is mandatory to find workable and sufficient solutions to the speakers’ concerns.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author is a former IBM employee and holds an equity position in the company. The author does not hold any equity positions with any other company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Quantinuum Announces Breakthroughs for Quantum Computing Scale-Up

Quantum in Context: Pasqal Is the Latest to Publish a Roadmap

Quantum in Context: Rigetti Q4 2023 Earnings and Other Numbers

Author Information

Dr. Bob Sutor

Dr. Bob Sutor is a Consulting Analyst for Futurum and an expert in quantum technologies with 40+ years of experience. He is dedicated to evolving quantum computing to help solve society’s critical computational problems. For Futurum, he helps clients understand sophisticated technologies and how to make the best use of them for success in their organizations and industries.

He is the author of Dancing with Qubits, a book on quantum computing published in 2019, with the Second Edition released in March 2024, and of the 2021 book Dancing with Python, an introduction to Python coding for classical and quantum computing. Areas in which he has worked include quantum computing, AI, blockchain, mathematics and mathematical software, Linux, open source, standards management, product management and marketing, computer algebra, and web standards.
