
AI in Context: Remarks on the NVIDIA GTC 2024 GenAI and Ethics Panel

The News: On Monday, March 18, I virtually attended the “Beyond the Output: Navigating the Ethical Challenges of Generative AI” panel at the NVIDIA GTC 2024 conference. The panel discussed ethics, bias, intellectual property (IP) protection, indemnification, open source, and the role of developers.

The panel:

  • Iain Cunningham, Vice President & Deputy General Counsel, NVIDIA
  • Brian Green, Director of Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University
  • Gina Moape, Lead, Community Engagement, Mozilla Foundation
  • Nikki Pope, Senior Director, AI and Legal Ethics, NVIDIA

Visit the NVIDIA website for a replay of the panel session.

Analyst Take: There is no question that the output of generative AI products can amaze and surprise us, though the latter might not always be in a good way. Bias is a well-known concern for machine learning (ML) in general. Other ethical and legal concerns have come to the forefront since the introduction of closed- and open-source generative AI models and delivery platforms. This panel offered good guidance for users and was rightfully specific about developers becoming educated on the ethical issues, incorporating safeguards into code, and advising those seeking to regulate the technology. Here, I add context to and expand upon several points they made.

Security and Ethics First for Generative AI

Tackling bias, privacy, and safety for AI goes by several names, such as Responsible AI and Trustworthy AI. Major vendors, including NVIDIA, IBM, and Microsoft, provide education and guidance on the topic, and their materials are all worth reading.
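
The vendors’ guidance becomes concrete when bias is something you can measure in code. As a minimal illustrative sketch (not any particular vendor’s API), the function below computes a simple disparate-impact ratio, comparing favorable-outcome rates between the least- and most-favored groups; the group names and data are hypothetical:

```python
from collections import defaultdict

def disparate_impact_ratio(outcomes, groups):
    """Ratio of favorable-outcome rates between the least- and
    most-favored groups. Values near 1.0 suggest parity; a common
    informal rule of thumb flags ratios below 0.8 for review."""
    totals = defaultdict(int)     # decisions seen per group
    favorable = defaultdict(int)  # favorable (1) decisions per group
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        favorable[group] += outcome
    rates = {g: favorable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions: group "a" approved 3 of 4, group "b" 1 of 2.
ratio = disparate_impact_ratio([1, 1, 1, 0, 1, 0],
                               ["a", "a", "a", "a", "b", "b"])
print(round(ratio, 3))  # 0.667
```

A check this simple is no substitute for a full fairness review, but it illustrates how bias can become a tested, measurable property of a pipeline rather than an afterthought.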

Fifteen or 20 years ago, we educated software developers that security was not an afterthought in writing code. It was not someone else’s problem but theirs. After years of education, training, and campaigning, this idea is well ingrained in the best software teams. We must do the same for AI regarding ethics and legal issues.

Legislation and Regulation

On March 13, the European Parliament adopted the Artificial Intelligence Act. The law bans several kinds of applications, including any that “manipulates human behavior or exploits people’s vulnerabilities.” It carves out several exemptions for law enforcement, often a contentious topic, and is explicit about the rules for using AI in high-risk areas, including health, privacy, infrastructure, and democratic processes such as elections. The legislation also imposes transparency requirements, including the marking of content that has been manipulated by AI.
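
The transparency and marking provisions point to a practical pattern developers can adopt now: attach machine-readable provenance to generated content. The sketch below is my own illustration; the Act does not prescribe these field names or this format:

```python
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str) -> str:
    """Wrap generated text in a machine-readable provenance record.
    The schema is illustrative only; the AI Act mandates disclosure
    but not these specific field names."""
    record = {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

labeled = label_ai_output("A generated summary.", "example-llm-1")
parsed = json.loads(labeled)
print(parsed["provenance"]["ai_generated"])  # True
```

Standards efforts such as C2PA content credentials take this idea much further, but even a lightweight label like this makes downstream disclosure far easier than retrofitting it later.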

Just as the General Data Protection Regulation (GDPR) had global ramifications regarding collecting and using personal data, the Artificial Intelligence Act will strongly affect those using closed- or open-source AI software in the EU or with data gathered in the EU. I advise generative AI developers and vendors to get ahead of this now and distinguish themselves in the market by being among the first to comply. Expect to see similar laws enacted in other regions and countries.

A discussion within the panel highlighted the role of developers in providing mitigation and protection options to legislators and regulators. Though the speakers phrased this in a way that implied that those making the rules are not always technically savvy, in my experience, they and their staff are well-versed on the issues. Nevertheless, the point is a good one. People see the software’s results and output and are unaware of the architecture and design choices. Developers must use their organizations’ channels to legislators and regulators to provide thoughtful and articulate alternatives for ameliorating ethical, privacy, and safety problems.

To Open Source or Not for Generative AI

On March 17, Elon Musk’s xAI released several components of its Grok-1 large language model (LLM) on GitHub under the open-source Apache License 2.0. Whether he did this for competitive or other reasons, Grok-1 joins Meta’s Llama 2, Salesforce’s XGen-7B, Mistral’s 7B and 8x7B, and other open-source LLMs available for research and further development.

Any time someone open-sources a new class of software, the usual questions come up. Is it more or less secure than closed source? Can I get support for it, or will my use of it be DIY? What happens if bad actors get the source?

These are all pertinent questions for open-source generative AI models. They are also questions that years of code and experience help answer, especially regarding security. We must now do the same for the ethical, privacy, and legal concerns of open-source AI code. We will know more if researchers and developers can access good code, find solutions, and share them. That sounds simplistic, but it is true. Code talks, and we need to learn by doing. History shows that a mix of open- and closed-source software strikes a healthy balance for innovation.

Key Takeaway: Put Ethics, Privacy, and Responsibility First While Developing and Using Generative AI Models

The panel was a good and thoughtful discussion. It was not the first on ethics and AI and it will not be the last. Collaboration among all stakeholders in the generative AI ecosystem is mandatory to find workable and sufficient solutions to the speakers’ concerns.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author is a former IBM employee and holds an equity position in the company. The author does not hold any equity positions with any other company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Author Information

Dr. Bob Sutor

Dr. Bob Sutor has been a technical leader and executive in the IT industry for over 40 years. Bob’s industry role is to advance quantum and AI technologies by building strong business, partner, technical, and educational ecosystems. The singular goal is to evolve quantum and AI to help solve some of the critical computational problems facing society today. Bob is widely quoted in the press, delivers conference keynotes, and works with industry analysts and investors to accelerate understanding and adoption of quantum technologies.

Bob is the Vice President and Practice Lead for Emerging Technologies at The Futurum Group. He helps clients understand sophisticated technologies in order to make the best use of them for success in their organizations and industries. He is also an Adjunct Professor in the Department of Computer Science and Engineering at the University at Buffalo, New York, USA.

More than two decades of Bob’s career were spent in IBM Research in New York. During his time there, he worked on or led efforts in symbolic mathematical computation, optimization, AI, blockchain, and quantum computing. He was also an executive on the software side of the IBM business in areas including middleware, software on Linux, mobile, open source, and emerging industry standards. He was the Vice President of Corporate Development and, later, Chief Quantum Advocate, at Infleqtion, a quantum computing and quantum sensing company based in Boulder, Colorado, USA.

Bob is a theoretical mathematician by training, has a Ph.D. from Princeton University, and an undergraduate degree from Harvard College.

He’s the author of a book about quantum computing called Dancing with Qubits, which was published in 2019, with the Second Edition released in March 2024. He is also the author of the 2021 book Dancing with Python, an introduction to Python coding for classical and quantum computing. Areas in which he’s worked: quantum computing, AI, blockchain, mathematics and mathematical software, Linux, open source, standards management, product management and marketing, computer algebra, and web standards.
