
OECD Released New AI Principles: How Will They Impact the Ethics of AI?


Do we need global standards for the humane and ethical development of artificial intelligence (AI)? Members of the Organization for Economic Cooperation and Development (OECD) think so. The group, made up of 36 member nations, recently adopted its own list of OECD AI principles designed to guide multinational cooperation on the responsible stewardship of AI in the coming years. While the guidelines are not legally binding, they make a major statement about technology and the growing global awareness that AI is far too powerful to leave unregulated.

The following are the five main OECD AI principles agreed upon by OECD members, as well as Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania.

  • OECD AI Principle 1: AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being. For years, we’ve been discussing the ethics of AI here at Futurum. One of the biggest questions has always been how AI will impact the human workforce. The first principle sets a clear precedent: as technology developers and world leaders, we must consider the human impact of AI development before we consider the business opportunity it could bring. As we’ve discussed before, AI has the power to cause unemployment and to widen the gap between haves and have-nots, especially on a global scale. That’s too important an issue to go unspoken.
  • OECD AI Principle 2: AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society. We already know AI is not bias-free. Could it ever be? This principle seeks to at least illuminate the risk of governments and other entities using AI to infringe upon human rights through discriminatory processes. It encourages everyone to safeguard people from that possibility, while also noting that AI should never be allowed to run rampant in society. Humans should always be able to intervene when necessary.
  • OECD AI Principle 3: There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them. This point goes back to the issues of transparency, privacy, and security. Does the public really understand the information being gathered on their behalf? Do they know how the algorithms will be used? Do they have any right to opt out of data being gathered? Safe AI development cannot exist in an environment where the public doesn’t understand its implications.
  • OECD AI Principle 4: AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed. As we’ve discussed before, tech development isn’t a one-time thing, and neither is security. When we collect data and teach our systems to do something, we need to remember that they will keep gathering and acting until we tell them otherwise. What are our plans to keep the data safe in the meantime? What are our consistent standards for scrubbing or dumping data? What are our plans for upgrading systems to ensure they aren’t compromised? All of these things need to be part of any smart AI discussion.
  • OECD AI Principle 5: Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles. This principle goes one step further, pinning responsibility for AI development on those creating it. After all, we can’t just develop self-learning technology and throw up our hands when it goes rogue. Those creating it need to know they’ll be held accountable for both the good and the damage it may cause over time.

As I noted above, the OECD AI principles are not legally binding. There is no way to guarantee that member nations, or the technologists operating within them, will adhere to the rules governing AI development. Still, it’s encouraging to see so many nations coming together to recognize that ethical development of AI needs to be a priority now, before the technology gets too far out of the gate. After all, the changes we see from AI will most likely be irreversible, be it in terms of job creation and elimination, automation, or robotic development. I commend the OECD team for compiling such a thoughtful list of directives to guide AI development into 2020.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, Daniel’s most recent book is “Human/Machine.” He is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

