What Is the Difference Between AI Ethics, Responsible AI, and Trustworthy AI? We Ask Our Responsible AI Leads

AI is everywhere: driving cars, diagnosing illnesses, making credit decisions, ranking job candidates, identifying faces, assessing parolees. The headlines these applications generate should be enough to convince you that AI is far from ethical. Nonetheless, terms like “ethical AI” persist, alongside equally problematic terms like “trustworthy AI.”

Why are these phrases so thorny? After all, they’re just words; how dangerous can they be? Well, to state the obvious: words matter. If we’re ever to achieve a future where AI is worthy of our trust, we at least need to agree on a common vocabulary.

To explain the differences between these terms and why they matter, we spoke to the co-chairs of the AI Ethics Advisory Board at the Institute for Experiential AI (EAI): Cansu Canca and Ricardo Baeza-Yates.

The Problem With “Trustworthy AI”

For Ricardo Baeza-Yates, who is also the director of research at EAI, it all comes down to a fundamental distinction between human and computational abilities. Artificial intelligence is not human, so we should avoid terms like “trustworthy AI” that not only humanize AI but also imply a level of dependability that simply does not exist.

“We know that AI does not work all the time, so asking users to trust it is misleading,” Baeza-Yates explains. “If 100 years ago someone wanted to sell me an airplane ticket calling it ‘trustworthy aviation,’ I would have been worried, because if something works, why do we need to add ‘trustworthy’ to it? That is the difference between engineering and alchemy.”

Cansu Canca, ethics lead at EAI, adds that “trustworthy AI” directs attention to the end goal of creating trust in the user. In doing so, it sidesteps the hard work of integrating ethics into the development and deployment of AI systems and shifts the burden onto the user.

“Trust is really the outcome of what we want to do,” she says. “Our focus should be on the system itself, and not on the feeling it eventually—hopefully—evokes.”

The Problem With “Ethical AI”

“Ethical AI” faces a similar problem: it implies a degree of moral agency. Humans intend certain ethical outcomes. They can make value judgments and reorient their behavior, abilities that do not translate to the world of algorithms.

“AI can have an ethical outcome or an unethical outcome,” Canca says. “It can incorporate value judgments, but it’s not an ethical being with intent. It’s not a moral agent.”

Ethics, in that sense, is strictly the domain of human beings. Challenges emerge when people design systems with autonomous decision-making capabilities, because those systems are only as ethical as the intentions of the people who create them.

Responsible AI

Baeza-Yates and Canca both prefer the term “responsible AI” while acknowledging that it, too, is imperfect. “Responsibility is also a human trait, but the law has extended the concept of responsibility to institutions, so we use it in that sense,” says Baeza-Yates.

“In a way, ‘responsible AI’ is shorthand for responsible development and use of AI, or responsible AI innovation,” Canca adds. “The phrase is still open to the interpretation that AI itself will have some responsibility, which is certainly not what we mean. We are trying to emphasize that responsible AI is about creating structures and roles for developing AI responsibly, and that responsibility will always lie in these structures and the people who design the systems.”

Canca and Baeza-Yates both see AI ethics as a component of responsible AI. Within that subdomain we find the perennial ethical question: “What is the right thing to do?” In the larger domain around it, we find room for innovation: an exploratory, interdisciplinary space for designers, developers, investors, and stakeholders that ultimately (hopefully) points toward an ethical core.

“We philosophers collaborate with developers and designers to find the ethical risks and mitigate them as they develop AI systems and design AI products,” Canca says.

Such is the mandate of the AI Ethics Advisory Board at EAI, an on-demand, multidisciplinary panel of AI experts from industry, academia, and government. With philosophers and practitioners alike, the board helps organizations anticipate ethical perils without falling into the trap of thinking AI itself could ever have moral agency.

Find out how the AI Ethics Advisory Board helps organizations address difficult ethical questions during AI planning, development, and deployment.


Watch Canca and Baeza-Yates talk more about responsible AI, AI ethics, and trustworthy AI in this fireside chat.

Are you looking to start an AI or data project? Do you need expertise in responsible AI? Contact Us. And stay on top of fast-moving AI trends: subscribe to our monthly newsletter!

FAQs

Q: What is the difference between AI ethics, responsible AI, and trustworthy AI?
A: The three terms emphasize different things. AI ethics concerns the ethical questions raised by the design and use of AI systems, and is best understood as a component of responsible AI. Responsible AI refers to the responsible development and use of AI, locating responsibility in the structures, institutions, and people who build and deploy AI systems. “Trustworthy AI” emphasizes building user trust in AI systems; the experts interviewed here caution that the term humanizes AI and shifts the burden of ethics onto the user.

Q: Why is it important to have a common vocabulary for discussing AI ethics?
A: Having a common vocabulary for discussing AI ethics is essential for facilitating meaningful and productive conversations about the ethical implications of AI. It allows stakeholders to communicate effectively, understand each other’s perspectives, and work towards shared goals. Without a common vocabulary, discussions about AI ethics can become confusing and hinder progress in developing responsible and trustworthy AI systems.

Q: Who is responsible for ensuring ethical AI development and use?
A: The responsibility for ethical AI development and use lies with a combination of individuals and organizations involved in the design, development, deployment, and regulation of AI systems. This includes researchers, developers, designers, policymakers, and regulatory bodies. Responsible AI requires a collaborative effort from all these stakeholders to establish ethical guidelines, address potential risks, and ensure that AI technology is used in a way that aligns with societal values and norms.

Conclusion

The terms AI ethics, responsible AI, and trustworthy AI are often used interchangeably, but they are not equivalent. “Trustworthy AI” puts the emphasis on the feeling a system evokes rather than on the system itself, “ethical AI” wrongly implies that AI can be a moral agent, and “responsible AI,” while imperfect, rightly places responsibility on the people and institutions that develop and deploy AI. By understanding these differences and working toward a common vocabulary, we can foster a future where AI is developed and used ethically and responsibly.