Ethics

Ethical Principles Everywhere

One week into my journey of exploring ethical AI in practice, I know I have only scratched the surface of a potentially high-impact domain with tremendous possibilities. AI is here to stay – the genie is out of the bottle. However, we are in the very early stages of defining how humans and machines will collaborate within an ethical framework. While there are significantly more conversations about ethical AI happening today than ever before, there are few examples of applying ethical AI principles in practice. I believe this is a critical point in time, where thoughtful adoption of AI with ethical constraints in mind will determine whether AI is successfully integrated into our future.

I began my research by trying to understand the principles of ethics that have been defined for AI. Instead of finding a single set of guidelines/principles within a proposed ethical AI framework, I discovered hundreds of frameworks, each with its own unique flavor of principles. Harvard’s Berkman Klein Center does an excellent job of providing a consolidated snapshot of the 36 most popular frameworks and outlining eight key themes that surfaced from their analysis. To give you a sense of the breadth of themes covered, I am listing them here: Privacy, Accountability, Safety and Security, Transparency and Explainability, Fairness and Non-Discrimination, Human Control of Technology, Professional Responsibility, and Promotion of Human Values. The powerful visualization is a handy guide for a high-level overview of the frameworks, and I refer to it often. The primary downside of this report is that it is a snapshot in time (as of January 2020) and hence already outdated. For a comprehensive inventory of frameworks and principles, I now use the database from AlgorithmWatch, which provides a searchable interface (albeit with limited filters) to quickly find a specific framework.

In addition to the multiple frameworks that seem to be proposed weekly, there are several discussion groups that review and comment on the frameworks themselves. I had the opportunity to join the Montreal AI Ethics Institute’s meetups, where Mozilla Foundation’s RFC for Trustworthy AI and the Santa Clara Principles for Content Moderation were brought up for discussion. I really enjoyed participating in a diverse, multi-stakeholder discussion group of individuals who are all passionate about responsible adoption of AI – including lawyers, philosophers, social scientists, human rights activists, data scientists, researchers, designers, academics, and more.

As I learn more about the frameworks and principles, the question I will be asking time and again is – how are these being put into practice?