Driving Accountability for AI Applications

When referring to Responsible AI, Ethical AI, or Trustworthy AI, I often remind myself that it is not the technology that needs to be responsible, ethical or trustworthy. The humans who are defining, designing, developing and deploying the AI technology are ultimately accountable for anticipating and addressing ethical risks from the use of the technology.

So, in today’s business environment, how can humans be held accountable for AI decisions?

For the highest levels of leadership, Boards of Directors, the World Economic Forum released a handy AI Toolkit for Board Members in January 2020. The toolkit helps educate board members to ask the right questions and establish appropriate governance around the impact and potential of AI in their company’s strategy.

For business leaders who want to create a culture of ethics in their organization, the IEEE’s Ethically Aligned Design (EAD) for Business is a great primer to get started. For a deeper lens into ethical design, development and implementation, check out the authoritative, complete EAD document (version 2 is open for public comment now). To dive into the ethical issues that are relevant to specific technologies and application areas, I recommend the latest SIENNA report, released in July 2020.

For teams that are building AI products, it is becoming increasingly easy to incorporate checklists into the product development life cycle. AI FactSheets from IBM Research address governance through transparency by capturing detailed information about models from a variety of stakeholders. The brand-new FactSheets website (launched in July 2020) provides templates, examples and education on building factsheets by application. The Ethics and Algorithms Toolkit is a resource primarily intended for governments and cities to evaluate and manage risk from algorithms, but it can easily be extended to business applications. If you are a data scientist who wants to do your part to incorporate ethical checklists during development, check out Deon, which is simple and easy to integrate (a quick sketch follows below).
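As a minimal sketch of how that integration can look (the commands reflect Deon’s documented command-line usage at the time of writing, so check the project for current flags), Deon generates a markdown checklist that you can commit alongside your code and revisit at each stage of development:

    pip install deon
    # Generate the default data ethics checklist and commit it with the project
    deon --output ETHICS.md

Each section of the checklist (data collection, data storage, analysis, modeling, deployment) then becomes a concrete artifact to review in pull requests rather than an abstract principle.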

I also investigated options for applying internal auditing frameworks to existing processes, and two approaches surfaced in my research. The Z-Inspection process from the University of Frankfurt and the SMACTR framework from Google are recently published approaches that apply audit governance across the end-to-end development process. External consulting organizations such as PwC, Accenture, Deloitte and others also offer services to help drive accountability within your organization.

We can use the above approaches to drive accountability for the AI applications we are developing today. However, questions are being raised as to whether we need to rethink the purpose of AI in the longer term, so that it better serves humanity. In a McKinsey-sponsored interview, Stuart Russell (an early AI pioneer and thought leader, and author of Human Compatible) shares an interesting perspective on the crux of the problem with “building machines that do the right thing to meet their objectives”. He concludes that it is impossible to specify these objectives completely, correctly, consistently and fairly all the time (the objective function is never fully defined and is always dynamic). He recommends instead rethinking the purpose of AI as “building machines that benefit humans”, with game theory as a potential solution approach. Tim O’Reilly reinforces this notion in his insightful essay for the Rockefeller Foundation’s report “AI+1: Shaping the Future”, concluding that the governance of AI is not about managing a stand-alone new technology but about rethinking deeply how we govern our companies, markets and society.
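Russell’s specification problem is easy to see in a toy sketch (my own illustration with made-up numbers, not an example from the interview): an agent that faithfully maximizes a proxy objective, such as clicks, can land far from the outcome we actually value.

    import random
    random.seed(0)

    # Each candidate item has an observable proxy score ("clicks") and a hidden
    # true benefit to the user that the deployed objective never sees.
    items = [{"clicks": random.random(), "benefit": random.random()}
             for _ in range(1000)]

    # The agent faithfully optimizes the objective it was given...
    by_proxy = max(items, key=lambda item: item["clicks"])
    # ...while the objective we actually wanted would choose differently.
    by_benefit = max(items, key=lambda item: item["benefit"])

    print(f"benefit delivered by the proxy objective: {by_proxy['benefit']:.2f}")
    print(f"best available benefit:                   {by_benefit['benefit']:.2f}")

The agent is not misbehaving; it is doing exactly what it was told. That is the point: the gap lives in the objective we wrote down, not in the optimization.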

We have several toolkits and frameworks to drive accountability in the short term, but driving accountability may be the wrong problem to solve in the long term.