Starting with the Problem
Working with Artificial Intelligence (AI) and Data Science (DS) over the past few years, I have always kept the ethical use of these technologies on my radar (if you are new to the widespread nature of AI misuse, check out Cathy O’Neil’s insightful and easy read, Weapons of Math Destruction). I firmly believe in the power of AI to impact human life positively; however, I am uncomfortable with how AI is being put to work.
As a person with a strong product mindset, I always begin with the question “What is the problem to be solved?”, followed by “Why is that the most important problem to solve?”, and finally, “Are we going to make any money out of this, or does that not matter?” When applying AI to real-world problems, though, I see a different product approach being adopted:
- AI is the shiny new technology – let’s figure out how we can bring it into the product
- AI can automate workflows and processes – let’s map out the current workflow and then integrate AI into the process
Neither of these approaches sits well with me. In the first case, instead of using technology to solve a high-value problem, we are applying technology for the sake of applying technology. In the second case, we miss out on the disruptive impact AI could have if we re-imagined the solution, and instead settle for incrementally improving an existing one.
Moreover, the bigger gap, one that is increasingly receiving international attention, stems from the unintended consequences of applying AI to decision-making. Because we don’t approach the problem with an outcome mindset, the results being delivered are opaque, unexplainable, and riddled with discrimination.
To understand this area of ethically and responsibly applying AI, I began my journey with exploratory research into the organizations, initiatives, and people passionately contributing to this mission. It is a rapidly changing, complex domain and, as with any new field, one fraught with both fact and fiction. Join me on my journey as you read my posts!