Review: AIQ – How People and Machines Are Smarter Together

AIQ – How People and Machines Are Smarter Together, written by academics Nick Polson and James Scott, is an introduction to what AI is and how it works, aimed at the intellectually curious reader. The book makes several common AI concepts sticky and real by:

  • Explaining the math behind each concept with easy-to-understand examples
  • Tying the concepts to interesting historical characters and AI anecdotes
  • Linking back to current advances and examples in AI

Some of the concepts and examples that “stuck” with me that I hope will interest you in picking up a copy of the book:

  • Personalization is, at its core, conditional probability. Conditional probability underpinned survivability analysis for Allied bombers in WW2, just as it underpins Netflix's movie recommendations today.
  • Making predictions from patterns is the foundation of deep neural networks. An astronomer used this approach to measure stellar distances in the early 1900s, and today Google's Inception model, a 22-layer deep neural network, is among the leading image recognition models.
  • The Bayesian approach (remember probability theory?) is pervasive – it was used to locate lost submarines in the 1960s, it drives self-driving vehicles today through SLAM (Simultaneous Localization and Mapping), and it helps predict cancer in patients.
  • The evolution of communicating with machines, from Grace Hopper to Alexa and from programming to NLP, made possible by word vectors (a memorable tutorial on word vectors is included).
  • Anomaly detection in the context of the variability of an average: Isaac Newton's mistake at Britain's Royal Mint, where he failed to recognize the square root rule, and evidence that the Patriots may not have cheated despite their suspicious coin-toss streak.
  • Preventable harms in healthcare, a cause championed by the passionate statistician Florence Nightingale, that still prevail in modern healthcare due to barriers in incentives, data sharing and privacy.
  • Examples of poor assumptions that led to disastrous modeling results – including the epic failure of Google Flu Trends, which suffered from model rust, and the COMPAS algorithm, which suffers from "bias in, bias out".
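
The "personalization is conditional probability" idea from the first bullet can be sketched in a few lines. This is a minimal illustration, not the book's method: the users, movies, and ratings below are invented, and the estimate is a simple co-occurrence frequency.

```python
# Toy illustration: recommendation as conditional probability.
# Users and movie sets are made up for illustration only.
ratings = {
    "ann":  {"Alien", "Arrival", "Heat"},
    "bob":  {"Alien", "Arrival"},
    "cara": {"Heat", "Arrival"},
    "dev":  {"Alien", "Heat"},
    "eve":  {"Alien", "Arrival", "Blade Runner"},
}

def p_likes_given_likes(target, given, data):
    """Estimate P(likes `target` | likes `given`) from co-occurrence counts."""
    liked_given = [u for u, movies in data.items() if given in movies]
    if not liked_given:
        return 0.0
    liked_both = [u for u in liked_given if target in data[u]]
    return len(liked_both) / len(liked_given)

# 3 of the 4 users who liked "Alien" also liked "Arrival":
print(p_likes_given_likes("Arrival", "Alien", ratings))  # → 0.75
```

A real recommender works on vastly more data and smooths these estimates, but the underlying question is the same conditional one.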
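
The Bayesian submarine search mentioned above follows the same update rule wherever it appears. Here is a minimal sketch, with an invented three-cell grid, prior, and detection probability; the real searches used far finer grids.

```python
# Simple Bayesian search sketch: revise beliefs about which grid cell
# holds a lost object after searching one cell and finding nothing.
prior = {"A": 0.5, "B": 0.3, "C": 0.2}  # invented prior over cells
p_detect = 0.8  # chance of finding the object if we search the right cell

searched = "A"  # we search cell A and come up empty
# Likelihood of "no find" is (1 - p_detect) in the searched cell, 1 elsewhere.
likelihood = {c: (1 - p_detect) if c == searched else 1.0 for c in prior}

unnorm = {c: prior[c] * likelihood[c] for c in prior}
total = sum(unnorm.values())
posterior = {c: round(unnorm[c] / total, 3) for c in prior}
print(posterior)  # → {'A': 0.167, 'B': 0.5, 'C': 0.333}
```

A failed search in A shifts belief toward B and C – exactly the logic that, repeated over thousands of cells and sensor readings, powers SLAM in self-driving cars.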
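
The word-vector idea behind the Grace-Hopper-to-Alexa bullet can also be demonstrated in miniature. The three-dimensional vectors below are hand-made for illustration; real systems learn vectors with hundreds of dimensions from large text corpora.

```python
import math

# Tiny hand-made "word vectors" (illustrative only; real vectors are learned).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: how closely two vectors point in the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# The famous analogy: king - man + woman ≈ queen
analogy = [k - m + w for k, m, w in
           zip(vectors["king"], vectors["man"], vectors["woman"])]
print(cosine(analogy, vectors["queen"]))  # ≈ 0.99, much closer than to "man"
```

Arithmetic on word vectors capturing meaning is what lets machines move from rigid programming syntax toward natural language.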
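
The square root rule that tripped up Newton at the Royal Mint – the variability of an average shrinks like σ/√n as the sample grows – can be checked numerically. The per-item standard deviation and sample sizes below are arbitrary choices for the simulation.

```python
import math
import random

random.seed(0)
sigma = 1.0  # per-item variation, arbitrary units (illustrative)

def sd_of_average(n, trials=20000):
    """Empirical standard deviation of the average of n random draws."""
    means = [sum(random.gauss(0, sigma) for _ in range(n)) / n
             for _ in range(trials)]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

# Empirical SD of the average tracks the theoretical sigma / sqrt(n):
for n in (1, 4, 16, 64):
    print(n, round(sd_of_average(n), 3), round(sigma / math.sqrt(n), 3))
```

Judging a batch of coins (or a coin-toss streak) against the variability of a single item, rather than σ/√n, makes perfectly ordinary averages look like anomalies – Newton's mistake.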