Artificial Intelligence (AI) algorithms used for crime detection, loan approvals, and employee evaluations are considered by many to be objective, but they can exhibit many of the same prejudices and biases as human evaluators. Given the opacity of many black-box approaches to AI, this can lead to serious problems with fairness and equity. This article discusses an admittedly imperfect approach by Microsoft to evaluating these AI algorithms using (surprise!) an AI algorithm. It flags situations where an algorithm appears to produce unfair differential treatment based on race, gender, or age.

Will Knight. Microsoft is creating an oracle for catching biased AI algorithms. MIT Technology Review, May 25, 2018.

This Recommendation was added to the website on 2018-06-28 and was last modified on 2020-02-29. Similar pages can be found under Big data.
