What artificial intelligence bias can teach us about holistic product design

Dylan Doyle-Burke is CEO of Radical AI LLC in Denver, CO.

twitter.com/dylandoyleburke

Since its most recent boom in 2012, artificial intelligence (AI) has captured the imagination of industry and society at large. Technical development in the field has moved quickly, with companies racing to corner the market on exciting new capabilities. Unfortunately, a side effect of this speed is that technical progress soon outpaced the ethical conversations needed to give AI product design an intentional moral structure.

Particularly on the issues of privacy, fairness, bias, accountability, and transparency, this rapid development has led the field into several high-profile ethical crises. The most notable was a ProPublica feature on machine learning software used in court systems across the country to assign recidivism risk scores to incarcerated people, scores that judges would then consult in sentencing. Presenting rigorous evidence, ProPublica argued that the software's predictions of future criminality were largely inaccurate and systematically biased against black offenders.[1] The feature spread through the media zeitgeist, and the field of AI was forced to begin reckoning with its bias problem.
