An artificial intelligence code of ethics

Marianne M. Jennings (marianne.jennings@asu.edu) is Professor Emeritus, W.P. Carey School of Business, Arizona State University in Tempe, AZ.

Due to leg problems, compression stockings became my way of life. The sheer expense of these opaque stockings necessitated internet searches for the best prices, sites, selection, and colors. After ordering that first pair of 30–40 mmHg hose, ads for cremation, walk-in tubs, scooters, testosterone supplements, and canes rolled in, popped up, and obnoxiously entered into what was once a peaceful life. Someone somewhere assumed that the one singular purchase of support stockings meant that end-of-life and dotage products were just the ticket.

The companies and tech folks sold their data-mining results on a woman who purchased Sigvaris compression stockings. Their analytics told them that they had hit pay dirt on a buyer seeking the comforts and treatments of old age and beyond. The precision, the targeting, and the accessed contact information brought pride and, in other cases, perhaps, sales of tubs and Poligrip, as well as prepayments for cremation. They were wrong in this case. Their analytics were not 100% on the money. In short, they got the wrong person. As the Monty Python folks would say, "I'm not quite dead yet, sir."

The use of private purchasing data for any purpose without consent is ethically problematic. Incorrect assumptions about purchasers raise additional ethical issues, from profiling to privacy concerns. The Facebook folks either did not see these ethical issues or were perfectly comfortable with their solution: gather it and sell it. The uses of artificial intelligence (AI) are varied but consistently involve ethical questions. The ethical risks of facial recognition also involve forms of profiling. The complexity of ethical issues with driverless cars is boundless, taking us back to one of the age-old philosophical dilemmas: "Do I swerve and hit one person to avoid hitting five?" Or, even more frequently, "Do I swerve to avoid a deer, or do I hit the deer and put myself at risk of injury or death?" Who makes those decisions in developing the technology for driverless cars?

Drafting an AI code of ethics is no small task. Perhaps the easier task is to evaluate what has already been done and offer insights into what is missing and how to improve upon the efforts to date.