Sascha Matuszak (sascha.matuszak@corporatecompliance.org) is a reporter at SCCE & HCCA in Minneapolis, MN.
Artificial intelligence (AI) is fast becoming an integral component of organizational operations. The ability to analyze and act upon big data is often the difference between success and failure, not only in a competitive business environment but also in medical research and technological innovation.
There are, however, several ethical problems surrounding the use of AI. Basic ethical concerns were raised more than 70 years ago by Isaac Asimov in his Three Laws of Robotics (http://bit.ly/2m7wKfx), and several computer scientists and ethicists have since tackled the issue. Studies by Capgemini (“Why addressing ethical questions in AI will benefit organizations,” http://bit.ly/2kJcjVI) and work done by the High-Level Expert Group on Artificial Intelligence (http://bit.ly/2lSySaE), including the Ethics Guidelines for Trustworthy Artificial Intelligence (http://bit.ly/2lJrzlL), are considered gold standards for gaining an ethical perspective on the deployment and use of AI.
Another report (http://bit.ly/2kr6yfw), put together by Deloitte Insights, delves into the topic of ethical AI, suggesting a short list of ethical concerns and some solutions to those concerns for companies to consider. The short list of ethical concerns included:
- Bias and discrimination in recruiting, credit scoring, and judicial sentencing;
- A lack of transparency on how machines make decisions and how citizens’ private data is used;
- An erosion of privacy that comes from using private data without disclosure and consent;
- A lack of accountability for who is responsible for mistaken or dangerous AI decisions.
The report also listed some solutions and methods to deal with the ethical concerns that AI presents. It built upon previous discussions of ethics and AI, including hearings before the United States House Committee on Oversight and Reform, and helped inform a guidebook for creating AI ethics committees. The four steps the Deloitte report recommends are:
- Create a dedicated AI governance and advisory committee to engage with stakeholders on identifying core values, and to oversee ethical AI design, development, deployment, and use. Integrating ethics into AI requires learning the values held by customers, employees, regulators, and the general public.
- Train developers to test and fix systems that encode bias and treat certain populations unfairly. Use antibias analytics tools that detect how data variables may be proxies for sensitive variables such as age, sex, or race.
- Build public trust through transparency about the company’s use of AI. Companies should disclose the use of AI systems that affect customers; explain what data they collect and how they use it; and describe how customers could be affected by that usage.
- Start advising employees on how AI may affect their jobs in the future. This could include retraining workers or giving them time to find new jobs.
Guide to creating an AI ethics committee
Consulting firm Accenture and the Northeastern University’s Ethics Institute collaborated on a guidebook, Building Data and AI Ethics Committees (http://bit.ly/2m8wHA9), that Thomas Creely, director of the ethics and emerging military technology graduate program at the U.S. Naval War College, called an excellent tool “to generate questions and initiate action.”
The guide begins by stating that the minimum responsibility for organizations dealing with big data and AI is to comply with legal regulations. But legal regulations are not enough for maintaining trust, ensuring organizational values, and meeting stakeholder expectations. Not only that, but legal guidance often “lags well behind technological innovation and organizational practices. As a result, the ethical issues and questions confronted by organizations and the people working in them often arise prior to the development of an adequate legal regime.”
It is important for companies to stay on top of AI developments, because adoption of the technology is proceeding at a breakneck pace and shows no signs of slowing down.
“The danger is people equate compliance with ethics, and that’s dangerous thinking,” said Frank Bucaro, CSP, CPAE (http://bit.ly/2m9Kbvm). “Ethics is proactive, compliance is reactive—companies have to choose ethics as a basis for doing business. Regarding AI, companies must have an ethical foundation by which to discern, in addition to compliance, the purpose, development, and benefit of AI.”
AI and ethics in action: Facial recognition
Facial recognition technology is in its infancy and has demonstrated limited success so far. In several trials, the technology was unable to distinguish accurately between individuals—especially women and people of color—and several US municipalities have banned the use of facial recognition by law enforcement.
Nevertheless, polls (http://bit.ly/2kAOmQv) show that many Americans are willing to sacrifice privacy and ethical concerns in order to give law enforcement the ability to track individuals using facial recognition technology. But those results depend largely on demographics and how the questions regarding AI and its uses are posed. A separate poll (http://bit.ly/2kAOqjd) asked more pointed questions, referring to the actual uses of facial recognition technology as opposed to merely using the term “facial recognition,” and the results were overwhelmingly in favor of not using the technology. China, no stranger to oversight of its people, recently stepped back from facial recognition technology in schools (https://bbc.in/2lLglNs)—a sign that even traditionally oppressive regimes are thinking carefully about AI.
The polls show that most people do not have a solid grasp of what AI is capable of and what the consequences might be if the technology is rolled out without a serious consideration of the risks and ethical concerns.
Hearings before the House Committee on Oversight and Reform on facial recognition technology (http://bit.ly/2kAHJO0) came to much the same conclusion (http://bit.ly/2kDcv8Z).
AI is spreading fast, and there is currently no legal and ethical framework to balance out the risks the technology poses. Organizations that hope to be trusted users of the technology can draw on the Deloitte- and Accenture-sponsored studies to determine how to ensure that their use of AI remains ethical and aboveboard.