Implementing ethical AI oversight while regulations lag behind

Imagine receiving a call from your child’s school saying your child has been caught cheating on an assignment. As you sit in the school office, the English teacher explains that your child cheated by using ChatGPT—a popular online artificial intelligence (AI) language generator—to write a paper. When you ask how she knows, the teacher hands you a paper filled with phony quotations from books that don’t exist. “Of course,” you say to yourself as you read it, “the 19th-century British novelist Jane Austen did not fight at the Battle of Pearl Harbor.” But the AI said she did, so into the paper it went.

This is just one example of what those who study AI call a “hallucination”—a confident but inaccurate response to text, visual, or other data inputs. Hallucinations have gained recognition recently in so-called “generative” AI systems like ChatGPT because of the popularity and accessibility of these models online and the odd, occasionally hilarious whoppers of misinformation they produce. But hallucinations are by no means limited to text chatbots, or even to generative AI. Other types of AI are just as susceptible—and just as confident in their inaccurate responses.

Now imagine that, instead of a text-generating AI like ChatGPT making up false facts about English novelists, you are the unfortunate victim of another kind of AI hallucination. What if the AI in a self-driving car confidently “recognizes” a stop sign as empty night sky and speeds through an intersection you happen to be driving through? Or what if an AI designed to assist a radiologist in identifying anomalies on a scan confidently classifies what is actually a malignant tumor as benign?

While there is no universally agreed-upon definition of AI, it can be broadly defined as technologies that “enable machines to carry out highly complex tasks effectively—tasks that would require intelligence if a person were to perform them.”[1] More simply put, AI enables a machine to do something that usually requires a human brain: writing a paper, driving a car, or diagnosing a disease. Of course, as we all know, confident humans can also make mistakes at these tasks, and sometimes those mistakes are dangerous. But thankfully, regulations and industry standards exist to reduce the risk of such mistakes—vision testing and licensing for human drivers, for example, or education and licensing for physicians. Compliance with these regulations and standards mitigates risk, prevents costly damage, and saves lives.

But what if there were no regulations? What if, instead of regulations, there were only voluntary guidelines suggesting who may drive a car or diagnose cancer—and, at the end of the day, it was up to you whether to follow them? Many of us would not feel very safe leaving the house in a world like that; yet that is exactly the landscape for much of AI development today.
