Setting up a compliance program and realistically assessing risks is hard enough when a regulatory environment is known and laws establish boundaries. But today, in the context of new digital technologies—particularly those involving artificial intelligence (AI)—the job of a compliance professional has gotten much harder.
Every day, the news is full of stories about AI’s capabilities and shortcomings, calls for regulation, commitments by private companies to do the right thing, and so on. But what does that all mean in terms of concrete steps a compliance professional can take to protect corporate interests?
If you look up “compliance” and “best practices” in the context of AI, you will turn up a series of articles discussing the need for each of those things without necessarily describing a clear path to getting there. Words such as “accountability,” “transparency,” “trustworthy,” “fair,” “responsible,” and “ethical” are sprinkled liberally throughout. What those words mean, however, is less clear. And how they can be made actionable is frequently not addressed at all.
I am a former federal judge who presided over trials where a lack of compliance landed companies and people in hot water. Over the last several years—starting before ChatGPT and generative AI (often referred to as “GAI”) hit public consciousness—I have been advising companies on how to construct compliance programs in the face of many unknowns. Today, GAI’s transformative potential has placed AI front and center for every regulatory body any business deals with. In this article, I want to provide some practical advice on how to create basic compliance protections.
Know your tools
First, a key mantra—to be repeated early and often—is “know your tools.” No compliance program can be fully effective if no one within the company really has their arms around the tools that contain AI capabilities. Every compliance group should maintain some form of database or Excel spreadsheet—whatever works for you—that lists all AI tools, names a point person to contact about each tool, explains in language comprehensible to mere mortals what the tool is supposed to do, and indicates whether the tool needs an “impact assessment” and, if so, whether one has been performed. There are several more fields such an inventory can include, but this is the minimum. It allows a compliance group to answer the basic question, “So what AI tools do we have?”
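To make that concrete, here is a minimal sketch of what such an inventory might look like if kept as a simple spreadsheet-style file. The field names and the sample entry are illustrative assumptions only, not a prescribed format; each compliance group will tailor its own.

```python
import csv

# Minimal AI tool inventory: one row per tool, with the fields discussed above.
# Field names and the sample entry are illustrative assumptions, not a standard.
FIELDS = [
    "tool_name",                   # what the tool is called internally
    "point_person",                # who to contact with questions about the tool
    "plain_language_description",  # what the tool is supposed to do, in plain English
    "impact_assessment_needed",    # yes / no / unknown
    "impact_assessment_done",      # yes / no / not applicable
]

rows = [
    {
        "tool_name": "Resume screening assistant",
        "point_person": "J. Smith (HR Operations)",
        "plain_language_description": "Ranks incoming resumes against open job postings.",
        "impact_assessment_needed": "yes",  # used in hiring, so bias risk must be checked
        "impact_assessment_done": "no",
    },
]

# Write the inventory so it can be opened in Excel or shared with the compliance group.
with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

Whatever the format, the point is that every row lets the compliance group answer who owns each tool and whether its risks have been examined.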
The next question a compliance group wants to answer is, “Do we have any AI tools or use cases we should be worried about or that deserve special monitoring?” Whether a tool or its use presents risks to the business is a case-by-case question. The only way to know the answer to this question is to have a list of the tools (see previous paragraph) and then ask the point person for that tool to provide an explanation of what it is being used for—that is, its use case. The array of use cases can then be evaluated for a variety of risks. Is there a risk of algorithmic bias if the tool is being used in any areas of human resources, credit and lending, marketing and advertising, healthcare, insurance, etc.? If bias is a possibility, then the tool needs to be evaluated. Its tires need to be kicked hard. This “kicking of the tires” is what in the AI area is called an “impact assessment.”
If a compliance group finds that an impact assessment should be performed, then there are questions as to (1) how often it should be done, (2) by whom, (3) according to what metrics, and (4) to whom and how the results should be reported.
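One way to keep the use-case screening and those four follow-up questions from falling through the cracks is to record them alongside each tool in the inventory. The sketch below is illustrative only; the list of bias-sensitive areas, the field names, and the example answers are assumptions a compliance group would adapt to its own business.

```python
# Illustrative only: flag use cases in bias-sensitive areas and record how the
# resulting impact assessment will be run. Area names and fields are assumptions.
SENSITIVE_AREAS = {
    "human resources",
    "credit and lending",
    "marketing and advertising",
    "healthcare",
    "insurance",
}

def needs_impact_assessment(use_case_area: str) -> bool:
    """Return True if the tool's use case falls in an area with algorithmic bias risk."""
    return use_case_area.lower() in SENSITIVE_AREAS

# For each flagged tool, capture the four follow-up questions from the text.
assessment_plan = {
    "tool_name": "Resume screening assistant",
    "how_often": "annually, and after any model update",             # (1) how often
    "performed_by": "independent model-risk team",                   # (2) by whom
    "metrics": ["selection-rate parity across applicant groups"],    # (3) what metrics
    "report_to": "chief compliance officer and audit committee",     # (4) to whom and how
}

if needs_impact_assessment("human resources"):
    print(f"{assessment_plan['tool_name']}: impact assessment required")
```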
Assess the risk of inaccuracy
Algorithmic bias, however, is only one risk compliance needs to look for. Another is accuracy: What is the risk that inaccuracy in a tool creates harm? Harm not because of bias (as previously discussed) but because acting on inaccurate results can have bad-to-disastrous consequences for a business. Imagine an AI tool used to assess compliance with certain reporting obligations but with an accuracy rate of only 80%. It would be useful to know how that compares with the error rate of humans performing the same task, but a tool that is wrong one time in five likely needs improvement before it gets deployed.
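As a back-of-the-envelope illustration (the numbers here are hypothetical), putting the tool's error rate next to a human baseline makes the deployment question concrete:

```python
# Hypothetical numbers: a tool that is right 80% of the time versus an assumed
# human reviewer baseline for the same task. Whether 20% error is acceptable
# depends on the harm of acting on a wrong result, not just the comparison.
tool_accuracy = 0.80
human_accuracy = 0.93   # assumed baseline; must be measured for the actual task

tool_error = 1 - tool_accuracy      # roughly 0.20
human_error = 1 - human_accuracy    # roughly 0.07

if tool_error > human_error:
    print("Tool error rate exceeds the human baseline; improve before deployment.")
```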
What makes GAI different?
GAI carries its own set of important compliance issues. But first, let me explain what it is and how it differs from other kinds of AI. There are many kinds of software programs that use AI and can be used for any number of narrow tasks. Think of narrow AI as a one-trick pony. GAI, on the other hand, is a whole stable of horses that can collectively do just about anything. GAI hit public consciousness when ChatGPT started to be used for schoolwork by fourth graders and when reporters broke stories about conversing with chatbots that responded in eerily human (sometimes creepy) ways. GAI can be used in a tool like ChatGPT, or it can be used as a base upon which specific tools for virtually any industry can be made (that is why a GAI model is sometimes referred to as a “foundation” model).
What should compliance do?
GAI carries particular risks for compliance groups. First, the apps that can be downloaded off the web onto personal digital devices and used to help people write emails, essays, or letters and create presentations, photographs, or music are not monitored by a company’s IT department in the same way that other software can be. If an individual user establishes their own account, that presents oversight issues. For businesses with record retention obligations, ensuring that such usage complies with those obligations can be challenging. In addition, there can be accuracy and confidentiality issues depending on the tool. It is useful to make sure that a GAI policy is disseminated to employees and that they are reminded that company computer policies have not been thrown out the window just because GAI is new and exciting.
Compliance groups will also want to understand the extent to which reporting obligations may be affected by AI tools in use by a company or its competitors. There may be new risks to disclose in filings with, for instance, the U.S. Securities and Exchange Commission.
Enhancements to governance policies will also be an area that compliance will want to look into. What does the chief legal officer need to know about the AI tools in use? What kinds of reports should be made to the board? To the audit and compliance committee of the board? And is the board sufficiently knowledgeable about AI to be able to ask questions—including the right questions?
The list of questions a board might want to ask runs the gamut; here are just a few: Who is responsible for AI at our company? Do we know what AI is in use at the company? Have we assessed the risks of X tool or Y tool? Do we have a corporate policy on using GAI for company business? What investments are we making in AI? How are we assessing whether our use of AI is ethical?
Conclusion
No one knows what the full regulatory scheme governing the creation and use of AI will look like for any industry. But there are things we know today that can already guide us. First, existing laws still apply. Second, watch the regulatory horizon and pay attention when new laws are proposed and adopted. Third, help yourself by repeating the following mantra: know your tools, know your tools, know your tools.
Takeaways
- Although the global artificial intelligence (AI) regulatory scheme is still taking shape, it is critical for companies to invest in building effective AI compliance programs today.
- What are some practical steps compliance groups can take? A key first step is adopting the mantra: “know your tools.”
- To effectively assess AI risks, companies must know how their AI tools are being used and which existing legal obligations or company policies might apply.
- Companies should also enhance their AI governance by enabling general counsel and the board to ask strong questions about how the company is managing AI risks.
- By taking these steps, compliance groups can mitigate AI risks while giving the business a path to further invest in these important technologies.