Right now, in your organization—regardless of size, industry, or location—some of your coworkers are using artificial intelligence (AI) systems like ChatGPT to produce their work outputs. They are using it for:
- Writing first drafts of outputs like marketing and sales emails
- Creating or editing images for presentations
- Updating Excel sheets and databases with the most recent monthly sales data
Of course, new software tools arrive constantly. What sets so-called generative AI apart is its ease of use and access; your employees can sign up and start using ChatGPT for free, without permission from their manager or anyone in compliance. And these tools are so powerful, and advancing so quickly, that more and more work will be done in generative AI systems.
Unfortunately, there’s no free lunch. As we will see, what makes these systems uniquely powerful also raises some red flags that compliance professionals must consider.
The challenges of generative AI in the workplace
You have probably heard that AI systems have some rough edges; an industry of experts and government officials has sprung up overnight to remind us! But what exactly are some of the problems with AI systems?
Bias
“I’ll just ask an algorithm to compare these resumes against our past hires.”
From speaking to many audiences in the last year, I can say that if one concern about AI is well-known, it’s the possibility of bias. This can take several forms, but the most commonly known (and probably most dangerous) is in the hiring process. AI systems are trained on data—to the extent that past hiring patterns reflect intentional or unintentional bias, AI outputs may do the same. Human resources (HR) teams must remember they are still responsible for a fair hiring process, even if an algorithm is helping them make the decisions.
Note: From working with a dozen AI vendors in the HR space, I can say that each has been dedicated to using AI to reduce bias in hiring.
Intellectual property protection
“I’ll ask AI to sort through all our marketing assets to create the new presentation.”
If you have followed AI news at all, you may have heard that major media outlets like The New York Times are suing AI providers.[1] The issue is that copyrighted information, such as old news articles, was used to train the AI. To simplify a bit, that means AI outputs may be based on or include copyrighted or protected information.
When your employee voluntarily uploads protected intellectual property or trade secrets, it’s critical to know whether that system can or does retain that information and use it for future training.
Data privacy
“I’m behind on uploading these medical notes. I’ll ask AI to summarize them for me.”
Especially in highly regulated industries like health, finance, and insurance, uploading private or personally identifiable information to an outside system that may or may not have the same privacy controls as your internal systems poses a major risk.
Output accuracy
“I had AI research these new laws for me. I’ll put them right in the briefing.”
Just behind bias, the second-best-known problem with AI is that its outputs are sometimes simply incorrect. Commonly called hallucinations, these incorrect outputs occur for a wide variety of reasons. AI systems will likely get more accurate with time, but given the high stakes in industries like law, relying on any AI output without checking it can be risky. A New York lawyer was recently sanctioned after submitting a brief containing made-up cases generated by AI.[2]