John Rood (john@proceptual.com, linkedin.com/in/johnrood1/) is the founder of Proceptual in Chicago, Illinois, USA.
It has been amazing to see how artificial intelligence (AI) has, in roughly a year, become such an engaging and important issue in our society. What modern AI can produce often feels like magic. That said, as compliance professionals, we will be on the front lines of ensuring the AI revolution is managed safely and fairly. One of the key areas where AI will be regulated is human resources (HR) and hiring. As a starting point in the discussion, here are what I consider the three core pillars of safe AI development and deployment; you’ll likely hear a lot about these issues from thought leaders and government regulators.
First pillar: Transparency
Simply put, transparency refers to helping users of an AI system understand that it is in operation, what it is doing, and what data it collects. This makes sense; as a starting point for fair use of AI, those whose job applications will be judged by the system must know what system is being used, how, and why.
We have yet to review meaningful current or draft AI regulations that do not have specific transparency or disclosure requirements. Specific to the HR context, regulations will likely require a hiring company to disclose to applicants that an AI system is in use. We generally also see requirements that job applicants be able to opt out of the AI system, though in practice, we have not seen applicants opt out with any frequency.
Second pillar: Bias mitigation
If you ask an HR or compliance professional the first word that comes to mind when they think about AI, the answer is frequently “bias.” Bias in hiring is always a significant issue, but what does it mean in the AI context?
In the AI hiring context, bias refers to an algorithm using protected classifications to make hiring decisions. The prototypical example of AI bias in hiring happened at Amazon nearly 10 years ago.[1] The company developed an algorithm to evaluate applicants for software engineering roles. At that time, those roles were overwhelmingly filled by men, so the algorithm started favoring men. Of course, the humans in charge told the algorithm to stop considering gender, so the algorithm started choosing proxy factors like “played football in high school” or “did not attend an all-women’s college”!
As the example shows, in no case with which I’m familiar was AI bias explicitly and purposefully programmed into a system. Instead, when AI systems are trained, they are generally trained on existing hiring data. When that data exhibits bias, as we have learned it historically often does, the algorithm itself can replicate that bias.
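To see how this can happen mechanically, here is a minimal sketch in Python. The data and feature names (such as played_football) are entirely synthetic and hypothetical; the point is only that when historical hiring outcomes favored one group, a model trained without the protected attribute can still lean on an innocuous-looking proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" data: a hypothetical proxy feature correlates with
# gender, and past hiring outcomes favored men.
is_male = rng.random(n) < 0.5
played_football = np.where(is_male, rng.random(n) < 0.6, rng.random(n) < 0.05)
years_experience = rng.normal(5, 2, n).clip(min=0)
hired = (years_experience + 3 * is_male + rng.normal(0, 1, n)) > 6  # biased history

# Train WITHOUT the protected attribute; only the proxy remains.
X = np.column_stack([played_football, years_experience])
model = LogisticRegression().fit(X, hired)

# The proxy feature ends up carrying the historical gender bias.
print(dict(zip(["played_football", "years_experience"], model.coef_[0])))
```

In this toy setup, the coefficient on the football proxy comes out strongly positive even though gender was never given to the model; this is exactly the pattern the Amazon team observed.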
Third pillar: Explainability
Explainability refers to human users’ ability to understand how and why an algorithm is making a decision. If a hiring manager asks an algorithm to choose between two applicants, it’s essential that the algorithm’s output explain the specific reasons why one candidate was chosen over the other.
This goes back to the issue of bias. We are looking for algorithms that can show us that hiring decisions are based on acceptable differences between candidates: skills, years of experience, and so on. Without this information, human users cannot know whether the system is creatively routing around hiring regulations, as in the Amazon example above.
I frequently speak to audiences of HR leaders, and one of my first recommendations is simple: require each of your vendors to provide a nontechnical report alongside any algorithm recommendations. Any vendor who can’t provide simple reporting is likely not a good fit.
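To make this concrete, here is a minimal sketch of the kind of nontechnical, per-candidate report I have in mind. It assumes a simple linear scoring model, and all feature names, weights, and values are hypothetical.

```python
def explain_recommendation(weights, candidate, feature_labels):
    """List each feature's contribution to the candidate's score, largest first."""
    contributions = {
        name: weights[name] * value for name, value in candidate.items()
    }
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Overall score: {sum(contributions.values()):.2f}"]
    for name, contribution in ranked:
        direction = "raised" if contribution > 0 else "lowered"
        lines.append(f"- {feature_labels[name]} {direction} the score by {abs(contribution):.2f}")
    return "\n".join(lines)

# Hypothetical model weights and candidate data, for illustration only.
weights = {"years_experience": 0.8, "certifications": 0.5, "skills_test": 1.2}
candidate = {"years_experience": 6, "certifications": 2, "skills_test": 0.7}
labels = {
    "years_experience": "Years of relevant experience",
    "certifications": "Professional certifications",
    "skills_test": "Skills assessment result",
}
print(explain_recommendation(weights, candidate, labels))
```

Notice that every line of the output maps to an acceptable, job-related factor; that is the standard a vendor’s reporting should meet.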
What’s happening with AI regulation now?
We are still quite early in meaningfully adopting rules around the use of AI. Recently, the Biden administration released a substantial executive order on AI. The order got a lot of attention; however, most of it has to do with deploying AI within the federal government. It requires many federal agencies to write and send reports to many other government agencies. That’s not a criticism but rather a sign that the hype around AI regulation is ahead of meaningful action today.
That said, governments at every level (international, national, state, and, as we’ll see, municipal) are considering AI regulation. Compliance professionals should be ready.
Case study: NYC Local Law 144
In June 2023, New York City’s Local Law 144 of 2021 went into effect. Although other governments had regulated AI to some extent, Law 144 was seen as the first of a new generation of AI regulations in HR. This law applies to companies with offices and employees in NYC.[2]
Law 144 requires employers to conduct an independent audit of their use of certain “automated employment decision tools.” This audit must show how the automated tool in question makes hiring recommendations, broken down by gender, race/ethnicity, and the intersection of those two factors. The employer must post the resulting report “conspicuously” on their employment or careers websites.
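At the core of such an audit is a selection-rate and impact-ratio calculation by category, comparing each group against the most-selected group. Here is a minimal sketch of that calculation; the category labels and counts are made up purely for illustration, and the law itself specifies the exact methodology.

```python
# category -> (number of applicants assessed, number selected by the tool)
applicants = {
    "Male / White": (400, 120),
    "Female / White": (380, 95),
    "Male / Black": (150, 40),
    "Female / Black": (140, 28),
}

# Selection rate per category, then impact ratio vs. the highest rate.
rates = {cat: sel / total for cat, (total, sel) in applicants.items()}
highest = max(rates.values())

for category, rate in rates.items():
    print(f"{category}: selection rate {rate:.1%}, impact ratio {rate / highest:.2f}")
```

An impact ratio well below 1.0 for a category is the kind of disparity the published report is meant to surface.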
Additionally, employers must post certain disclosures to job applicants regarding the use of automated tools and allow applicants to opt out and request an alternative hiring process. (You can see how the three-pillars model gets put into action!)
As compliance has begun, we have observed that most employers are relying on the vendors of these tools to commission the required audits; vendors may indeed commission audits across several of their users if certain data requirements are met. As enforcement operations proceed, we expect to see AI tool vendors making independent audits of their systems a consistent requirement of operation.
What’s coming next?
NYC Local Law 144 has set a precedent that’s been followed by draft legislation in several jurisdictions, including California, New Jersey, Connecticut, and New York state. Each of these jurisdictions is in the early phases of the legislative process, and the final requirements of their laws will likely change along the way. That said, a few themes are emerging:
- Requirement for independent, third-party audit of AI systems: Similar to audit regimes in the financial world, regulators see the need for public-facing reporting on AI systems, particularly with regard to hiring bias. As with Local Law 144, it’s common for draft legislation to require audits to be completed by a third party (and explicitly not by the vendor).
- Requirement for reporting on bias mitigation: Many laws will require vendors and/or employers to regularly produce reporting on what action they have taken to reduce bias. In some cases, this will be required to be available to the public and/or filed annually with regulators.
- Requirements around employee tracking: Unlike the New York law, many draft regulations (including the federal “No Robot Bosses Act”) address the use of AI in employee tracking.[3] Generally, acceptable tracking uses must be “strictly necessary” and collect as little employee data as possible to accomplish their aims.
Don’t forget about the EEOC
As exciting as it is to follow new regulations of world-changing technology, it’s important to remember that bias and discrimination in hiring are already illegal and have been for many decades! The U.S. Equal Employment Opportunity Commission (EEOC) has made it very clear that AI tools must adhere to the same regulations as the hiring tools that have long been on the books (starting with Title VII). EEOC spokespeople often use the phrase “you can’t blame the algorithm,” making it clear that employers are on the hook for their hiring decisions, whether made with a paper-and-pencil skills test or a cutting-edge AI assessment.
As we move into 2024, we can expect to see several state laws take effect. For compliance professionals, understanding the ethical basis of those laws is a good start toward helping our organizations stay compliant.
Takeaways
- Transparency, bias mitigation, and explainability are the three core pillars of the ethical implementation of artificial intelligence (AI) in human resources and people operations.
- As 2024 begins, several state laws are expected to take effect. Understanding the ethical foundation of those laws is a good starting point for compliance professionals helping their organizations remain compliant. AI is being regulated globally, particularly in HR; compliance professionals must be ready.