The rapid rise of artificial intelligence (AI) has elevated board compliance and ethics reporting to unprecedented importance. Generative AI has become a central focus of chief compliance officers' (CCOs') board reporting agendas—particularly since the introduction of ChatGPT in November 2022. Board members now recognize that they must assess the ever-evolving AI landscape to fulfill their fiduciary obligations. As companies navigate a complex growth and regulatory landscape, it is imperative to establish a mutual understanding with boards of the organizational processes required to develop risk frameworks that enable optimal use of AI.
The constantly changing risks associated with AI present CCOs with a challenge akin to navigating a new land for compliance and ethics board reporting. Upon closer examination, however, it becomes clear that existing compliance and ethics reporting tools can be repurposed to initiate this journey.
Simultaneously, the growing focus on compliance and ethics risks across industries and business areas has given compliance and ethics teams a unique opportunity to elevate their game. It has highlighted their significant roles in organizations and created momentum toward a critical inflection point: a long-awaited and well-balanced recognition of compliance risk management skills, technological implementation expertise, and organizational knowledge. The mandate for CCOs is now to leverage and build upon existing compliance and ethics expertise to effectively navigate the layers of AI and establish robust board oversight.
This article aims to delve into the practical aspects of AI compliance and ethics reporting, highlighting CCOs’ challenges and offering recommendations to maximize reporting effectiveness to the board. It focuses on the new aspects of AI compliance and ethics reporting and how they can be addressed by optimizing traditional compliance and ethics risk techniques and tools. However, it is essential to acknowledge that CCOs alone cannot resolve all AI controversies or unknown implications. The multidimensional challenge will require multilateral governmental and business collaboration for years.
The article reviews the top three compliance instruments CCOs can use to initiate systematic reporting on AI to the boards:
- Mastering regulatory compliance for AI: The section focuses on the significance of building an agile legislative compliance and ethics baseline for AI and translating it into business controls that aim to optimally leverage the existing compliance and ethics frameworks.
- Maximizing compliance and ethics integration in AI: The subsequent section discusses key compliance and ethics domains where substantial overlap with evolving AI compliance and ethics requirements can be observed and how it is crucial to understand risk shifts based on AI triggers across these areas.
- Maturing AI roadmap for compliance teams: Lastly, it is fundamental for CCOs to actively participate in the AI discussion by developing an AI strategy for their teams. Within compliance reporting, board members need to see that CCOs are proactively addressing the potential of AI for their teams rather than waiting for external signals to initiate their own AI endeavors.
Mastering regulatory compliance for AI
Ensuring sustainable regulatory compliance for AI is the top priority for CCOs to mitigate risks and enable robust board oversight. Its absence will hinder compliance teams from providing fast and consistent guidance to business partners, which may significantly delay bringing swift AI solutions to their organizations and customers and ultimately determine who can succeed in this changed business environment. An ad hoc or fragmented approach will expose companies to major risks and make sound risk governance challenging.
Benefitting from a baseline
To achieve sustainable regulatory compliance for AI risks, organizations must develop an AI legislative compliance and ethics baseline.[1] The baseline summarizes the key requirements of applicable laws, regulations, and standards, providing a unified view of regulatory compliance expectations across operating jurisdictions, companies, and their respective entities. Many organizations can leverage existing regulatory risk attestation processes and expand them to include AI risk controls as part of operationalization.[2] The baseline serves as a foundation for:
- Keeping the board informed about the organization's adherence to applicable laws, regulations, and industry and societal standards.
- Supporting the board to perceive regulatory compliance as a journey with varying maturity stages and creating transparency around this agile approach.
- Identifying compliance gaps, potential regulatory issues, and the principal measures to address them, including escalating material risks and any risk appetite issues to the board.
- Allowing the board to assess the organization's regulatory compliance state, ensure appropriate measures are in place, and address any compliance- or ethics-related resourcing or other oversight concerns.
- Supporting board members in making reasonable risk governance and appetite decisions despite the changing legislative and regulatory landscape.
Conquering the evolving environment
The rapidly changing landscape of AI legislation and regulation is evident in the 37 AI-related bills passed into law in 2022 across the 127 countries surveyed in Stanford University's 2023 AI Index.[3] This global trend is expected to continue, creating a constant wave of new standards and requirements. The selection of AI laws, regulations, and standards to include in an organizational baseline will vary depending on a company's geographical footprint and its willingness and ability to proactively adhere to important developments in AI regulatory frameworks at global, regional, and local levels.
It is encouraging, however, that companies and boards can draw on the knowledge gained through previous legislative and regulatory initiatives when shaping their baseline approaches. A prime example is the global data protection and privacy discussion. The implications of the European Union (EU) General Data Protection Regulation (GDPR), in force since 2018 and extraterritorial in scope, have demonstrated how powerful individual legislation can be on a global scale. The guiding principles of GDPR have been incorporated into local privacy and personal information frameworks worldwide.
Drawing parallels between the AI Act and GDPR
It is evident the EU will significantly influence legislative discussions on AI. In June 2023, the European Parliament passed its version of the EU AI Act after nearly two years of deliberation. This paved the way for the final debate in the EU, with the target of finalizing the act by the end of 2023.[4] Similar to GDPR, the AI Act is expected to have an extraterritorial scope affecting companies outside the EU to varying degrees. This is projected to position the AI Act as a benchmark AI law that other jurisdictions may look toward when developing their own legislative initiatives.[5] Beyond the EU, Brazil and Canada are also in the race to adopt a general law that applies to AI systems, and both are of great interest for baselining efforts.[6]
For baseline development purposes, it is vital for CCOs to start measuring any local or regional developments against the AI Act expectations and understand where it may stipulate a different standard and when it may make sense to adopt these emerging expectations well ahead of time. The board reporting will benefit from a robust overview of ongoing developments highlighting the laws of relevance, their expected in-force schedule, and their level of anticipated impact. The legislative baseline will offer the opportunity for CCOs to transparently report on the ongoing progress per regulatory area against the target maturity grade of AI, supporting an agile AI approach. To effectively address evolving AI legislative and regulatory proposals that apply to the company, it is paramount to keep board members informed and ensure alignment on baseline expectations.
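As a rough illustration of how such a baseline overview might be maintained, the sketch below models baseline entries and produces a per-regulation maturity-gap listing of the kind a board report could summarize. The regulation labels, maturity scale, and impact ranking are hypothetical placeholders invented for this example, not a prescribed taxonomy.

```python
from dataclasses import dataclass

# Hypothetical maturity scale: 0 = not started .. 3 = fully operationalized.
TARGET_MATURITY = 3
IMPACT_RANK = {"high": 0, "medium": 1, "low": 2}

@dataclass
class BaselineEntry:
    regulation: str    # illustrative label, e.g., "EU AI Act"
    jurisdiction: str
    in_force: str      # expected in-force schedule
    impact: str        # anticipated impact: "high", "medium", or "low"
    maturity: int      # current implementation maturity (0-3)

def maturity_overview(entries):
    """List entries with their maturity gap, highest-impact and largest-gap first."""
    ranked = sorted(entries, key=lambda e: (IMPACT_RANK[e.impact], e.maturity))
    return [
        {"regulation": e.regulation, "jurisdiction": e.jurisdiction,
         "in_force": e.in_force, "impact": e.impact,
         "gap": TARGET_MATURITY - e.maturity}
        for e in ranked
    ]

# Illustrative entries only; in-force dates deliberately left as "TBD".
baseline = [
    BaselineEntry("Canada AIDA", "Canada", "TBD", "medium", 0),
    BaselineEntry("EU AI Act", "EU", "TBD", "high", 1),
]

for row in maturity_overview(baseline):
    print(row)
```

The point of the sketch is the reporting shape: each law of relevance carries its expected in-force schedule, anticipated impact, and a transparent gap against the target maturity grade, supporting the agile, per-regulatory-area progress view described above.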
Maximizing compliance and ethics integration in AI
Organizations face the challenge of effectively integrating AI compliance and ethics within existing frameworks to ensure robust board oversight and mitigate associated risks. Navigating the interplay between emerging AI controls and established compliance practices is critical to maximizing the potential of AI while maintaining compliance. AI compliance and ethics risks fall within the broader category of AI risks. CCOs are instrumental in collaborating with enterprise risk management and various stakeholders to establish comprehensive risk governance and coordination for holistic AI oversight. The overarching AI risk owner—typically a business representative—is responsible for overseeing all AI risks across products, tools, and services. CCOs are critical in ensuring that board reporting accurately conveys the placement of compliance and ethics risks within the overall AI risk domain.
Navigating the interplay of AI and compliance frameworks
Effectively managing the compliance and ethics considerations on AI requires organizations to carefully navigate the interplay between existing frameworks and emerging AI controls. As the legislative baseline for AI continues to evolve, it is vital for organizations to adopt an AI-by-design methodology similar to the privacy-by-design approach.[7] The AI-by-design approach refers to integrating AI controls and safeguards into the design and development of systems, products, or processes from the outset.[8] This approach not only mitigates risks but also maximizes the potential of AI in driving business growth and innovation.
Given the overlap of emerging AI legislative baseline controls with the existing compliance and ethics frameworks, every CCO must assess how to best leverage the organization’s current compliance and ethics frameworks when implementing AI. Aligning with and adhering to the company’s established principles, frameworks, policies, and guidelines will minimize disruption to business operations and ensure that AI deployment remains within the set risk appetite statements. This approach ensures that AI controls—in response to the emerging and existing legislation—are not seen as separate from existing compliance and ethics practices but rather as a natural extension and enhancement of them.
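One minimal way to picture the AI-by-design idea is a design-stage gate that blocks an AI use case until controls drawn from the existing compliance frameworks have been completed. The control names and the gating logic below are invented for illustration, not a recommended control set.

```python
# Hypothetical design-time controls an AI use case must satisfy before build approval.
REQUIRED_CONTROLS = {
    "privacy_impact_assessment",
    "bias_and_fairness_review",
    "risk_appetite_signoff",
    "third_party_due_diligence",
}

def design_gate(completed_controls):
    """Return (approved, missing) for an AI use case at the design stage."""
    missing = REQUIRED_CONTROLS - set(completed_controls)
    return (not missing, sorted(missing))

# A use case with open items is held back, with the gaps listed for follow-up.
approved, missing = design_gate(["privacy_impact_assessment", "risk_appetite_signoff"])
print(approved, missing)
```

Because the gate reuses controls the organization already operates, AI deployment stays within the established principles and risk appetite rather than spawning a parallel control framework.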
Harnessing compliance expertise for effective AI risk governance
Compliance teams possess the necessary tools and expertise to effectively report on regulatory risks in AI. Their role in building, implementing, and providing assurance for compliance frameworks equips them with the knowledge required to support boards in establishing robust AI risk governance. By developing comprehensive frameworks, aligning with the three lines of defense, embedding controls, and conducting monitoring and testing activities, compliance teams play a vital role in understanding risks and the state of regulatory compliance in the evolving landscape of AI. Their insights contribute to building a strong foundation for boards' oversight of ethical and responsible AI deployment, placing compliance knowledge at the core of shaping the future of AI risk governance.
Identifying risk shifts and integrating AI controls
From a board reporting perspective, it is fundamental that compliance reporting encompasses the explanation of any changes in risk levels attributed to AI implementation. Several examples of compliance risk areas directly relevant to AI deployment can highlight the importance of integrating AI controls:
- Code of conduct and ethics: Embedding AI considerations within the organization's ethical principles and integrity standards on transparency, fairness, and nondiscrimination, and discussing their implications for customer trust.
- Whistleblowing: Establishing reports on AI-related concerns or unethical behavior as part of the reporting-concerns framework and identifying underlying trends related to AI deployment.
- Environmental, social, and governance (ESG) objectives: Aligning compliance efforts on AI with social and governance objectives and focusing on fair customer outcomes and ethical standards.
- Data privacy and protection and records management: Addressing the protection of personal data, ensuring compliance with relevant privacy regulations during data collection, storage, processing, and usage, and establishing processes for the retention and disposal of AI-related records in accordance with legal and regulatory requirements.
- Antitrust and competition law: Assessing potential antitrust implications of AI systems—such as safeguards to prevent collusion—and ensuring pricing algorithms comply with competition law.
- Customer service and complaints: Safeguarding that AI systems treat customers fairly and without bias, addressing the impact of AI on service quality, and learning from complaints related to AI.
- Third-party risk: Ensuring compliance with AI-related contractual obligations and ethical standards when outsourcing AI functions and assessing the risks associated with AI solutions provided by third parties.
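One lightweight way to surface such risk shifts in board reporting is to record a pre- and post-AI-deployment risk rating per compliance domain and report only the deltas, largest first. The domains and ratings below are illustrative placeholders, not an assessment of any real organization.

```python
# Hypothetical risk ratings per compliance domain (1 = low .. 5 = high),
# before and after AI deployment.
risk_ratings = {
    "Code of conduct and ethics":    {"before": 2, "after": 3},
    "Whistleblowing":                {"before": 2, "after": 2},
    "Data privacy and protection":   {"before": 3, "after": 5},
    "Antitrust and competition law": {"before": 2, "after": 4},
    "Third-party risk":              {"before": 3, "after": 4},
}

def risk_shifts(ratings):
    """Return domains whose risk level changed after AI deployment, largest shift first."""
    shifts = [
        (domain, r["after"] - r["before"])
        for domain, r in ratings.items()
        if r["after"] != r["before"]
    ]
    return sorted(shifts, key=lambda s: abs(s[1]), reverse=True)

for domain, delta in risk_shifts(risk_ratings):
    print(f"{domain}: {'+' if delta > 0 else ''}{delta}")
```

Reporting only the deltas keeps the board's attention on where AI has actually moved the risk profile, rather than restating the full risk universe each cycle.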
An effective board compliance reporting framework is a crucial mechanism for transparently communicating the integration of AI controls within existing compliance and ethics frameworks. By providing clear insights into aligning AI initiatives with the organization’s compliance risk appetite and ethical principles, board compliance reporting facilitates effective decision-making and accountability for AI risks.
Maturing AI roadmap for compliance teams
Compliance and ethics topics are under heightened public interest due to the deployment of AI across organizations and industries. Compliance teams must intensify their efforts to harness the power of AI and understand its capabilities for achieving greater efficiency and effectiveness. Given the need for knowledge accumulation, it is necessary for all CCOs to spend time with their teams and AI experts to embark on their own AI transformation. Board members are eager to understand how compliance teams can automate their programs to effectively address AI-related compliance and ethics risks, and how they plan to leverage AI to enhance risk management and governance.
Building on the annual compliance plan
To ensure that compliance teams' internal needs are effectively incorporated into the company-level AI discussion from the outset, compliance reporting to the board should include a roadmap outlining the vision for internal use of AI and the desired outcomes. CCOs must ensure compliance AI roadmaps align with the overall enterprise AI strategy. It is recommended to develop a risk- and resource-based roadmap for AI implementation that becomes an integral part of the annual compliance plan the board approves. To initiate an AI roadmap for compliance, it is advisable to first review all compliance risk universe domains and assess their suitability for AI integration along the entire compliance lifecycle—protection, detection, monitoring, and response. Prioritization based on risks and resources can then identify suitable candidates for AI compliance workstreams.
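The risk- and resource-based prioritization described above can be sketched as a simple scoring exercise: rank candidate domains by the risk reduction they promise per unit of effort. The domains, lifecycle stages, scores, and scoring formula are hypothetical examples, not a recommended methodology.

```python
# Hypothetical candidates for AI integration across the compliance lifecycle.
# risk: potential risk reduction (1-5); effort: resources required (1-5).
candidates = [
    {"domain": "Transaction monitoring", "stage": "detection",  "risk": 5, "effort": 3},
    {"domain": "Policy attestations",    "stage": "protection", "risk": 2, "effort": 1},
    {"domain": "Complaints triage",      "stage": "response",   "risk": 4, "effort": 2},
]

def prioritize(items):
    """Rank candidates by risk reduction per unit of effort, highest first."""
    return sorted(items, key=lambda c: c["risk"] / c["effort"], reverse=True)

for c in prioritize(candidates):
    print(f"{c['domain']} ({c['stage']}): score {c['risk'] / c['effort']:.1f}")
```

In practice the weighting would reflect the organization's own risk appetite and resourcing; the value of the exercise is that the resulting ranked list slots directly into the annual compliance plan the board approves.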
Demonstrating value
Obtaining agreement on the roadmap is essential as part of the compliance reporting to the board. Moreover, it is important to regularly review the progress against this plan and discuss any obstacles encountered along the way. Taking a proactive approach to AI deployment is imperative, as it offers considerable efficiencies and demonstrates improved risk control methods across compliance and ethics domains—which stakeholders highly value. Continuous learning and adaptation are required to navigate the evolving AI regulatory landscape and keep pace with emerging technologies. By effectively deploying AI in their own teams, compliance teams can not only mitigate risks but also contribute to the overall success of AI deployment within their organizations.
Conclusion
Effective AI compliance and ethics board reporting has positive implications for companies, stakeholders, and society at large. It offers a multitude of benefits, including strengthened risk management, increased customer and stakeholder trust, and the ability to influence the dynamic regulatory landscape. Successfully navigating the new land of AI compliance and ethics reporting to boards requires CCOs to leverage existing tools and tell a holistic AI risk story. This involves mastering regulatory compliance for AI, maximizing compliance and ethics integration in AI, and maturing the AI roadmap for compliance teams.
As the AI landscape continues to evolve, it is imperative for CCOs and boards to remain agile, continuously refining their AI compliance and ethics reporting practices to meet new challenges and seize emerging opportunities. CCOs are critical in ensuring effective risk management in the AI era and can drive a real difference in value delivery. Each AI product, process, or tool requires compliance and ethics mastery to support sustainable and long-term AI-supported business growth.
By leveraging the recommendations outlined in this article, CCOs and boards can proactively navigate the new land of AI compliance and ethics reporting, ensuring effective risk management and demonstrating their commitment to a well-balanced and sustainable AI deployment strategy.
All opinions represented in this article are personal and belong solely to the author. Any such opinions do not necessarily represent the views of the author’s employer or any other persons, institutions, or organizations with whom the author may be associated.
Takeaways
- Artificial intelligence (AI) has accelerated boards' compliance and ethics reporting to unprecedented levels of importance.
- Chief compliance officers (CCOs) play a vital role in ensuring effective risk oversight for AI and can drive a real difference for sustainable AI-supported business growth.
- Existing compliance and ethics tools can be repurposed to effectively navigate the new land of AI in compliance and ethics reporting to boards.
- Three instruments for reporting success involve mastering regulatory compliance for AI, maximizing compliance and ethics integration in AI, and maturing the AI roadmap for compliance.
- The ever-evolving AI risk landscape provides CCOs and compliance teams a unique opportunity to demonstrate their value for risk governance in organizations and enable board oversight.