In today’s rapidly changing digital landscape, artificial intelligence (AI) has emerged as a disruptive force with the potential to reshape entire industries and professions. Its ability to augment human capabilities, introduce intelligent automation, and analyze vast data sets has attracted many companies seeking to unlock new levels of innovation and efficiency.
Despite enormous opportunities, AI’s emergence has brought a slew of new risks and regulations to the private sector.[1],[2] Executives now face added complexity as they integrate new AI tools into existing business models and seek to balance potential benefits and threats. More than ever, an updated framework is needed to address risk management and ensure companies are prepared for the new era of digital innovation.
To combat threats in a compliant way, executives must embrace the enhanced and accelerated sharing of information across the organization, build awareness of new opportunities, and educate employees on cyber and related AI risks. Only by integrating transparency across the entire business function can leaders prepare their firms for what’s ahead.
While AI and other digital innovations offer immense promise for businesses, many firms have rushed to get involved without developing a comprehensive risk framework. Instead of acting immediately, an optimal strategy involves planning to address new and evolving risks thoughtfully and with agility while protecting employees and customers for the long term. Bringing together siloed teams into a broader compliance and ethics program is key to getting ahead of hidden risks and ensuring leaders are equipped with the necessary information to make informed decisions. Additionally, cybersecurity readiness must be a priority across the C-suite as cyber threats grow more sophisticated.
Rethinking the AI approach
For business leaders, the fear of missing out on potential AI benefits can lead to an accelerated adoption strategy that prioritizes speed at the expense of security, safety, and resilience. This failed AI approach often begins at the outset when executives neglect to examine potential opportunities through a risk management lens.
Decision-makers should begin their evaluation process by first asking, “What’s the real opportunity, and how do we define it?” Only when an organization has identified an opportunity or set of options can it identify and prioritize risks in terms of their short-term and long-term impact.
While unknowns abound in the AI space, most companies are not attuned to the novel perils of generative AI—a type of AI that can create new content or data in the form of text, images, audio, or code. Due to the intricate nature of prompt engineering skills required to avoid hallucinations and generate accurate results, most employees currently lack the skills to use these tools effectively.
To harness the full power of generative AI, executives should implement proper awareness and education, especially for consumer-facing employees. Training employees to write prompts that yield relevant, trustworthy results—which can generate quality data and new insights—should be top of mind for executives.
Despite the power of these tools, companies must also make their employees aware that generative AI can produce inaccurate responses, even if used correctly. Users often don’t take the time to carefully examine what’s being generated. This can lead to substantial risks around the quality and accuracy of the outputs and an additional risk for the person who receives misleading results.
However, the risks of generative AI extend beyond inaccurate results. Prompt injections, for example, are a type of attack that tricks a language model into producing unwanted or malicious output. This involves injecting malware or untrusted text into the prompt, resulting in the system delivering hate speech, false information, or other unexpected behaviors.
Currently, there are not many good strategies for mitigating prompt injection risks. This means that companies must devote additional scrutiny when choosing and testing opportunities and framing the utilization of the tools in the context of those opportunities, with a recognition that some risk will always be part of a cyber strategy.
A challenging regulatory landscape
As cyberattacks and data breaches intensify, governments have enacted new regulations, such as China’s new generative AI measures—which create a regulatory framework about where and how companies and people can use generative AI measures—to keep pace with the changing digital space, including six new guidelines and rulings worldwide related to AI, according to Dun & Bradstreet data.[3] Extensive AI legislation—such as rules around the disclosure of copyrighted material used to train generative AI models—has also been proposed in the EU in their EU AI Act.[4]
While these laws are being passed to protect companies, consumers, and the public, many businesses have already devoted extensive resources to solving digital threats and struggle to understand why more requirements are needed on top of their existing structure. This dynamic—and variations among country and regional approaches to breach security reporting—has challenged many executives.
Currently, the EU requires data breach reporting for organizations subject to the General Data Protection Regulation (GDPR). This standard generally requires a company to notify a regulator within 72 hours if they experience a data breach that leads to the destruction, loss, alteration, unauthorized access, or disclosure of personal data unless the firm in question has deemed the incident to be of no or unlikely risk to an individual.
While the GDPR is comprehensive, the drawback of this uniform strategy is that nearly every incident affecting users’ data in Europe must be reported to a regulator unless it has been determined to be of no or unlikely risk to individuals. Although companies choose to report relevant incidents, they often don’t have enough information to determine if the breach is significant; they only know that it’s not without risk. In the early stages, they also don’t have a full scope of the materiality or the details of their risk mitigation plan.
Instead of focusing on questions such as how the breach occurred, who is affected, and how to mitigate future risk, the time and energy of business leaders are often focused on providing a report in a three-day time frame. This process often shifts resources from focusing on containing, dealing with, or understanding the threat and can delay important root cause analyses, rapid remediation, prevention, and other key actions.
Much like the EU, the U.S. Department of Health and Human Services Breach Notification Rule under the Health Insurance Portability and Accountability Act (HIPAA) is a challenging regulation for U.S. organizations offering health plans to their employees and family members, as well as to other healthcare entities that are comprehensively regulated under HIPAA. The regulation’s complexity and the difficulty of knowing when an incident is significant enough to report places substantial burdens on American firms. And, for those that are public companies, since September, they have been facing new four-day disclosure considerations for material cybersecurity incidents under the final Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure rules.
While regulations have added to the complexity of the digital space and further complicated the role of AI in the workplace, the upside is that their demands on companies have forced executives to rethink their approach to interorganizational collaboration.
A holistic approach to risk management
Historically, various risk and compliance functions at a company—including privacy, technology, risk management, and ethics—were divided into siloed departments and overseen by different teams. Although new technological developments have increased the level of risk in the workspace, these tools—as well as the regulations that accompany them—have made business leaders acutely aware of the need for rapid and agile collaboration across an organization.
Unlike other past threats, AI and its accompanying regulatory framework have served as catalysts to bridge the gap between separate departments. With emerging cyber, data, AI, and external environmental, social, and governance reporting risks, organizations now realize that harmonization of requirements across these disparate functions is essential.
To create a comprehensive, unified approach that simultaneously addresses all these risks, executives should first ask: What are our current data, technology, and regulatory risks? And how can we facilitate better dialogue around these topics? Posing these questions forces leaders to come together and holistically discuss their current problems.
Cyber risk should be viewed through the lens of a broader compliance and ethics program. If an organization needs to use data in a new way or wants to apply an emerging technology, it needs to view the risks associated with the project through a common lens.
This shared framework should help produce an effective and holistic program that ties to regulatory and technical risk requirements. In the absence of this structure, it becomes difficult for leadership, as well as the board, to address the core areas that deserve their attention.
For example, if an organization uses its data to train an AI system, it should be aware of the potential inherent biases in its data sets and work to create a model that emphasizes fairness throughout the tool’s lifecycle. This will require teams with cyber expertise, which was traditionally a technology function; privacy expertise, which often was a legal function; traditional compliance and ethics oversight, which typically is a corporate compliance function; and risk management, which was typically separate from an organization.
Companies can effectively prepare for the new digital environment by coming together and rapidly exchanging critical information and insights. While firms cannot permanently eliminate the risk of cyber threats, reduce, slow the pace of regulations surrounding reporting and disclosures, or predict what technology will be developed in the future, they can emphasize transparency and promote employee education to ensure workers are well prepared to adapt to the risks ahead.
Takeaways
-
To comply with new regulations and changing landscapes around artificial intelligence (AI), executives must embrace the enhanced and accelerated sharing of insights across the organization.
-
Executives must help build awareness of new opportunities that are available with AI.
-
Compliance executives should educate their employees on the risks associated with AI and how different issues can impact them and the existing processes they manage.
-
Transparency is critical; implementing this into the day-to-day function of a business organization can better implement solutions and overall understanding of AI.
-
Do not rush to implement new AI technology without properly vetting and creating a compliance and ethics framework to understand the new technology.