ChatGPT, Bard, and other artificial intelligence (AI) technologies will affect many industries and revolutionize the way we work. To name a few:
- AI can generate automated customer service responses by answering FAQs.
- AI can generate unique content for marketing campaigns, such as social media posts or emails.
- AI can aid students' learning through digital research tools.
- AI can help legal professionals draft legal documents, contracts, and communications to clients, opposing counsel, and courts.
- AI can help human resources departments by improving the onboarding experience for new hires, scheduling training sessions, answering common questions about insurance, payroll, and benefits, or identifying candidates who would be a great fit based on industry experience or skill set.
Does this mean that ChatGPT can be used without limitations? To answer this question, it is important to understand the regulatory framework around the use of AI.
What is the regulatory framework for the use of AI?
The use of AI is defined in several laws, directives, and guidelines. Here are some of the most significant.
Artificial Intelligence Act
The Artificial Intelligence Act is based on several principles.[1] The use of AI technology must be in line with European Union (EU) values and fundamental rights.[2] The Charter of Fundamental Rights of the EU defines the universal values of human dignity, freedom, equality, and solidarity; it is based on the principles of democracy and the rule of law. It also contains principles on nondiscrimination and gender equality.
AI technology must comply with the existing General Data Protection Regulation (GDPR),[3] the EU Data Governance Act,[4] the EU strategy for data,[5] and the AI Liability Directive.[6]
The Artificial Intelligence Act applies a risk-based approach, classifying AI services as posing unacceptable, high, or low/minimal risk. Forbidden are AI services that materially distort a person's behavior in a manner that causes (or is likely to cause) physical or psychological harm to that person or another person; that exploit the vulnerabilities of a specific group of persons due to their age or physical or mental disability; or that evaluate or classify the trustworthiness of natural persons over time based on their social behavior or known/predicted personality characteristics, where the resulting social score leads to detrimental or unfavorable treatment of certain natural persons or groups.
AI systems also have a transparency obligation. AI systems shall be designed and developed in such a way that their operation is sufficiently transparent to enable users to interpret the systems’ output.
AI systems should have human oversight. High-risk AI systems should be designed and developed in such a way that natural persons can effectively oversee them while the AI system is in use, with the aim of preventing or minimizing risks to health, safety, or fundamental rights. The Artificial Intelligence Act defines the following requirements for high-risk AI systems:
- A risk management system shall be established.
- Data governance shall ensure that data sets are relevant, representative, free of errors, and complete in view of the intended purpose of the AI system.
- Technical documentation shall exist.
- AI systems shall be designed and developed to ensure their operation is sufficiently transparent for users to interpret system output.
- AI systems shall be designed and developed so that they can be overseen by natural persons while in use.
EU guidelines on ethics in AI
The EU guidelines on ethics in AI, published in 2019, later formed the foundation for the Artificial Intelligence Act.[7]
GDPR
The GDPR was adopted in 2016 and went into effect in May 2018. The GDPR regulates the principles to be followed when processing personal data. For example, under GDPR Article 22, a data subject has the right not to be subject to a decision based solely on automated processing of their data, such as by a chatbot.
(Revised) Product Liability Directive
The Product Liability Directive (PLD), introduced in 1985, is a common set of rules that harmonizes consumer protection throughout the EU at an equal level. It uses the concept of no-fault liability (strict liability: producers are responsible for defective products regardless of whether the defect was their fault) for damage caused by defective products.
To be compensated under the PLD, the injured person bears the burden of proving that the product was defective, that damage was suffered, and that a causal relationship exists between the defect and the damage.
The revised PLD sets a wider definition of product and clarifies that software (including software updates) must be considered a product within the scope of the directive. The revised PLD would apply if a defective product causes physical harm, property damage, or data loss. If manufacturers and software developers do not mitigate cybersecurity risks, this lack of safety must be considered by a court when evaluating whether a product is defective. The revised PLD also alleviates the burden of proof for victims under certain circumstances.
AI Liability Directive
One of the most crucial functions of civil liability rules is to ensure that victims of damage can claim compensation. By guaranteeing effective compensation, these rules contribute to protecting the right to an effective remedy and a fair trial (Article 47 of the EU Charter of Fundamental Rights) while also giving potentially liable persons an incentive to prevent damage and avoid liability. With the AI Liability Directive, the commission aims to ensure that victims of damage caused by AI enjoy an equivalent level of protection under civil liability rules as victims of damage caused without the involvement of AI.[8]
EU directive on unfair commercial practices
Directive 2005/29/EC regulates unfair commercial practices in business-to-consumer transactions. It applies to all commercial practices that occur before, during, and after a business-to-consumer transaction has taken place.[9]