Artificial Intelligence Act: A European approach

Patrick Wellens is currently a Compliance Manager for a division of a multinational pharma company based in Zurich, Switzerland. He is a Board Member of Ethics and Compliance Switzerland and co-chair of its Working Group Life Sciences.

Artificial intelligence (AI) is a technology that mimics human intelligence to perform tasks and can iteratively improve itself based on the information it collects.[1] AI is used widely across technologies and industries — some applications people notice, others they do not. These include (but are by no means limited to): self-driving cars (automotive industry), more accurate diagnosis of certain diseases (healthcare industry), product search recommendations (e-commerce and marketing), chatbots (customer service), robotic process automation (manufacturing), facial recognition (defense industry), and talent acquisition (corporations).

AI optimizes operations and resource allocation and improves the prediction and analysis of large datasets. At the same time, AI can create new risks or negative consequences for individuals and society. AI technology can be misused to provide powerful tools for manipulative, exploitative, or social control practices. The European Union (EU) Artificial Intelligence Act therefore defines a risk-based framework that differentiates AI systems posing unacceptable risk, high risk, or low or minimal risk, and sets minimum standards with which AI systems must comply.[2]
