Hey AI, tell me about privacy in healthcare and research

General-purpose artificial intelligence (AI) became widely available with OpenAI's introduction of ChatGPT, a chatbot, on November 30, 2022. The technology saw rapid public adoption, reaching 100 million users within two months.[1] However, the quick uptake and pace of AI development led to concerns, and in 2023 OpenAI's CEO called for global regulation of generative AI during congressional testimony.[2]

Additionally, more than 1,000 technology leaders and researchers called for a pause on advanced AI development, citing its risks.[3] Given AI's rapid development and adoption, some risks may still be unknown; guardrails in the form of regulatory frameworks may therefore be required.

The EU has been leading efforts to develop the world's first comprehensive AI regulatory framework through its AI Act, part of its broader digital strategy. Proposed by the European Commission in April 2021, the framework classifies the development and use of AI by the risk posed to a person's health, safety, or fundamental rights.[4] The EU AI Act, passed by the European Parliament in March 2024, is anticipated to be fully in force by mid-2026.[5]

The U.S. government has also taken preliminary steps to address AI by publishing a draft Blueprint for an AI Bill of Rights, which outlines five principles and associated practices to promote trustworthy AI, including privacy standards and rigorous testing before AI becomes publicly available.[6] President Joe Biden also issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence on October 30, 2023.[7] The directive promotes new safety and security standards while protecting privacy and advancing equity and civil rights, among other aims. As calls for regulation grow, the EU and U.S. announced a collaborative effort to develop a voluntary AI code of conduct that would harmonize practices and set standards and principles for AI development and governance while formal regulations work their way through legislative processes.[8]

In the U.S. healthcare industry, generative AI presents many opportunities, but it also presents risks if it is not carefully vetted, implemented, and monitored. Among the major risks that compliance and privacy professionals and businesses must pay attention to is the potential for privacy violations involving regulated data. Additional concerns include security, protection of intellectual property and proprietary information, and ethics.[9] As compliance and privacy leaders in healthcare and academic research settings assess risks and policy gaps and develop future work plans, the following are some potential considerations for AI.
