Technology’s rapid development and use in healthcare offers potentially significant benefits for behavioral health patients but also raises ethical and compliance concerns. The most recent advance is artificial intelligence (AI). Unfortunately, laws, rules, and regulations do not change as quickly as technology. Compliance professionals will want to keep in close contact with departments, including behavioral health, that are considering using AI, to address both ethical and confidentiality concerns. Mental health stigma is alive and well and can create problems with employment and other activities for those living with mental health conditions. When compliance collaborates with departments such as behavioral health, those concerns can be minimized.
The pandemic brought mental health issues to the forefront. Both the Biden administration and the Substance Abuse and Mental Health Services Administration (SAMHSA) have put forth plans to address identified issues. The Biden administration is working to improve insurance coverage for mental health, while SAMHSA is working to strengthen the release of information requirements, especially for substance use disorder. The use of AI will factor in both plans as a potentially cost-effective way to address mental health concerns.
So, what is AI? According to IBM, “Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.” More simply, it is having computers “think” like humans.
According to research, AI appears to offer several improvements in treating mental health conditions. A study published in the Journal of Medical Internet Research found that AI “was associated with significant improvements in substance use, confidence, cravings, depression, and anxiety.” The authors believe the benefits of AI lie in its ability to compare and analyze large amounts of data as well as to increase “equity and access” to mental health treatment. One identified disadvantage is poor predictive accuracy for ethnic groups that may lack access to mental healthcare: lack of access leads to a lack of data for AI to analyze, making it less likely to accurately predict issues in those populations.
Several studies of groups for which data is available found a high level of accuracy in predicting suicidal thoughts as well as other significant mental health issues. A Vanderbilt study found that, with access to medical records, demographics, and admissions information, AI had an 80% accuracy rate in predicting whether an individual would die by suicide. Given these benefits, AI seems worth pursuing; however, numerous ethical and privacy issues must first be considered and addressed.
The Dinerstein v. Google case is an excellent example of potential privacy and ethical issues. Matt Dinerstein sued because his de-identified information was given to Google under a data use agreement as part of a research project; he argued that Google’s access to information from other applications would allow that information to be re-identified. As electronic records and health apps gather individuals’ health information, and as smartphones geolocate their users, re-identification becomes more of a reality and de-identification more of a myth.
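The re-identification risk described above can be illustrated with a small, hypothetical sketch of a linkage attack: a “de-identified” clinical dataset still carries quasi-identifiers (ZIP code, birth year, sex) that can be joined against an auxiliary dataset, such as data gathered by a consumer app. All records, names, and field layouts below are fabricated for illustration only.

```python
# Hypothetical linkage (re-identification) attack sketch.
# "De-identified" records retain quasi-identifiers that, combined with
# an outside dataset, can uniquely point back to a named individual.

deidentified_records = [
    {"zip": "60615", "birth_year": 1984, "sex": "M", "diagnosis": "F32.1"},
    {"zip": "60615", "birth_year": 1990, "sex": "F", "diagnosis": "F41.1"},
]

auxiliary_data = [  # e.g., from an app that also collects geolocation
    {"name": "J. Smith", "zip": "60615", "birth_year": 1984, "sex": "M"},
    {"name": "A. Jones", "zip": "60641", "birth_year": 1975, "sex": "F"},
]

def link(records, aux):
    """Match records on quasi-identifiers; a unique match re-identifies."""
    matches = []
    for r in records:
        hits = [a for a in aux
                if (a["zip"], a["birth_year"], a["sex"])
                == (r["zip"], r["birth_year"], r["sex"])]
        if len(hits) == 1:  # a unique match is a probable re-identification
            matches.append((hits[0]["name"], r["diagnosis"]))
    return matches

print(link(deidentified_records, auxiliary_data))
# → [('J. Smith', 'F32.1')]
```

Even this toy example shows why removing only direct identifiers (names, record numbers) is insufficient once multiple data sources can be combined.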
Another ethical situation involved an organization that used AI to provide counseling without telling patients. In that case, a mental health entity used a chatbot to deliver treatment to 4,000 patients without informing them that a human was not providing the service. There is also concern that AI may be vulnerable to misuse, intentional or not, through changes in how data is input into the system.
As previously mentioned, regulations have not kept pace with rapid technological advances. The U.S. Food and Drug Administration (FDA) stated that the current regulatory process is “not equipped to handle the speed of change” needed to ensure safety and effectiveness. Both the U.S. Department of Health and Human Services Office for Civil Rights (OCR) and SAMHSA have promised new regulations; however, several years later, none have been forthcoming. Compliance professionals can only hope that when new regulations are published, they will also address AI concerns. In the meantime, the lack of regulation leaves healthcare entities to work through safety, ethical, and privacy issues on their own, and it is crucial that organizations address them.

Privacy is a particular concern because, to analyze and predict, AI must draw on data from multiple sources, including the patient’s medical record. This places far more information about a patient in the electronic environment, which has proven vulnerable to attack by hackers. Ransomware is prevalent within healthcare because of the large amount of information available today; credit card companies used to be the preferred avenue of attack, but changes in regulation and practice have made that information harder to access. Because AI increases both the amount of information and the number of patients whose information is available, it is a likely further target of ransomware attacks.
Several ethical and legal concerns have been identified in studies by various researchers, including those from the World Health Organization (WHO). Most of these studies followed the Trump administration’s executive order covering five areas related to AI, which was intended to increase the development and use of AI in healthcare. WHO identified several concerns with the use of AI and listed ethical principles that should be applied when developing and using it. The concerns centered on data bias that could lead healthcare practitioners to rely on inaccurate information; information provided to an end user that is erroneous but appears accurate and legitimate; and lack of consent for the use of patient information in training an AI system.
The ethical principles include:
Promote human well-being, human safety, and the public interest
Ensure transparency, explainability, and intelligibility
Foster responsibility and accountability
Ensure inclusiveness and equity
Promote AI that is responsive and sustainable
In other words, anyone using AI should address the concerns identified, find ways to minimize or eliminate bad data and inappropriate uses of data, and begin gathering a broader spectrum of data so that all populations share in the benefits of AI.
Compliance and AI risk
How do compliance professionals help guide the ethical use of AI and identify ways to protect patient privacy? First, compliance should be involved in drafting policies that keep pace with the rapidly changing technological environment in mental health. Compliance professionals can also provide education on the benefits and detriments of AI and how to avoid its pitfalls. The use or potential use of AI should be included in the annual compliance risk assessment and/or enterprise risk assessment, and monitoring and auditing should occur to ensure the ethical use of AI and the protection of patient information. From a privacy perspective, new de-identification methods may need to be created to minimize the ability to re-identify a patient using multiple sources of information. It will be important to monitor OCR regulations to see how they strengthen patient privacy protection and to implement any new regulations as quickly as possible. Working with OCR and SAMHSA toward realistic and beneficial methods and regulations should also be a priority. As compliance professionals, we are charged with ensuring the ethical conduct of our healthcare institutions and protecting our patients’ information to the fullest extent possible. These steps will help us close the gap with the rapidly changing technological environment.
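To make the de-identification point concrete, a minimal sketch (not a compliant implementation) of Safe Harbor-style generalization follows: it removes a direct identifier, truncates ZIP codes to three digits, and caps ages at 90, reflecting two of the 18 HIPAA identifier rules. The record structure and field names are hypothetical.

```python
# Minimal sketch of Safe Harbor-style de-identification (illustrative only;
# real compliance requires handling all 18 HIPAA identifier categories).

def generalize(record):
    out = dict(record)
    out.pop("name", None)                  # remove direct identifier
    out["zip"] = record["zip"][:3] + "00"  # keep only first 3 ZIP digits
    out["age"] = min(record["age"], 90)    # collapse ages over 89 into one bucket
    return out

patient = {"name": "J. Smith", "zip": "60615", "age": 93, "dx": "F32.1"}
print(generalize(patient))
# → {'zip': '60600', 'age': 90, 'dx': 'F32.1'}
```

As the Dinerstein discussion above suggests, even rules like these may not prevent re-identification once outside data sources are combined, which is why compliance should evaluate de-identification methods rather than assume they are sufficient.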
The need to monitor the privacy of patient information is heightened by the increased use of technology, especially artificial intelligence (AI).
Current HIPAA regulations have not kept up with the changes in technology.
AI appears to be beneficial in treating certain mental health conditions.
As organizations look to implement AI, compliance professionals need to work with information security personnel to ensure patient information is protected both within and outside the organization.
Like many technological advances, AI holds promise—especially in the behavioral health setting—but also poses challenges for compliance professionals.