Nakis Urfi (nakis.urfi@babylonhealth.com) is Product Compliance Officer in Dallas, Texas, Shawn E. Marchese (shawn.marchese@babylonhealth.com) is Global Head of Compliance and Risk in Austin, Texas, and Keith Grimes (keith.grimes@babylonhealth.com) is Director of Clinical Product Management in London for Babylon.
Imagine you are a leader at an enormous healthcare organization, one where each functional area operates within its own silo: operations, client support, technology, etc.—especially technology. There is little transparency and a lack of communication between teams, and little to no documentation or shared frameworks. The services and technology the company offers are too large and complex for any single person or team to understand how they truly operate. Maybe you don’t have to imagine this at all; maybe this is where you are today.
Suddenly, reports start to emerge that your company’s technology is producing unintended consequences. Monitoring devices, software, and robots providing healthcare to individuals are not functioning as they’re meant to, causing negative patient outcomes, inaccurate advice, and overall confusion. No one at your organization knows where to start to identify and address the root cause of the problem, because the technology is everywhere across the organization, housed in a variety of software systems and cloud solutions. Not only that, but the organization you work for is integrated with other health systems, and shutting down the service completely would have tremendous and unpredictable impacts on an incalculably large population.
A nightmare? A cautionary tale? Some new sci-fi horror movie on Netflix? Maybe.
But this is a potential scenario that could occur in the near future. There are currently insufficient regulations in place to prevent such an occurrence, and by the time this hypothetical scenario becomes a reality, regulations may still not be sufficient to mitigate the risks. With any luck, it won’t happen. But if it does, it will likely trigger a host of new conversations and rules on the part of industry thought leaders, elected officials, and regulators. Legislators will hold hearings to determine why it happened and what can be done to prevent its recurrence; industry will be called to account; and, finally, stricter oversight and regulation will be implemented that may have a chilling effect on innovation as a whole. Compliance professionals will study and discuss the case in conferences and magazine articles, examining the many ways better oversight might have prevented the worst if only—if only—we had gotten involved sooner.
Why wait for the worst to happen?
What is artificial intelligence?
Before we get into questions of artificial intelligence (AI) oversight, it’s worth defining AI. However, there is no single, universally accepted definition of “artificial intelligence”—nor indeed even of “intelligence.” For our purposes, we will define AI as a set of advanced technologies that enable machines to carry out highly complex tasks effectively—tasks that would require intelligence if a person were to perform them. These include decision-making, pattern recognition, machine learning, and natural language processing, which allow machines to operate more like the human mind.
AI is being used in more places than most people realize and can be used anywhere automation can occur. Media is full of advertisements for healthcare companies touting AI-powered capabilities.
If used responsibly, AI can help bridge gaps in care, because the global demand for healthcare exceeds the supply of human brains available to meet it.[1] AI is being used in various ways in healthcare today, including predictive analytics, risk analyses, health recommendations, claims processing, clinical documentation, revenue cycle management, and much more.
Even closer to home for this audience, AI capabilities are making life easier for compliance teams as well. AI can streamline compliance monitoring by helping mitigate the impact of false positives in exclusion screenings, enhance internal audit capabilities to make audits more data-driven and effective, and even assist with regulatory change management—with natural language processing developing to a point where it can quickly identify changes in regulations. Finally, chatbots can help organizations answer simple questions and retrieve documents such as policies and procedures.
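To make the exclusion-screening use case concrete, here is a minimal sketch of how similarity scoring can cut down on false positives compared with loose substring matching. This is an illustration only: the exclusion list, names, and threshold below are hypothetical, and a real program would screen against the full federal and state exclusion data sets using richer identifiers (e.g., NPI, date of birth, address), not names alone.

```python
# Hedged sketch: similarity-based exclusion screening using only the
# Python standard library. EXCLUSION_LIST and the 0.85 threshold are
# illustrative assumptions, not a production configuration.
from difflib import SequenceMatcher

EXCLUSION_LIST = ["John A. Smith", "Maria Gonzalez", "Robert Chen"]  # hypothetical

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return exclusion-list entries whose similarity to `name` meets
    the threshold. Scoring similarity, rather than flagging any partial
    string overlap, is one way to reduce false-positive hits that a
    compliance analyst would otherwise have to clear manually."""
    hits = []
    for entry in EXCLUSION_LIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# A close spelling variant scores high and is surfaced for human review,
# while a merely similar-sounding name falls below the threshold.
print(screen("Jon A. Smith"))
print(screen("Jonathan Smithers"))
```

In practice, the threshold is the compliance trade-off: set it too high and true matches slip through; too low and analysts drown in false positives—which is exactly the tuning problem AI-assisted screening tools aim to manage.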