
Oversight considerations for ethical artificial intelligence

Nakis Urfi (nakis.urfi@babylonhealth.com) is Product Compliance Officer in Dallas, Texas, Shawn E. Marchese (shawn.marchese@babylonhealth.com) is Global Head of Compliance and Risk in Austin, Texas, and Keith Grimes (keith.grimes@babylonhealth.com) is Director of Clinical Product Management in London for Babylon.

Imagine you are a leader at a very large healthcare organization, one where each functional area operates within its own silo: operations, client support, technology, etc.—especially technology. There is little transparency and a lack of communication between teams, and little to no documentation or shared frameworks. The services and technology the company offers are too large and complex for any single person or team to understand how they truly operate. Maybe you don’t have to imagine this at all; maybe this is where you are today.

Suddenly, reports start to emerge that your company’s technology is producing unintended consequences. Monitoring devices, software, and robots providing healthcare to individuals are not functioning as they’re meant to, causing negative patient outcomes, inaccurate advice, and overall confusion. No one at your organization knows where to start to identify and address the root cause of the problem, because the technology is everywhere across the organization, housed in a variety of software systems and cloud solutions. Not only that, but the organization you work for is integrated with other health systems, and shutting down the service completely would have tremendous and unpredictable impacts on an incalculably large population.

A nightmare? A cautionary tale? Some new sci-fi horror movie on Netflix? Maybe.

But this is a potential scenario that could occur in the near future. There are currently insufficient regulations in place to prevent such an occurrence, and by the time this hypothetical scenario becomes a reality, regulations may still not be sufficient to mitigate the risks. With any luck, it won’t happen. But if it does, it will likely trigger a host of new conversations and regulations on the part of industry thought leaders, elected officials, and regulators. Legislators will hold hearings to determine why it happened and what can be done to prevent its recurrence; industry will be called to account; and, finally, stricter oversight and regulation will be implemented that may have a chilling effect on innovation as a whole. Compliance professionals will study the case in conferences and magazine articles, discussing the many ways better oversight might have prevented the worst if only—if only—we had gotten involved sooner.

Why wait for the worst to happen?
