On April 5, 2023, the Centers for Medicare & Medicaid Services (CMS) published an expansive final rule amending Medicare Advantage (MA) regulations related to, among other things, coverage criteria and prior authorization.[1] Publication of the final rule was applauded by healthcare providers, which for years have expressed frustration about MA organizations (MAOs) inappropriately delaying and denying coverage for medically necessary care.[2] The rule took effect on June 5, 2023, with the provisions in the final rule applicable to coverage beginning January 1, 2024.[3]
On February 6, 2024—nearly a year after the final rule was published—CMS issued a set of FAQs to clarify how it expects MA plans to comply with the new regulations.[4] This article provides an overview of the guidance set forth in those FAQs.
MAOs cannot rely solely on AI tools in making determinations
Under the final rule, MAOs must comply with national coverage determinations (NCDs), local coverage determinations (LCDs), and the general coverage and benefit conditions set forth in traditional Medicare laws.[5] In preamble commentary to the final rule, CMS explained that MAOs must ensure that they are making these determinations “based on the circumstances of the specific individual, as outlined at [42 C.F.R.] § 422.101(c), as opposed to using an algorithm or software that doesn’t account for an individual’s circumstances” (emphasis added).[6]
In the FAQs, CMS clarified that MAOs may use algorithms or other forms of artificial intelligence (AI) to assist in making coverage determinations, but only if the algorithm or AI “complies with all applicable rules for how coverage determinations by MA organizations are made.”[7] Specifically, CMS explained that “compliance is required with all of the rules at [42 C.F.R.] § 422.101(c) for making a determination of medical necessity, including that the MA organization base the decision on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient’s medical history, the physician’s recommendations, or clinical notes would not be compliant with § 422.101(c).” Thus, while an MAO may use an algorithm to predict a patient’s potential length of stay when deciding whether to terminate a patient’s post-acute care stay, “that prediction alone cannot be used as the basis to terminate post-acute care services . . . the patient must no longer meet the level of care requirements needed for the post-acute care at the time the services are being terminated, which can only be determined by re-assessing the individual patient’s condition prior to issuing the notice of termination of services” (emphasis added).
Referencing the nondiscrimination requirements of the Affordable Care Act, the FAQs also express concern that “algorithms and many new artificial intelligence technologies can exacerbate discrimination and bias” and advise that MAOs, “prior to implementing an algorithm or software tool, ensure that the tool is not perpetuating or exacerbating existing bias, or introducing new biases.”
The alleged use of AI in making coverage determinations is the subject of pending litigation against two payers—UnitedHealth Group and Humana—in two separate lawsuits filed late last year.[8] The complaint against Humana alleges that Humana uses an AI tool, nH Predict, “to predict how much care an elderly patient ‘should’ require but overrides real doctors’ determinations as to the amount of care a patient in fact requires to recover. As such, Humana makes coverage determinations not based on individual patient’s needs, but based on the outputs of the nH Predict AI Model, resulting in the inappropriate denial of necessary care prescribed by the patients’ doctors.”[9] The complaint further alleges that “Humana’s implementation of the nH Predict AI Model resulted in a significant increase in the number of post-acute care coverage denials.” Raising similar allegations, the complaint against UnitedHealth Group claims that the “nH Predict AI Model spits out generic recommendations that fail to adjust for a patient’s individual circumstances and conflict with basic rules on what Medicare Advantage plans must cover.”[10]
Though they are in the very early stages, these lawsuits demonstrate that, while Congress continues to debate how to regulate AI across various industries, such technologies are already being deployed in the healthcare space.
The lawsuits also come at a time when providers are seeing an increase in MA denials. A recent report published by the American Hospital Association and Syntellis identified a 55.7% jump in overall revenue reductions related to MA denials from January 2022 to July 2023.[11] Denial-related revenue reductions from commercial payers also rose over the same period, but by just 20.2%.