Medicare Will Start Paying AI Companies a Share of Any Claims They Automatically Reject

It’s long been the practice of private health insurers to require “prior authorization” before you can get the treatment you need. Often, they’ll try to deny as many of these claims as possible, including with the use of AI models.

Government-backed plans like Medicare, however, have tended to cover what private insurers don’t, and without the laborious application process.

But that could be poised to change. The Centers for Medicare and Medicaid Services said it’ll experiment with its own version of prior authorization by using AI models to screen claims, as part of its new belt-tightening program called the Wasteful and Inappropriate Service Reduction model, or WISeR.

And get this: as the New York Times reports, the AI companies selected for this experiment will get paid a share of the money they save by blocking people from their healthcare. Those savings could amount to billions of dollars over the next six years, and that’s assuming the program doesn’t get expanded.

This basically deputizes the AI companies as a “whole new bounty hunter,” David A. Lipschutz, co-director of the Center for Medicare Advocacy, told the NYT. In other words, they’re clearly incentivized to deny claims above all else.

For now, the program, which begins in January, will be limited to six states: New Jersey, Ohio, Oklahoma, Texas, Arizona, and Washington. Officials assert that the AI tools will only be used to judge claims for about a dozen types of procedures the agency deems wasteful or of little benefit, including steroid shots to relieve pain, per the NYT. Moreover, the models won’t be used to review emergency services or hospital stays, said Abe Sutton, director of the Center for Medicare and Medicaid Innovation.

Experts aren’t convinced that it’ll stay that way for long. “You’re kind of left to wonder, well, where does this lead next?” Vinay Rathi, an Ohio surgeon and an expert in Medicare payment policy, told the NYT. “You could be running into a slippery slope.”

Officials also promise that the final decision on denials (or in industry parlance, “non-affirmations”) will be made by an “appropriately licensed human clinician, not a machine,” and Sutton said that there’ll be penalties for wrongful rejections.

The conduct of private insurers, however, suggests that keeping humans in the loop isn’t a safeguard against rampant abuse. A 2023 ProPublica investigation found that Cigna used an algorithm to automatically examine and deny claims, denials that human doctors then signed off on without even looking at patient records. United Healthcare, meanwhile, allegedly used an AI algorithm called “nH Predict” to reject claims despite a reported error rate of over 90 percent.

Even setting aside any possible nefarious intentions, is there evidence that the AI models will be any good as objective arbiters of who gets to see a doctor and who doesn’t? As the NYT explains, these systems typically look at a patient’s history to see if they meet an insurer’s criteria, such as having tried physical therapy before getting back surgery.
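To make the mechanics concrete, here’s a deliberately toy sketch of that kind of criteria check. Every field name, threshold, and criterion below is invented for illustration; this is not WISeR’s or any real insurer’s actual logic:

```python
# Hypothetical illustration only: a toy rules-based screen of the kind the
# NYT describes. All field names and thresholds are invented; they do not
# reflect WISeR's or any real insurer's actual criteria.

def screen_back_surgery_claim(history: dict) -> str:
    """Check a claim against insurer-style criteria drawn from patient history."""
    tried_physical_therapy = history.get("physical_therapy_weeks", 0) >= 6
    imaging_on_file = history.get("has_mri", False)

    if tried_physical_therapy and imaging_on_file:
        return "affirm"
    return "non-affirm"  # the industry's euphemism for a denial

# A single misspelled field in the record (a typo, in effect) flips the outcome:
print(screen_back_surgery_claim({"physical_therapy_weeks": 8, "has_mri": True}))  # affirm
print(screen_back_surgery_claim({"physical_therapy_weks": 8, "has_mri": True}))   # non-affirm
```

Note how brittle the record-matching is even in this cartoon version, which brings us to the research.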

But recent research has shown that AI systems can erroneously tell patients not to seek medical care over something as insignificant and random as a single typo in a document. The same work found that the models exhibited an alarming gender bias, advising women against seeing a doctor more often than men.

Don’t worry, though. We’re sure all this will get ironed out as the AI models are prematurely empowered to decide whether you get healthcare or not, while their owners get paid based on how many people they reject.

More on AI: The TV Show Host Who’s Now In Charge Of Medicare Wants To Replace Doctors With AI


