
Last week, while at the DMV waiting for my number to be called, the woman beside me turned and asked how long renewals usually take. After that, we made the usual small talk, commiserating about the job market, traffic, and the impending end of summer, before she shared what she was really struggling with. Her mother, who was recovering from a stroke, had her rehab cut short after the insurer said she had "plateaued." The care team disagreed, and she spent the next few weeks appealing, emailing, and calling to get her care restored. Her coverage was eventually reinstated, but in that gap, her mother lost function she's still fighting to regain.
Unfortunately, her story isn't an outlier. Across the country, insurers are embedding algorithms in prior authorization and claims review. While these tools promise efficiency, their swift blanket denials often hit seniors, low-income patients, people of color, and the chronically ill first. Lawsuits against major insurers, new federal pilots, and emerging guardrails create an opportunity to set the rules before code becomes policy by default.
What's Actually Happening Behind the Scenes
Insurers are increasingly using automated tools to allocate care: predicting lengths of stay, flagging "low-value" services, and batch-screening claims against diagnosis codes. In practice, this means coverage decisions shift from your doctor's clinical judgment to an opaque algorithmic score that patients never see.
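To make that mechanism concrete, here is a deliberately minimal sketch, in Python, of how a rule-based batch screen can issue denials from codes alone. Every name, code, and lookup table below is an illustrative assumption, not any insurer's actual system.

```python
# Hypothetical illustration only: a toy batch screen that denies any claim
# whose procedure code is not on an approved list for its diagnosis code.
# Real systems are far more complex, but the failure mode is the same:
# no one reads the chart before the denial goes out.

# Assumed policy table: diagnosis code -> procedure codes deemed "necessary"
APPROVED = {
    "I63.9": {"97110", "97112"},  # stroke -> therapeutic exercise, neuromuscular re-ed
}

def screen_claim(claim: dict) -> str:
    """Return 'approve' or 'deny' from codes alone, with no record review."""
    allowed = APPROVED.get(claim["diagnosis_code"], set())
    return "approve" if claim["procedure_code"] in allowed else "deny"

claims = [
    {"id": 1, "diagnosis_code": "I63.9", "procedure_code": "97110"},
    {"id": 2, "diagnosis_code": "I63.9", "procedure_code": "97530"},  # denied in bulk
]
for claim in claims:
    print(claim["id"], screen_claim(claim))
```

The point of the sketch is the shape of the failure: the decision depends only on a lookup table, so any mismatch denies care in bulk, in milliseconds, before a human ever sees the file.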
Two high-profile cases show how this scenario usually plays out. In Minnesota, families filed a lawsuit against UnitedHealth over allegations that its nH Predict tool systematically cut off post-acute rehab for Medicare Advantage patients, claims a federal court allowed to proceed in February 2025 before rejecting the company's bid to limit discovery just last week. Meanwhile, in California, a court advanced a class action against Cigna's PxDx system for allegedly enabling batch denials without any review of patient records. The tools may be different, but the pattern is clear: algorithms speed up denials, while patients are left with the same slow, manual appeals process.
In response, regulators are pushing back. In February 2024, CMS drew a clear line: Medicare Advantage plans can use algorithmic tools to assist with coverage decisions, but licensed clinicians must have the final say on medical necessity. Put simply, software advises, but doctors decide. California took this a step further with SB 1120, which took effect January 1, 2025, mandating human oversight and transparency whenever insurers use AI to influence coverage decisions. In May, the state doubled down with detailed compliance guidance.
However, AI in medical decisions is expanding beyond Medicare Advantage. Starting January 1, 2026, CMS will roll out its Wasteful and Inappropriate Service Reduction (WISeR) model in six states, leveraging AI and machine learning to "ensure timely and appropriate Medicare payment for select items and services." While the pilot does include mandatory clinician review, this hybrid approach could go either way: reducing administrative burden, or reproducing the same opaque decision-making that has plagued Medicare Advantage. Its success depends entirely on implementation. Responsible deployment would mean transparent disclosure when AI is used, detailed audit logs, and genuine recourse when algorithms deny legitimate care.
As these tools spread from Medicare Advantage to Original Medicare and across insurance markets, their impact follows a familiar pattern of inequality, and there's a clear trend in where they're deployed. They cluster in areas with narrow provider networks, among patients whose caregivers lack the bandwidth or language skills to appeal, and in communities cut off by digital divides. Efficiency is the sales pitch, but vulnerability is the target: these algorithms underestimate recovery needs for Medicare seniors with complex conditions and trap Medicaid patients in endless prior authorization loops. To make matters worse, these systems learn from historically biased data about who received rehab, imaging, or specialist referrals, essentially programming yesterday's discrimination into tomorrow's care decisions at massive scale. CMS's new guardrails are a start, but without transparency requirements and real enforcement mechanisms, they're little more than suggestions.
So What Can You Do?
Fighting a faceless algorithm can feel impossible, but here's a practical checklist that can tip the odds in your favor:
- Read the notice for tells: Look for phrasing like "not medically necessary per criteria," "fails predictive model," or decisions issued within hours of submission. Ask (in writing) whether an algorithm assisted the decision and which policy or guideline was applied.
- Demand a human review: Under CMS guidance and laws like CA SB 1120, a licensed clinician must be accountable for coverage decisions. Request the physician reviewer's credentials and a case-specific rationale that addresses your medical facts.
- Example language to use in an appeal:
- "My clinician attests this service is medically necessary based on [diagnosis/facts]. Please cite the exact Medicare/NCD/LCD or plan policy used and explain how my case was evaluated individually, as required."
- "If a software tool assisted the decision, identify it and confirm a licensed physician reviewed my file and made the final determination, per CMS."
- "I request an expedited review due to the risk of health deterioration."
- Be strategic about how you escalate: Ask for a peer-to-peer review. Log calls, save portal messages, and, if you are a Medicare beneficiary, contact your State Health Insurance Assistance Program (SHIP) or legal aid. Note whether the denial conflicts with Medicare coverage rules.
- Know Your Rights (Medicare Advantage): Plans can use algorithms to assist, but they can't deny or terminate care without individualized medical-necessity review by clinicians, and must follow Medicare coverage criteria. If you suspect a batch or automated denial, call it out and request documentation.
Individual action helps, but families shouldn't have to fight case by case. Here's what policymakers should do next:
- Mandate audit trails: Every AI-assisted coverage decision should generate a readable log: what tool and criteria were used, what data were considered, performance by subgroup, and which clinician signed off. CA's SB 1120 and its follow-up guidance are starting points; other states should follow. (A sketch of what one such record could look like follows this list.)
- Publish denial/appeal/reversal rates (by line of business and ZIP code): If specific tools or vendors correlate with high reversal rates (i.e., many denials overturned on appeal), regulators and the public should know.
- Tie transparency to participation: For federal pilots like WISeR, require model documentation, bias testing, and patient-facing disclosures as conditions of entry. Publish independent evaluations early and often.
- Protect continuity of care: When algorithms trigger cuts to rehab days or therapies, require short grace periods while appeals are pending to prevent harm that can't be undone.
- Invest in navigation, not just negation: If a system can flag "inappropriate" services in milliseconds, it can also surface covered alternatives, community resources, and streamlined approvals for evidence-based care.
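To ground the audit-trail recommendation above, here is a minimal sketch of what one readable log record could contain. The field names are assumptions about what regulators might require, not an existing standard or CMS schema.

```python
# Hypothetical audit-trail record for one AI-assisted coverage decision.
# Field names are illustrative, not drawn from any statute or CMS rule.
from dataclasses import dataclass, asdict
import json

@dataclass
class CoverageDecisionAudit:
    claim_id: str
    tool_name: str             # which algorithm assisted the decision
    tool_version: str
    criteria_cited: list       # policy/guideline identifiers applied
    inputs_considered: list    # data elements the model actually saw
    model_recommendation: str  # e.g. "deny"
    reviewing_clinician: str   # licensed clinician accountable for the call
    final_determination: str
    subgroup_flags: list       # cohorts for later disparity analysis

audit = CoverageDecisionAudit(
    claim_id="C-001",
    tool_name="example-predictor",
    tool_version="2.3",
    criteria_cited=["LCD L12345"],
    inputs_considered=["diagnosis_code", "length_of_stay"],
    model_recommendation="deny",
    reviewing_clinician="NPI 0000000000",
    final_determination="approve",  # the clinician overrode the model
    subgroup_flags=["medicare_advantage"],
)
print(json.dumps(asdict(audit), indent=2))
```

A record like this makes the two questions regulators care about answerable after the fact: did a clinician actually make the final call, and do the model's recommendations skew against particular subgroups.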
What Happens Next
We're at an inflection point. The woman at the DMV whose story sparked this piece shouldn't have had to spend weeks fighting for her mother's rehab. Right now, while courts are still hearing cases and regulators are writing rules, we have a small window to ensure algorithms serve patients rather than spreadsheets. The question isn't whether AI will reshape healthcare; it already has. And the costs of getting this wrong extend further than you might think. That's something I'll explore in my next article, at the epicenter of America's AI boom, where the true price of algorithmic healthcare is written in power bills and water usage.
Want the bigger picture on how AI could close (or widen) health gaps? Read our explainer: AI in Healthcare: Equity, Disparities & Opportunities