
In the span of two weeks this January, Anthropic, OpenAI, and Amazon all announced healthcare AI platforms.[1] The headlines focused on clinical decision support and diagnostic tools. What caught my attention was different. They are targeting prior authorization. The access decision. Anthropic’s announcement specifically mentioned reducing review times “from days to minutes.” OpenAI acquired a startup to build what they call “unified medical memory.”
The announcements are ambitious. The partnerships with major health systems are real. But here is the question no one is asking loudly enough: under what regulatory framework? With what oversight? The governance problems I have spent my career navigating are now playing out at foundation model scale. The algorithms are new. The fundamental tension is not.
Regulators are paying attention. Colorado’s AI Act was supposed to take effect February 1 but got pushed to June 30 after tech lobbying and legislative deadlock. That five-month delay tells you something about the asymmetry: the technology ships in weeks, the governance lags by years.
I did not choose healthcare because it was easy or lucrative. I chose it because it matters. At Bayer, I saw how pharmaceutical innovation could transform outcomes for patients with conditions that had few options. At Abbott, the mission was “Life to the Fullest,” helping people reclaim what illness takes away. Both resonated with something I had felt long before I could articulate it: that technology should help people live better, not just exist as an impressive demo. Health for all. Not health for those who can navigate the system, or health for those whose claims get approved, but health as a baseline expectation of human dignity.
But here is what I learned across both worlds: clinical excellence is not the hard part. Getting technology to the people who need it is. You can build something that works beautifully in validation studies, earn FDA clearance or prove efficacy in trials, get physicians excited, publish in top journals, and still fail to reach patients because someone downstream decided it was not covered or not on formulary. We have seen this story repeat: orphan drugs that transform outcomes for rare diseases but remain inaccessible because of cost and coverage barriers, off-label uses that oncologists know work but insurers will not approve, continuous glucose monitors that took years to move from “innovative” to “covered” while diabetic patients waited. The clearance certificate hangs on the wall. The Phase III results sit in archives. The patient never knows they existed.
Innovation without access is just a promise.
In pharma, I learned this lesson early. You could prove efficacy in trials. You could publish in top journals. But formulary decisions determined reach, and those decisions happened in rooms we were not invited to, based on criteria we could not always see. Someone else, not the maker, not the clinician, not even the patient, decided whether innovation became access.
In the device world, the lesson became even more explicit. The payer question was in every product meeting before we had a working prototype. Not because we wanted it there, but because we had learned the hard way that ignoring it meant building something that would never reach the people it was designed to help. We designed with reimbursement in mind from day one. We studied coverage policies alongside clinical evidence. We anticipated objections that would come not from regulators or physicians but from utilization management committees we would never meet.
That “someone else” is increasingly an algorithm. And now, increasingly, that algorithm is regulated.
What Payer AI Governance Actually Requires
When I think about what payer AI governance actually requires, I keep coming back to the same fundamentals I learned in the device world. The vocabulary changes, but the questions do not.

**What do we have?** You cannot govern AI systems you do not know exist. Before anything else, you need an inventory: a clear accounting of every algorithm making or influencing decisions about care.

**Which ones matter most?** Not all AI carries equal risk. A chatbot answering general questions is not the same as a model deciding whether to approve a cancer treatment. Risk classification is not bureaucracy; it is triage.

**Can we trace decisions?** When a member appeals a denial, when a regulator asks for documentation, when Colorado requires impact assessments by June, can you reconstruct why the algorithm said no? Documentation is not optional anymore.

**Is it still working? Is it still fair?** Models drift. Populations shift. What passed fairness testing last year might fail this year as the data underneath it changes. Monitoring is not a one-time event; it is a continuous discipline.
This is not unfamiliar territory. In device and pharma companies, health economics and outcomes research teams exist specifically to build the evidence that payers require. I have worked alongside these groups, seen how real-world evidence studies are designed not just for scientific publication but for coverage dossiers, watched the back and forth between clinical development and reimbursement strategy. The payer perspective was never something I learned about secondhand. It was in the room, shaping what we built and how we built it.
The mental shift for AI governance is subtle but important. In clinical AI, you validate against ground truth. Did the algorithm get the diagnosis right? In payer AI, the “ground truth” is often historical decisions, and those decisions carry their own biases. You are not just asking whether the model is accurate. You are asking whether it is fair, and fair to whom, measured how, compared to what baseline. That is a harder question, but it is the question that matters when the output is “yes, you get care” or “no, you do not.”
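One common way to put a number on "fair to whom, compared to what baseline" is the demographic parity difference: the gap in approval rates between groups. The sketch below assumes a hand-written list of hypothetical `(group, approved)` records; in practice these would come from the decision log, the groups would be defined by the applicable regulation, and any threshold for "too large a gap" is a policy choice, not a statistical law.

```python
# Hypothetical approval records: (group label, approved?).
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(records, group):
    """Fraction of decisions for `group` that were approvals."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: gap in approval rates between groups.
gap = abs(approval_rate(decisions, "A") - approval_rate(decisions, "B"))
print(f"approval-rate gap: {gap:.2f}")  # here: 0.75 - 0.25 = 0.50
```

Note what this metric does not do: it says nothing about whether either group's denials were clinically justified. That is why fairness review is a judgment layered on measurement, not a single number.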
The Regulatory Patchwork
The regulatory landscape adds another layer of complexity. FDA gave us a single framework, one set of rules, one submission pathway. Payer AI governance gives us Colorado with its AI Act effective June 30,[2] Texas with TRAIGA already live since January 1,[3] the NAIC Model Bulletin adopted by 24 states and counting,[4] CMS guidance for Medicare Advantage,[5] and more coming. You cannot build one perfect framework and deploy it everywhere. You have to build architecture that adapts to a patchwork, that handles the inevitable cases where different states want different things. It is a different kind of entropy than clinical chaos, but it requires the same discipline: channel it rather than pretend it does not exist.
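One way to think about architecture that adapts to a patchwork is to treat each jurisdiction's obligations as data rather than code, and take the union when a model operates in several states. The sketch below is illustrative only: the requirement names and per-state rule sets are simplified placeholders, not a legal summary of any statute.

```python
# Placeholder obligations per jurisdiction -- not legal guidance.
STATE_REQUIREMENTS = {
    "CO": {"impact_assessment", "consumer_notice", "appeal_path"},
    "TX": {"consumer_notice", "appeal_path"},
    "DEFAULT": {"consumer_notice"},  # e.g., a baseline drawn from the NAIC bulletin
}

def requirements_for(states):
    """Union of obligations: a model deployed in several states inherits them all."""
    required = set()
    for state in states:
        required |= STATE_REQUIREMENTS.get(state, STATE_REQUIREMENTS["DEFAULT"])
    return required

# A model serving members in Colorado and Texas carries both states' obligations.
print(sorted(requirements_for(["CO", "TX"])))
```

Keeping the rules as data means that when a new state law lands, the change is a table update and a review, not a redesign.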
The Convergence
The convergence is striking. Big tech is racing to automate access decisions at exactly the moment regulators are building guardrails for those decisions. The UnitedHealth lawsuit showed what happens when AI-driven denials go wrong at scale,[6] when families discover that an algorithm called nH Predict, not a physician, determined that their loved one’s care should end. That case is still advancing through federal court, with discovery now proceeding after the judge rejected attempts to limit its scope. It is not an abstraction. It is a signal. The regulatory wave is not theoretical. It is a response to real harm, to real people who were told no by systems they could not see or appeal in any meaningful way.
We can build the most elegant AI, clear every regulatory hurdle, publish in the best journals, and still fail if the last mile does not work. The algorithm that decides coverage is now part of that last mile. And as these systems scale, the question is no longer just whether they are accurate. It is whether they are fair. Whether they are transparent. Whether the people affected by them have any meaningful way to understand or appeal the decision.
The regulatory deadlines are real. The technology is moving fast. The stakes, as always, are access. “Life to the fullest” only happens when innovation actually reaches the person who needs it.
Footnotes

1. In January 2026, OpenAI launched ChatGPT Health (January 7), Anthropic unveiled Claude for Healthcare (January 11), and Amazon released Health AI for One Medical members (January 21). See: OpenAI. (2026). “Introducing ChatGPT Health.” https://openai.com/index/introducing-chatgpt-health/; TechCrunch. (2026). “Anthropic announces Claude for Healthcare.” https://techcrunch.com/2026/01/12/anthropic-announces-claude-for-healthcare-following-openais-chatgpt-health-reveal/; Healthcare Dive. (2026). “Amazon launches health AI chatbot for One Medical members.” https://www.healthcaredive.com/news/amazon-one-medical-health-ai-assistant-chatbot/810235/
2. Colorado General Assembly. (2024). “SB24-205: Consumer Protections for Artificial Intelligence.” https://leg.colorado.gov/bills/sb24-205 Colorado’s comprehensive AI governance law requiring impact assessments for high-risk AI systems, effective June 30, 2026.
3. Texas Legislature. (2025). “HB1709: Texas Responsible AI Governance Act (TRAIGA).” https://capitol.texas.gov/BillLookup/History.aspx?Bill=HB1709&LegSess=89R Texas AI governance framework establishing requirements for deployers of high-risk AI systems, effective January 1, 2026.
4. National Association of Insurance Commissioners. (2023). “Model Bulletin: Use of Artificial Intelligence Systems by Insurers.” https://content.naic.org/article/naic-members-approve-model-bulletin-use-ai-insurers Guidance on AI governance for insurance companies, now adopted by 24 states.
5. Centers for Medicare & Medicaid Services. (2024). “Contract Year 2026 Policy and Technical Changes to the Medicare Advantage Program.” https://www.cms.gov/newsroom/fact-sheets/contract-year-2026-policy-and-technical-changes-medicare-advantage-program-medicare-prescription Federal guidance on AI use in Medicare Advantage coverage decisions.
6. STAT News. (2025). “Lawsuit against UnitedHealth over AI-driven care denials moves forward.” https://www.statnews.com/2025/02/13/lawsuit-unitedhealth-artificial-intelligence-care-denials-medicare-advantage-moves-forward/ Federal lawsuit alleging UnitedHealth used the nH Predict algorithm to systematically deny care to Medicare Advantage patients.