American healthcare doesn't have a knowledge problem. It has a connection problem.
Patient records live in one system, insurance coverage on another, clinical guidelines somewhere else, and trial registries in yet another silo. When a doctor needs to get a treatment approved, they're copying and pasting between screens, navigating phone trees, and filling out redundant forms. Doctors spend two hours on computer work for every one hour with patients. The system we've built actively blocks the care it's supposed to deliver.
Two of the world's largest AI companies announced their intent to fix this, not by building better healthcare products, but by becoming the connective tissue between everything else. OpenAI launched ChatGPT Health and ChatGPT for Healthcare. Days later, Anthropic unveiled Claude for Healthcare at the JP Morgan Healthcare Conference.
This isn't about chatbots anymore. It's about infrastructure.
The shift enabling AI's entrance into healthcare isn't smarter models. It's a new way for AI to connect to existing systems.
Anthropic's Model Context Protocol (MCP) is the clearest example. MCP lets Claude query databases directly: Medicare coverage rules, diagnosis codes, clinical trial registries. The AI becomes a query layer on top of those systems, keeping data secure and audit trails intact.
There's already a standard called FHIR that defines how healthcare data should be formatted, like agreeing on a common language for patient records. But FHIR just defines data structure. MCP defines how AI talks to that data. One is the noun, the other is the verb.
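To make the noun/verb distinction concrete, here is a minimal Python sketch. The data fields follow the public FHIR `Patient` resource and the tool descriptor mirrors MCP's tool shape, but `lookup_patient` and its handler are hypothetical illustrations, not real API surface from either spec.

```python
# Illustrative sketch only: the tool name and handler are assumptions,
# not actual MCP or FHIR implementation code.

# FHIR defines the data shape (the "noun"): a Patient resource is structured JSON.
fhir_patient = {
    "resourceType": "Patient",
    "id": "example-123",
    "name": [{"family": "Rivera", "given": ["Ana"]}],
    "birthDate": "1962-04-09",
}

# MCP defines how the model reaches that data (the "verb"): a server
# advertises named tools with JSON-schema inputs, and the model calls them.
mcp_tool = {
    "name": "lookup_patient",
    "description": "Fetch a FHIR Patient resource by id",
    "inputSchema": {
        "type": "object",
        "properties": {"patient_id": {"type": "string"}},
        "required": ["patient_id"],
    },
}

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Server-side handler: the model never touches the database directly,
    so access control and audit logging stay on the server."""
    if name == "lookup_patient" and arguments["patient_id"] == fhir_patient["id"]:
        return fhir_patient
    raise KeyError(f"unknown tool or patient: {name}")

record = handle_tool_call("lookup_patient", {"patient_id": "example-123"})
```

The point of the split: FHIR standardizes what the record looks like, while the MCP layer standardizes the request/response handshake any model can use to fetch it.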
The capability improvements are real but limited. Claude Opus 4.5 achieves 92.3% accuracy on medical calculations, a big jump from earlier models that couldn't do basic math reliably. But 92.3% still means roughly one error in every thirteen calculations. Fine for drafting a prior authorization appeal. Definitely not fine for medication dosing.
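The "one error in thirteen" figure is just the reciprocal of the error rate implied by the accuracy number in the text:

```python
accuracy = 0.923
error_rate = 1 - accuracy           # 0.077, i.e. 7.7% of calculations wrong
calcs_per_error = 1 / error_rate    # about 13 calculations per error
print(round(calcs_per_error))
```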
On complex multi-step medical tasks, Claude succeeds just 61.3% of the time.
A December 2025 study from Penn found that AI models give different answers when asked the same clinical question multiple times. These tools are powerful drafting assistants. They are not autonomous agents.
OpenAI and Anthropic are chasing the same prize through different doors.
OpenAI is going consumer-first. ChatGPT Health lets users connect medical records and wellness apps (Apple Health, MyFitnessPal, lab results) to get personalized health answers. Over 230 million people already ask ChatGPT health questions weekly. Through a partnership with b.well, which pulls records from 50,000+ healthcare providers, OpenAI is positioning ChatGPT as the front door to personal health, outside any single hospital's control.
On the enterprise side, ChatGPT for Healthcare is rolling out to Boston Children's Hospital, Cedars-Sinai, Memorial Sloan Kettering, and Stanford Medicine. The focus: clinical documentation, discharge summaries, and admin workflows.
Anthropic is going enterprise-first. Claude for Healthcare plugs directly into the databases that providers and insurers actually use: CMS coverage rules, ICD-10 codes, the National Provider Identifier registry, PubMed. It's built for the expensive, unglamorous work: prior authorization reviews, claims appeals, care coordination. Claude for Life Sciences extends this to drug development, connecting to Medidata, ClinicalTrials.gov, bioRxiv, and ChEMBL.
Novo Nordisk cut clinical documentation time from over ten weeks to ten minutes, with 50% fewer review cycles.
Banner Health deployed Claude to 55,000 employees across their 22,000-provider network.
Sanofi, AstraZeneca, and Genmab are using Claude for regulatory filings, drug discovery, and clinical development.
The application-layer startups are scaling fast too. Abridge (which listens to doctor-patient conversations and writes the notes) raised $300 million at a $5.3 billion valuation. Ambience Healthcare raised $243 million at $1.25 billion. Cohere Health raised $90 million to automate prior authorization from the payer side.
The big question for investors: where does value go when AI becomes healthcare's connective tissue?
Three outcomes are possible:
The AI companies win (OpenAI, Anthropic) by taxing every transaction that flows through their infrastructure—the "AWS of healthcare intelligence" model.
The vertical startups win (Abridge, Ambience, Cohere) by owning specific workflows and customer trust, using AI as a commodity input.
The incumbents win (Epic, Veeva, Medidata) by adding AI to their existing platforms and keeping customers locked in.
The market is massive. Healthcare AI is projected to reach $505 billion by 2033. The CRO market (companies that run clinical trials) is $80-92 billion and could compress fast if AI cuts documentation timelines by 90%.
A new category is emerging: "MCP-native" startups that build specialized connectors for the AI ecosystem. These companies sell context as a service: feeding models the domain-specific data they need to work in regulated environments.
"The key tension: if Claude can do prior authorization review out of the box, does a startup like Cohere Health lose its edge?"
Most investors say no. Workflow integration, customer trust, and regulatory know-how are still defensible. The foundation models provide the reasoning engine. Startups win by owning the last mile.
The human cost of the current system is staggering.
The average doctor completes 39 prior authorizations per week; that's about 13 hours of physician and staff time. Ninety-three percent of doctors say this delays necessary care. Eighty-two percent say patients give up on treatment because of it. Eighty-nine percent say it's a major driver of burnout.
AI promises to flip this: machines handle the paperwork, humans handle patients.
But the risks are real.
⚠ Documented Failures
The FDA's internal AI tool, powered by Claude, was caught making up fake studies during regulatory reviews. UnitedHealthcare is facing a class-action lawsuit claiming their AI wrongfully denied care to elderly patients, with a 90% error rate when those decisions were appealed. The lawsuit says UnitedHealthcare used AI to "batch deny" claims without real human review.
There's also an equity problem. The best-funded hospitals (Cedars-Sinai, Stanford, Mayo Clinic) are adopting these tools first. Safety-net hospitals serving low-income patients may get left behind.
The risk: a two-tier system, with rich hospitals getting AI help while poor hospitals drown in paperwork. The efficiency gains are clear, but the guardrails aren't.
Surprisingly, regulation is pushing adoption forward.
A CMS rule that took effect this month (Jan’26) requires insurers to use modern APIs for prior authorization and respond to denials within seven days. Insurers can't hit those timelines with humans alone. The rule is creating guaranteed demand for exactly what OpenAI and Anthropic are selling.
The FDA loosened its stance the same month. It signaled it won't heavily regulate AI tools that give single recommendations with a human checking the output. This clears the way for more aggressive deployment in clinical decision support.
But a protocol war is brewing.
MCP is Anthropic's open standard for connecting AI to other systems. OpenAI isn't adopting it; they're building their own approach. If healthcare settles on one protocol, whoever owns it gains enormous leverage. This fight is just starting.
"The deeper concern: automation bias. When AI is right 90% of the time, humans stop checking closely and miss the 10% where it's wrong."
That 10% could prove catastrophic, and healthcare hasn't figured out how to design workflows for it yet.
The Bottom Line
OpenAI and Anthropic aren't building healthcare products. They're building the layer that connects healthcare products to each other. Whoever owns that layer taxes every transaction without owning clinical risk.
For investors, the signal is clear: value will flow to infrastructure providers who enable the movement of intelligence, and to deeply integrated vertical platforms that turn that intelligence into trusted workflows. The middle ground (generic healthcare chatbots) will get squeezed out.
The $5.3 trillion American healthcare system is built for processing transactions, not delivering care. AI won't fix that misalignment. But it might finally make the connections that let doctors do what they were trained to do: take care of patients.
If this was useful, forward it to someone building in biotech. More stories like this every week at Thinking Folds.