In case you missed it, Utah announced something no other state has done: an AI system that can legally prescribe medications.
Doctronic's AI can now renew prescriptions for chronic conditions without a human doctor reviewing the decision. Blood pressure meds, cholesterol pills, birth control, antidepressants. The AI talks to the patient, checks for drug interactions, sends the prescription to the pharmacy. A doctor's signature shows up on the prescription, but that doctor never looked at the patient's file.
This isn't just better software. It's giving prescribing power to a computer program instead of a human with a medical license. And it only works because Utah created a special legal program that temporarily ignores normal licensing rules.
The company raised $25 million calling this "the first AI doctor in US history to legally prescribe routine medication refills." But that claim hides important limits: only refills (not new prescriptions), only certain safe medications, only in Utah, and only for 12 months unless safety data proves it works.
The real question isn't whether Doctronic's AI works. It's whether this experiment in one state can become the model for how AI works in healthcare.
When people hear "AI doctor," they picture ChatGPT wearing a stethoscope. That's not what Doctronic built.
Regular AI systems like ChatGPT predict what word should come next. Ask GPT-4 about a medical problem and it writes something that sounds smart. But sounding smart isn't the same as being right. These systems make confident mistakes because they're built to sound plausible, not to be accurate.
Doctronic works differently. Instead of one AI making decisions, they use over 100 specialized AI programs working together. One AI asks the patient questions. Other AIs trained in heart health, drug safety, and hormone treatment analyze the answers. A "critic" AI looks for dangerous drug combinations and warning signs. A final AI checks if all the other AIs agree before approving the prescription.
This copies how teaching hospitals work. When doctors discuss complicated cases in "Grand Rounds," a team debates what to do so no single doctor's mistake gets through (and this version should be a lot more accurate than the one Grey's Anatomy shows). If the drug safety AI says there's a problem but the heart health AI says renew the prescription, the system flags it for a human to check.
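To make the Grand Rounds analogy concrete, here's a rough sketch of what that consensus check could look like. This isn't Doctronic's actual code: the specialist names, the Opinion structure, and the escalation rule are assumptions made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    specialist: str             # e.g. "cardiology" or "drug_safety" -- illustrative names
    approve: bool               # does this agent sign off on the refill?
    concern: str | None = None  # flagged issue, if any

def decide_refill(opinions: list[Opinion]) -> str:
    """Hypothetical consensus step: approve only if every specialist agent
    agrees and none raises a concern; anything else goes to a human."""
    concerns = [o.concern for o in opinions if o.concern]
    if concerns:
        return "ESCALATE_TO_HUMAN: " + "; ".join(concerns)
    if all(o.approve for o in opinions):
        return "APPROVE_REFILL"
    return "ESCALATE_TO_HUMAN: specialists disagree"

# Example: the drug-safety agent objects while the heart-health agent approves
case = [
    Opinion("cardiology", approve=True),
    Opinion("drug_safety", approve=False, concern="possible interaction with a new SSRI"),
]
print(decide_refill(case))  # -> ESCALATE_TO_HUMAN: possible interaction with a new SSRI
```

The design choice that matters: disagreement never gets averaged away. A single objection routes the case to a human.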
The company published a study to back this up: 500 urgent care cases, with the AI agreeing with real doctors 81% of the time on diagnosis and 99.2% on treatment. But it's important to note a few things about this study. The people who wrote it own stock in Doctronic. They looked backward at old cases instead of testing new ones. And they tested urgent care visits, not the chronic disease refills that Utah actually approved.

The safety system works in stages. Real doctors check the first 250 prescriptions for each type of medication. After that, the AI does it alone but doctors randomly review 10% to make sure it's still safe.
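In code, that staged oversight is a simple gate. The sketch below uses only the two numbers from the paragraph above (250 fully reviewed prescriptions per medication, then a 10% random audit); the function name and counter are invented for illustration.

```python
import random

FULL_REVIEW_THRESHOLD = 250  # first 250 prescriptions per medication get human review
RANDOM_AUDIT_RATE = 0.10     # after that, 10% are randomly pulled for human review

def needs_human_review(prior_scripts_for_this_med: int) -> bool:
    """Hypothetical oversight gate: full review during the ramp-up phase,
    then a random 10% audit once the AI prescribes on its own."""
    if prior_scripts_for_this_med < FULL_REVIEW_THRESHOLD:
        return True
    return random.random() < RANDOM_AUDIT_RATE

# e.g. the 251st refill of a given medication has roughly a 1-in-10 chance of review
print(needs_human_review(prior_scripts_for_this_med=251))
```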
Here's what "autonomous" really means: the AI talks to the patient, looks at their medication history, checks for problems, and sends the prescription. A doctor's signature appears for insurance companies, but that doctor didn't actually review anything.
This is completely different from companies like K Health, Hims, or Ro. Those companies use AI to gather information, but a licensed doctor always makes the final decision to prescribe. In those systems, AI is the helper. In Utah, AI is the decision maker.
Doctronic was started in 2023 by two very different people. Matt Pavelle used to build technology for a luxury fashion website called Moda Operandi. He knows how to make apps that customers love using. Dr. Adam Oskowitz is a surgeon at UCSF who still sees patients. He gives the company medical credibility and helps defend against people who say they're being careless.
The team raised $5 million from Union Square Ventures in May 2025, then $20 million from Lightspeed Venture Partners in September 2025. That's $25 million total. Notable investors include Dr. Fei-Fei Li (a Stanford AI expert), Jay Desai (who started PatientPing), and Scott Belsky (an Adobe executive).
Doctronic's traction: 21.9M medical conversations | 1M+ users | 50K weekly visits
The business model is freemium: give something valuable away to bring in customers. Millions of people use the free AI chat to ask medical questions. Some of them pay $39 for a video visit with a real doctor when the AI can't help or isn't legally allowed to. In Utah, automated refills cost about $4, recurring revenue that costs almost nothing to provide.
The money side matters. Regular telehealth companies pay doctors to spend time asking "What's wrong?" Doctronic pays for computer power to ask that question. Because the AI does the question-asking and note-taking, human doctors can finish appointments in 2-3 minutes instead of 10-15 minutes. That means higher margins than Teladoc or Amazon Clinic, which still depend on paying humans for their time.
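The margin claim is really just throughput arithmetic. Here's a back-of-envelope version using the visit lengths above; the $150-per-hour physician cost is a placeholder I made up, not a reported figure.

```python
PHYSICIAN_COST_PER_HOUR = 150  # placeholder rate for illustration, not a reported figure

def physician_cost_per_visit(minutes: float) -> float:
    return PHYSICIAN_COST_PER_HOUR * minutes / 60

traditional = physician_cost_per_visit(12.5)  # midpoint of a 10-15 minute visit
ai_assisted = physician_cost_per_visit(2.5)   # midpoint of a 2-3 minute visit

print(f"traditional: ${traditional:.2f} per visit, AI-assisted: ${ai_assisted:.2f} per visit")
# traditional: $31.25 per visit, AI-assisted: $6.25 per visit
```

Whatever the real hourly rate, cutting physician time per visit by roughly five times is where the margin advantage comes from.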
The free AI chat is the hook. By giving people useful information for free (a medical assessment), Doctronic gets customers without paying for ads. They make money when the AI needs to send someone to a human doctor or processes a paid refill.
Lightspeed invests in companies that make healthcare's first interaction digital and instant. They previously invested in Abridge (AI that writes doctor's notes, worth $5.3 billion) and helped fund Anthropic's $13 billion funding round.
"Doctronic has unique AI architecture that combines AI intelligence with real medical oversight."
— Faraz Fatemi, Partner, LSVP
But Doctronic's real advantage isn't their technology. Other companies could build similar AI systems. K Health's AI agreed with doctors two-thirds of the time in a 2025 study. The difference is legal: Doctronic is allowed to prescribe on its own. Others aren't.
This advantage only exists because Utah made an exception. Utah's 2024 Artificial Intelligence Policy Act created an Office of Artificial Intelligence Policy that can grant "regulatory waivers": temporary permission to ignore normal licensing rules if you agree to extra safety monitoring.
Doctronic is running a 12-month test program. No FDA approval. No federal permission. A state experiment that could end if the safety data doesn't look good.
The company says it bought malpractice insurance that specifically covers its AI system, the first policy of its kind. This is important financial protection. If the AI is insured as its own entity, it shields the founders and Utah from being sued directly. But this type of insurance has never been tested in court. No one knows what happens when an AI's prescription hurts someone.
The American Medical Association opposes letting AI prescribe on its own. AMA VP John Whyte responded to Doctronic's announcement: "While AI has huge potential to improve medicine, without doctor input it creates serious risks for patients and doctors."
If Congress passes the Healthy Technology Act (H.R. 238), which would let AI prescribe if a state approves it and the FDA clears it, competitors could quickly copy this model. The bill went to committee in January 2025. Similar bills failed in 2021 and 2023.
The story of Babylon Health serves as a recent warning. The UK company was worth $4.2 billion after claiming its AI could diagnose as well as doctors. Then it went bankrupt in 2023. Analysis showed its AI was just a simple decision tree; it missed serious conditions, lost money on every patient, and attacked regulators who questioned it instead of fixing the problems.
Doctronic's narrow focus (only refills, only 190 medications, lots of exclusions) is intentionally different from Babylon's overreach. But the pattern of big claims before independent proof is similar.
Being a doctor isn't one single job. It's many tasks: diagnosing new problems, caring for patients emotionally, doing procedures, and handling paperwork. Doctronic proves that paperwork (routine refills) can be separated and automated.
Primary care doesn't have enough doctors. By 2036, we'll be short 17,800 to 48,000 doctors. Doctors currently spend 30 minutes every day processing refill requests—13 hours a week when you count staff time for insurance approvals. That's time that could go to actually helping patients.
If Utah's test works, we'll see "synthetic provider groups": small teams of humans working with thousands of AI assistants to care for hundreds of thousands of patients. This makes healthcare much cheaper but completely changes how doctors and patients interact.
The danger is real. AI can't see the subtle warning signs that show up in a physical exam. Patients might not mention new medications, allergies, or symptoms that make a prescription unsafe. At Doctronic's scale, even a 0.8% mistake rate (the flip side of their claimed 99.2% accuracy) could mean hundreds of dangerous prescriptions every year.
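For a sense of scale, here's that arithmetic spelled out; the annual refill volume is an assumption for illustration, not a number Doctronic or Utah has reported.

```python
error_rate = 1 - 0.992   # the flip side of the claimed 99.2% treatment accuracy
annual_refills = 50_000  # assumed volume for illustration, not a reported figure

print(f"{error_rate * annual_refills:.0f} potentially unsafe prescriptions per year")  # ~400
```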
The first serious problem, whether a missed drug interaction, a delayed cancer diagnosis, or a patient death caused by an AI decision, will determine how all AI prescribing gets regulated. And it's pretty admirable that Doctronic volunteered to be the test case.
Utah's 12-month test ends in early 2027. Three things could happen.
Success and growth: The safety data is good. Other states (Arizona, Texas, and Missouri are reportedly interested) approve similar programs. Congress passes the Healthy Technology Act. Doctronic becomes the leader in AI prescribing. The $25 million they raised gives them 18-24 months to prove it works.
Limited success: The test shows it's safe but not amazing. Other states wait. Congress doesn't pass new laws. Doctronic stays small in Utah and focuses on their doctor-assisted model everywhere else. The AI prescribing experiment becomes a marketing story instead of their main business.
Failure after a serious problem: A bad event shuts down Utah's test, brings Congressional attention, triggers FDA action, and causes lawsuits. The Babylon story repeats. The risk is heightened because Doctronic went first: any problem with AI prescribing anywhere becomes their problem.
The most valuable thing Doctronic might be building isn't the AI, which other companies can copy, but the safety data. By getting malpractice insurance for the AI and collecting safety information, they're building proof of how safe it is. If they can show the AI is safer than rushed human doctors who approve refills without really looking, their insurance gets cheaper, giving them a cost advantage no human-only clinic can beat.
Doctronic has done something no other company has: gotten legal permission for AI to prescribe in the United States. This isn't just marketing. Utah's program is real, it's working, prescriptions are being filled.
But it's fragile. The legal permission is temporary and only in one state, not federal approval. The safety study was done by the company itself. The largest doctor organization opposes it. The liability rules haven't been tested in court.
This story isn't pure breakthrough or pure hype. It's a calculated bet on finding a legal loophole that could either set the standard for AI prescribing in American healthcare—or become the warning story that prevents it.
I'm opening up a few slots to work with founders and investors in biotech who want help with their narrative: ghostwriting, thought leadership, and content strategy. If this interests you, reply to this email.
If this was useful, forward it to someone building in biotech. More stories like this every week at Thinking Folds.