Big Tech is racing to put AI doctors in your pocket, but nobody's asking the uncomfortable question: do these things actually work?

The Summary

  • Microsoft launched Copilot Health and Amazon expanded Health AI beyond One Medical, flooding consumer markets with LLM-based medical advisors
  • Both companies are betting people will trust AI with health decisions before anyone proves these tools improve outcomes
  • The real story isn't the launch but the validation gap: we're deploying medical AI faster than we're testing it

The Signal

Microsoft and Amazon just made the same bet within 48 hours of each other. They're wagering that consumers want AI health tools badly enough that efficacy can come later. Copilot Health will ingest your medical records and answer questions. Amazon's Health AI does roughly the same thing. Both use LLMs. Neither has published peer-reviewed evidence showing these tools improve health outcomes, reduce misdiagnosis, or help people make better medical decisions.

This is the pattern now. Deploy first, validate later. Maybe. The FDA doesn't regulate "general wellness" products, which is the category these tools hide in. They're not diagnostic devices. They're assistants. Helpers. Tools that just answer questions. Except people will use those answers to decide whether that chest pain is serious or whether they should take their kid to the ER.

The technology exists in a strange gap. Too medical to ignore. Too consumer-facing to regulate effectively. MIT Technology Review points out we have more AI health tools than ever, but almost no framework for measuring whether they actually help. LLMs hallucinate. They're confident when wrong. In coding, that's annoying. In medicine, that's dangerous.

The timing tells you everything. Microsoft and Amazon aren't coordinating; they're competing. First-mover advantage in a market where nobody's defined what "good" looks like yet. Lock in users now, figure out accuracy later. It's the opposite of how medical devices get approved, but it's exactly how software companies operate.

The Implication

If you're building health tech, understand this moment. The regulatory window is open because these tools aren't quite medical devices yet. That window will close the first time someone gets hurt following AI health advice and sues. Until then, it's land-grab time. For users, the advice is simpler: use these tools for research, not decisions. The AI doesn't know what it doesn't know, and it can't tell the difference between confident and correct.


Source: MIT Technology Review AI