We Are Already Using Chatbots in Healthcare.
Regulation Pretends Otherwise
The debate about generative AI in healthcare has acquired a slightly surreal, Monty Python-esque quality. Policymakers continue discussing these tools as though they are approaching steadily and visibly over the horizon. Meanwhile, doctors are already using ChatGPT to draft letters, summarise notes, and even to assist with core clinical tasks such as diagnosis and treatment suggestions. Patients are using it too — often before seeing a clinician at all, or when deciding whether to bother with one.
The technology has arrived.
What is interesting is how differently Europe and the United States anticipated this ecosystem and how they are responding.
The European Health Data Space (EHDS) reflects a distinctly continental instinct: regulate first, and do the right thing. The emphasis is on governance, consent, oversight, interoperability, and public trust. The underlying assumption is that health data are too sensitive to become another playground for unrestrained commercial extraction.
There is a lot to admire here. Health data are not shopping preferences. Most people are understandably uneasy about large technology companies harvesting their intimate clinical information.
But there is also a risk that Europe regulates generative AI as though it were still theoretical. It is not. Clinicians are already using these systems because healthcare is overstretched, documentation is exhausting, and existing health infrastructure is frequently dreadful.
The United States has taken a different path. The American system is faster and looser, and, as usual, leaves everything to the markets and the lawyers (via inevitable litigation) to decide. Interoperability rules under the 21st Century Cures Act have helped data move more freely, but there is no equivalent to a central, trusted European-style health data space. Instead, there is decentralization: a patchwork of hospitals, insurers, vendors, startups, and technology companies all improvising and experimenting simultaneously.
This produces an exciting level of innovation, and a great deal of chaos. Are these systems helping patients? Worsening inequalities? Increasing trust? Amplifying anxiety? Improving care? Producing subtle harms nobody anticipated? Nobody yet knows.
Neither system has quite solved the central problem: how to encourage useful and rapid innovation without building a machinery of digital exploitation around citizens and patients.
We need a greater injection of realism. Generative AI is already embedded in healthcare. The question is no longer whether we permit it but what to do about it — namely, governing what we have got, not what we would like to have.
That means borrowing from both sides: European-style public safeguards alongside American-style interoperability. If we really care about patients, we need much stronger penalties for exploitative uses of patient data, and less procedural box-ticking. In other words, we need keener attention to real-world outcomes, and the teeth to meaningfully punish Big Tech’s excesses.
Charlotte Blease wants smarter healthcare for patients.
Author of Dr Bot: Why Doctors Can Fail Us and How AI Could Save Lives (Yale University Press, 2025)