The Future of Empathy in the Age of Dr Bot
The latest research on what AI might do for the human aspects of care.
Empathy has become medicine’s most overused and least believed word. In corridors where exhaustion is routine and metrics replace meaning, talk of compassion can sound almost cynical. Many clinicians still value empathy deeply — but quietly, even defensively — aware that the system rewards efficiency, not feeling.
In Dr Bot, I explore how this erosion happened — and what it means as AI enters the clinic. Machines may never feel, but their rise forces us to ask a harder question: when did empathy start to sound unrealistic?
In this dispatch, I bring three recent papers to your attention.
When the Bots Sound Kinder Than the Doctors
This new British Medical Bulletin meta-analysis (Howcroft et al., 2025) delivers a jolt to clinical self-image: in 13 of 15 studies, patients rated AI chatbots, mainly GPT-4, as more empathic than human healthcare professionals. Across more than 6,000 text-based interactions, machine-generated replies scored higher for warmth, understanding, and compassion, roughly equivalent to a two-point boost on a ten-point empathy scale. Dermatology was the lone holdout where humans fared better. The authors caution that most studies used proxy raters and unvalidated empathy measures, yet the pattern is clear: in the stripped-down world of digital communication, clinicians are being outperformed at their own bedside art. The result is unsettling not because machines now “feel,” but because their scripted civility exposes the routinised and threadbare reality of human empathy in medicine.
When Machines Read Faces as Well as We Do
This new npj Digital Medicine paper (Nelson et al., 2025) tests three multimodal LLMs — GPT-4o, Gemini 2.0 Experimental, and Claude 3.5 Sonnet — on their ability to identify human facial emotions. GPT-4o and Gemini performed on par with human raters, and in some categories (“calm/neutral” and “surprise”) slightly better. Accuracy across all expressions reached 86% for GPT-4o and 84% for Gemini, with “almost perfect” agreement with human ground truth. Only fear consistently tripped the models up. The results hint at an unsettling symmetry: machines are now reading emotional cues with roughly the same reliability as people. If empathy begins with recognition, these findings suggest the boundary between social perception and simulation is already blurring. And remember, the AI is only getting better.
Same Words, Less Warmth
Another paper disrupts the narrative, however. A new publication in Nature Human Behaviour, Comparing the value of perceived human versus AI-generated empathy (Rubin et al., 2025), delivers a fascinating insight into our emotional hierarchies. Across nine studies (n = 6,282), identical empathic responses were rated as more supportive, caring, and authentic when participants believed they came from a human rather than an AI. Even when the AI offered the same words, timing, and tone, disclosure of its involvement seemed to drain the encounter of warmth. The difference, the authors reported, lies in the “feeling with” and “caring for” dimensions of empathy, gestures of emotional effort we still reserve for people. The findings expose an awkward truth: we still appear to prize empathy as a kind of emotional labour, as something that costs time, energy, and attention. When care becomes effortless via AI, we may suspend our belief in its authenticity.
Charlotte Blease wants smarter healthcare for patients.
Author of Dr Bot: Why Doctors Can Fail Us and How AI Could Save Lives (Yale University Press, 2025)