Prohibited Practices under the EU AI Act
What Clinicians and Health Tech Developers Need to Know
The EU AI Act is now live, and Article 5 outlines a set of outright prohibitions - use cases for AI that are banned across the board. The penalties are steep: up to 7% of global annual turnover for companies that break the law. But what exactly is banned, and how does this affect those of us working in health and care?
Here’s a breakdown of what’s clear - and what remains murky.
✅ What’s Clearly Banned
Manipulative AI
Systems that exploit psychological tricks - like subliminal messages or behavioural nudges - to push people into decisions that harm them are prohibited. For example, an app nudging a vulnerable patient into risky treatment choices through fear-based design could be illegal.
Exploitation of Vulnerability
AI targeting people based on age, disability, or socio-economic status in ways that manipulate or deceive them is banned. That includes health tools aimed at older adults that exploit confusion or fear to drive behaviour.
Social Scoring
Using AI to rank people based on behaviour, health conditions, or other traits - and then using that ranking to limit services or opportunities - is prohibited. This carries important implications for health insurers, triage systems, and access to care.
Predictive Policing or Profiling
Systems that predict criminal behaviour based solely on profiling are banned. In health, that means caution around predictive AI in psychiatric or addiction services, especially if based on opaque algorithms or flawed data sets.
Untargeted Facial Scraping
Harvesting facial images from the internet or public cameras to build biometric databases is outlawed. This includes scraping hospital CCTV or social media photos for training AI models.
Emotion Recognition in Work or School
AI that tries to infer emotional states - say, stress, fatigue, or motivation - in students or employees is banned. In healthcare, this could hit systems used in training environments or staff wellness apps.
Biometric Inference of Sensitive Traits
Using biometric data to infer race, religion, sexuality, political views, or other sensitive characteristics is prohibited. In practice, that rules out deploying “emotion” or “risk” detection tools based on facial analysis in clinical trials or digital therapeutics.
Facial Recognition in Public by Police
Real-time facial recognition in public spaces by law enforcement is mostly banned - though member states can opt in under strict conditions.
❓ What’s Still Unclear
Who is liable?
While both AI developers and users (including hospitals or clinics) may be held accountable, the exact allocation of responsibility is still fuzzy - especially for integrated systems.
What counts as “significant harm”?
The bar for what constitutes manipulation or vulnerability isn’t always obvious. There’s no bright red line, and national regulators may interpret this differently.
Contractual grey zones
Provider–deployer contracts will need to be carefully drafted. Again, who is on the hook for ensuring compliance - especially when multiple actors are involved?
🔍 Interpretation and Enforcement: The Real Test
Here’s the reality: the prohibitions look strict on paper, but how they’ll be enforced is another matter.
Patchwork enforcement is likely.
While the AI Act is EU-wide, it will be enforced by national authorities in each member state. This creates room for inconsistency: some regulators may move quickly and forcefully, while others will be under-resourced or cautious.
Health systems are especially complex.
In a hospital setting, it’s not always obvious who “places on the market” or “puts into service” an AI system. Is it the vendor? The IT department? The clinical lead? In practice, it may take litigation or precedent-setting decisions to clarify this.
What if a system’s behaviour changes over time?
Many AI tools “evolve” in use - especially those connected to large language models or real-world data streams. If a product was compliant when installed but becomes manipulative or discriminatory as it adapts, who’s watching, and how vigilant will they be?
Regulators can’t inspect every app or model.
Just like with GDPR, expect selective enforcement that focuses on high-profile, high-risk cases. In healthcare, even small-scale tools that leak information can have major consequences.
Healthcare faces extra scrutiny.
Because of the potential for harm, and the sensitivity of patient data, AI tools in health and care are likely to be a priority area for early enforcement. Commercial entities may be taking a risk if they assume they’ll be overlooked - then again, see the point about enforcement, above.
⚕️ Bottom Line for Healthcare
The AI Act isn’t just about tech - it’s policy that pivots on ethics, power, and patient protection. These prohibitions should give pause to any healthcare team considering opaque risk models or surveillance-style biometric tools.
The prohibitions in the AI Act are bold - but enforcement is the weak link. As with GDPR, the real challenge isn’t the rules on paper, but the capacity and appetite to police them. National regulators will be stretched, and AI in healthcare is messy: it involves multiple actors and agencies, high stakes, and technical complexity.
Expect uneven application. Some countries will come down hard; others may look the other way. And in the meantime, industry will test the grey zones. For healthcare, this creates a dangerous mix: strict laws, unclear lines, and slow-moving enforcement - a combination that carries continued risks for patients.
If we’re serious about protecting patients from harm and manipulation, we’ll need more than legal texts - we will need watchdogs with teeth, real-world testing, and a regulatory culture that understands clinical nuance. Until then, much of the burden will fall on conscientious developers and clinicians to draw their own ethical red lines. We will follow up in future posts on how well clinicians are doing at developing their own AI ethics guidelines - and how well doctors are adhering to them. Please do stay tuned.
Charlotte Blease wants smarter healthcare for patients.
Read more in her forthcoming book Dr Bot: Why Doctors Can Fail Us and How AI Could Save Lives (Yale University Press, September 2025).