GDPR, the EU AI Act, HIPAA-equivalent frameworks — the instinct is to hand these to legal. That's the wrong move. Compliance in healthcare AI is decided in the architecture, not the contract.
This matters because the legal team can review a contract. They cannot review a data pipeline. They cannot assess whether pseudonymisation has been implemented correctly, whether audit logs are sufficiently granular, or whether model outputs are traceable back to their training data. Those are engineering questions with compliance consequences.
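To make the pseudonymisation point concrete: "implemented correctly" usually means a keyed, deterministic transformation rather than plain hashing, so records stay linkable but identifiers cannot be reversed without a separately stored key. A minimal sketch using Python's standard library (the key value and identifier format here are illustrative, not a real deployment):

```python
import hmac
import hashlib

def pseudonymise(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a stable keyed pseudonym.

    HMAC-SHA-256 is deterministic, so the same patient always maps to
    the same token and records remain linkable across the pipeline.
    Without the key the token cannot be reversed; if the key is stored
    alongside the data, this degrades to plain hashing and fails review.
    """
    return hmac.new(secret_key, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative key only; in practice the key lives in a separate key store.
key = b"example-key-managed-outside-the-data-store"
token_a = pseudonymise("patient-12345", key)
token_b = pseudonymise("patient-12345", key)
assert token_a == token_b              # deterministic: linkage preserved
assert token_a != "patient-12345"      # direct identifier removed
```

Whether pseudonymisation is done this way, at which pipeline stage, and where the key lives are exactly the engineering questions the legal team cannot answer from a contract.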
What the AI Act means in practice
The EU AI Act classifies most healthcare AI systems — typically those that are, or are embedded in, regulated medical devices — as high-risk. This is not a bureaucratic label — it has specific technical obligations. High-risk systems must maintain technical documentation sufficient to demonstrate conformity. They must have human oversight mechanisms. They must be designed to allow for meaningful audit.
Each of these requirements is an architecture decision.
Technical documentation is not a PDF produced after the fact. It is the sum of architectural decisions made during design: the data sources used, the model selection rationale, the evaluation methodology, the integration design. If those decisions weren't documented as they were made, creating documentation later is reconstruction — and reconstruction rarely survives scrutiny.
Human oversight is not a checkbox on a risk form. It is a system design that ensures a clinician, administrator, or operator can understand, challenge, and override the model's output in the context in which it appears. This requires interface design, explanation mechanisms, and audit trails — all of which are architecture concerns.
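One way to express "understand, challenge, and override" in the architecture is to model every model output as advisory until a named clinician acts on it, with the override and its reason captured as first-class data. A sketch under that assumption (the field names, model version, and clinician identifier are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    """A model output that is advisory until a human finalises it."""
    model_version: str
    output: str
    rationale: str                      # explanation surfaced to the clinician
    finalised: bool = False
    clinician_id: Optional[str] = None
    overridden: bool = False
    decided_at: Optional[str] = None

    def accept(self, clinician_id: str) -> None:
        self._decide(clinician_id, overridden=False)

    def override(self, clinician_id: str, reason: str) -> None:
        # The override reason is recorded, not discarded: it feeds the audit
        # trail and any later review of model performance.
        self.rationale += f" | override reason: {reason}"
        self._decide(clinician_id, overridden=True)

    def _decide(self, clinician_id: str, overridden: bool) -> None:
        self.clinician_id = clinician_id
        self.overridden = overridden
        self.finalised = True
        self.decided_at = datetime.now(timezone.utc).isoformat()

rec = Recommendation("triage-model-2.1", "urgent referral",
                     "risk score 0.91 above referral threshold")
rec.override("dr-jones", "symptoms inconsistent with recorded inputs")
assert rec.finalised and rec.overridden
```

The design choice doing the work here is that no code path exists in which the output takes effect without a decision record — oversight is enforced by the data model, not by policy.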
Meaningful audit requires that the system can, at any point, answer the question: "Why did the system produce this output, for this patient, at this time?" If the answer requires reconstructing the state of the system from partial logs, the architecture was not designed for compliance.
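Designing for that question means capturing, at inference time, everything needed to answer it later: the exact model version, the inputs as the model saw them, and the output, stamped and pseudonymised. A minimal sketch of such a record (the field names and example values are assumptions for illustration):

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(patient_token: str, model_version: str,
                 inputs: dict, output: str) -> dict:
    """Capture 'why this output, for this patient, at this time' as data,
    so the answer never requires reconstructing system state from logs."""
    snapshot = json.dumps(inputs, sort_keys=True)  # canonical serialisation
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient": patient_token,                  # pseudonymised, never raw
        "model_version": model_version,            # pins the exact model used
        "input_hash": hashlib.sha256(snapshot.encode()).hexdigest(),
        "input_snapshot": snapshot,                # the features as seen
        "output": output,
    }

record = audit_record("a1b2c3", "triage-model-2.1",
                      {"age": 64, "heart_rate": 112}, "urgent referral")
```

Writing such records append-only at the moment of inference is an architecture decision; no amount of post-hoc log analysis can recover a snapshot that was never taken.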
Where organisations go wrong
The most common failure is treating compliance as a phase rather than a constraint. The system is designed, built, and integrated. Then the compliance team reviews it. Then the gaps are identified. Then the rework begins.
Rework in a production healthcare system is expensive in every sense. It creates risk during transition, consumes engineering time, and often produces a system that is technically compliant but architecturally compromised — where compliance mechanisms were bolted onto a design that wasn't built to accommodate them.
The right approach
Compliance requirements should enter the architecture design before the first technical decision is made. Not as a checklist, but as constraints that shape the design.
Which data sources are permissible for training, and under what conditions? This determines the data architecture. What oversight mechanisms are required, and at which decision points? This determines the integration design and interface architecture. What audit granularity is required, and for how long must logs be retained? This determines the storage and logging architecture.
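The answers to those questions can be captured as explicit, machine-checked design inputs rather than prose in a review document. A sketch of that idea, with every source name, decision point, and retention figure purely illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComplianceConstraints:
    """Regulatory requirements expressed as design inputs, fixed before
    the first technical decision rather than checked after the last."""
    permitted_training_sources: tuple   # shapes the data architecture
    oversight_decision_points: tuple    # shapes integration and interfaces
    audit_retention_days: int           # shapes storage and logging

    def allows_source(self, source: str) -> bool:
        return source in self.permitted_training_sources

constraints = ComplianceConstraints(
    permitted_training_sources=("consented_ehr_extracts", "synthetic_cohorts"),
    oversight_decision_points=("triage", "discharge"),
    audit_retention_days=3650,  # illustrative; the regulation sets the number
)

# A pipeline component can now refuse a non-permitted source at build time.
assert constraints.allows_source("consented_ehr_extracts")
assert not constraints.allows_source("scraped_web_data")
```

The value is not the code itself but where it sits: a frozen constraints object consumed during the architecture phase makes a non-compliant design fail early instead of surfacing in a review at the end.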
These are not questions for a compliance review at the end of a project. They are requirements that belong at the beginning of the architecture phase — which is why the architecture phase must exist, must be properly resourced, and must be led by someone who understands both the technical and the regulatory landscape.
The honest summary
Healthcare AI compliance is achievable. The organisations that achieve it are not the ones that hired the best lawyers. They are the ones that built compliance into the architecture from the first design conversation.
The ones that didn't are still reworking.