A small appreciation of the EU's regulatory speedrun
The European Union has a habit of shipping regulatory frameworks for technologies the rest of the world is still figuring out it needs to regulate. GDPR. The Digital Services Act. The Digital Markets Act. The cookie banners that now accompany every web browsing experience on Earth, forever, until the heat death of the universe.
And now the AI Act.
You can roll your eyes at the pace of EU regulation, but it has an actual effect. GDPR rewrote how the entire internet handles personal data, including for companies that never planned to do business in Europe. The AI Act is shaping up to do the same for AI systems. Whether or not you have EU customers today, the frameworks that pass in Brussels tend to become the operating system for compliance everywhere else.
So. Four months from now, a lot of the AI Act lands. Here is what engineers actually need to have shipped before then.
What August 2, 2026 actually triggers
August 2, 2026 is when the obligations for high-risk AI systems under Articles 9 through 49 become enforceable. Any high-risk system deployed in the EU, or offered to EU residents, has to be compliant by that date. Penalties for non-compliance scale up to the more punitive of a percentage of global turnover or a fixed ceiling in the tens of millions of euros.
The key word is high-risk. The Act defines four risk tiers:
- Prohibited. Social scoring by governments, cognitive behavioral manipulation. You are either doing these or you are not. If you are not sure, you are not.
- High-risk. AI in education, employment, critical infrastructure, law enforcement, medical devices, credit scoring, biometric identification, and more. This is the tier with the real engineering burden.
- Limited-risk. Chatbots, deepfake generators. Mostly disclosure requirements.
- Minimal-risk. Everything else. No specific obligations.
If you ship AI in one of the high-risk domains, or your AI system is a safety component of a regulated product, you are in scope. A lot of companies are in scope and do not yet know it.
The seven engineering requirements that matter
Strip away the legal language and a high-risk AI system needs the following in production by August. None of these are optional.
1. A documented risk management system
Articles 9 and related. Not a one-time spreadsheet. A living process that identifies known and foreseeable risks, estimates their likelihood and impact, and prescribes mitigations, updated continuously across the system's lifecycle.
What this means in practice: a risk register per AI system, reviewed at a documented cadence, with evidence that mitigations are actually implemented. Your existing enterprise risk management process can be extended. Starting from zero is harder.
2. Data governance for training and evaluation
Article 10. Training, validation, and test data has to be relevant, representative, and as free from errors as possible. Bias in data gets flagged, documented, and mitigated. Lineage from raw data to trained model has to be traceable.
For teams training their own models, this is a data pipeline concern. For teams fine-tuning or using foundation models, it is more about validating that your eval and fine-tune datasets meet the bar, and that the foundation model's provider has credible documentation of their training data practices.
3. Technical documentation
Article 11 and Annex IV. A detailed dossier covering architecture, training methodology, performance metrics, known limitations, testing results, and risk mitigations. Machine-readable format preferred. Regulators can request it.
If the phrase "model card" already makes sense in your org, you are partway there. A model card is a good starting point but typically not sufficient. Annex IV is more thorough.
4. Record-keeping and automatic logging
Articles 12 and 19. High-risk systems have to log automatically throughout their operation. Logs have to enable traceability of system function, identification of situations that may lead to risk, and monitoring of accuracy. Logs have to be retained for a defined period, typically six months or more depending on the system.
Engineering-side: this is immutable application-level logging. Every inference call, inputs, outputs, confidence scores, model version, user identity when applicable. Structured. Indexed. Queryable. Retained in tamper-evident storage.
If your current observability stack already handles this at scale, the lift is to extend it to AI-specific events. If you have nothing, it is the biggest engineering project on this list.
5. Transparency and information provisioning
Article 13. Users of the AI system get clear information on what the system does, its capabilities and limitations, its performance characteristics, and the conditions under which it should be used.
In practice, this is user-facing documentation and in-product disclosure. A page on your site is not enough. The disclosure has to be contextual where relevant.
6. Human oversight
Article 14. The system has to be designed so that humans can effectively oversee it. Intervention points where a human can monitor performance. Ability to interpret outputs. Ability to stop, correct, or override operations. For full-automation scenarios, special care on the design of oversight.
This is both architectural (the system has to expose the surfaces) and procedural (the humans have to be trained and present). Engineers tend to underestimate the procedural half.
7. Accuracy, robustness, and cybersecurity
Article 15. The system has to achieve an appropriate level of accuracy, be resilient to errors and inconsistencies, and be secure against attempts to manipulate it. Accuracy metrics have to be declared and measured. Adversarial robustness has to be tested.
For LLM-based systems, this includes resistance to prompt injection, data extraction attacks, and jailbreaks. For predictive systems, it is more about distribution shift, adversarial inputs, and calibration. Both need eval suites that cover these specifically.
The conformity assessment, CE marking, and EU database
Before a high-risk system is placed on the market in the EU, it needs:
- A conformity assessment (self-assessment for most categories, third-party assessment for some).
- Technical documentation finalized under Annex IV.
- CE marking affixed to the product or its packaging.
- Registration in the EU database for high-risk AI systems.
This is process, not engineering, but the engineering outputs feed into it. If your technical documentation is not ready by July, your CE marking cannot be affixed by August.
The retrofit tax
Retrofitting EU AI Act compliance into a system after launch costs roughly three to five times what building it in would have cost. The reason is not mysterious. Immutable logging is an architectural concern, not a feature. Human oversight surfaces have to be designed in, not bolted on. Risk management processes shape how you decide what to ship, not just how you document it.
Teams that fail their first conformity review are almost always teams that tried to wrap compliance around a system that was not built for it. Teams that pass are almost always teams that started with the requirements in view.
If you are four months out and have not started, the retrofit math is going to hurt. If you are starting fresh, the marginal cost is much lower than it looks.
A pre-deadline checklist
For an engineering team shipping a high-risk AI system to EU users before August 2, 2026:
- Confirm scope. Read Annex III. If your system is high-risk, own that. If you are not sure, get legal involved now, not in July.
- Risk register started, reviewed, and stored. Your first version does not need to be comprehensive. It needs to exist and be updated.
- Technical documentation drafted. Annex IV structure. Model cards plus architecture plus data lineage plus eval results.
- Logging verified. Pull up your log store. Can you produce a record of every inference for the last 30 days, with inputs, outputs, and model version? If not, fix now.
- Eval suite includes adversarial. Prompt injection, data extraction, distribution shift. Not just golden-path accuracy.
- Human oversight designed. Where can a human stop, review, or correct? Are those surfaces in the product today?
- User transparency reviewed. Does the product tell users what they need to know, in context, not buried on a terms page?
- Conformity path identified. Self-assessment vs. third-party. If third-party, notified body engaged.
- EU database registration planned. Target registration date on the calendar.
Nine items. Each is weeks of work, not days. Hence four months is tight if you have not started, and uncomfortable even if you have.
The wider pattern
The AI Act will not be the last major AI regulation. The US is building a patchwork at the state level, with Colorado's AI Act and New York's automated decision systems rules already in force. The UK's AI governance framework is consolidating. Japan, Korea, and Singapore all have something coming.
The specific clauses will differ. The structure will not. Risk classification. Technical documentation. Logging. Human oversight. Accuracy and robustness testing. Build for one well and the others become extensions, not rewrites.
The EU gets there first. It usually does. You can either fight the retrofit tax in July or do the work now. We recommend now.
Kai Token leads AI governance work at Fraktional, helping engineering teams navigate AI compliance frameworks without slowing their shipping cadence. Still waiting for the cookie banner to finally go away.