1 April 2026

Diritto intelligente – Issue N. 17

This month’s issue makes one point very clear: the rules are not catching up with AI fast enough, but they are not stepping back either.

Take the discussion around the definition of personal data under the Digital Omnibus' changes to the GDPR. The decision not to change it may look technical, but in practice it keeps things exactly where they are today: uncertain. For companies training AI models, this means continuing to operate in a grey area where datasets are difficult to qualify as anonymous, and where relying on legitimate interest remains a case-by-case exercise with no real comfort at scale.

The French Conseil d’État decision on advertising identifiers reinforces this. If pseudonymous identifiers can still be treated as personal data because re-identification is not impossible, then many datasets currently used in both AdTech and AI training will remain within the GDPR. That has a very concrete impact: more compliance work, more documentation, and more exposure if things are challenged.

As for the postponement of certain obligations under the AI Act, this is not a softening of the framework. It is recognition that implementation is harder than expected. But the direction of travel has not changed. If anything, it is becoming clearer that companies will be expected to show how their systems are governed, not just how they are designed.

The sector insights in this issue confirm the same pattern. The EIOPA survey shows that AI is already widely used in insurance, but mostly in controlled environments. The hesitation is not about technology; it is about governance, skills, and regulatory clarity. The same applies in financial markets, where ESMA’s guidance on algorithmic trading highlights a growing gap between traditional control frameworks and AI systems that evolve over time.

There is also a shift in enforcement expectations. The joint statement on AI-generated imagery suggests that regulators are looking beyond formal compliance and focusing more on safeguards that actually work in practice, especially where risks for individuals are high.

Put all of this together, and the message is quite simple. The framework is not becoming easier to navigate. It is becoming more demanding, more interconnected, and more dependent on how companies organise themselves internally.

The real question is no longer whether AI can be used. It is whether it can be used in a way that can be explained, justified, and defended.