The European Commission considers pause on AI Act’s entry into application
Since the publication of its original proposed draft in April 2021, the EU Artificial Intelligence Act (AI Act) has received a mixed welcome from the AI industry. While several industry experts, international organizations, and EU Member State governing bodies have lauded its approach to establishing guardrails for the responsible development and use of AI, others (including some Member States) have criticized what they describe as regulatory overreach, heavy burdens, uncertain interpretations, and/or reliance on not-yet-developed resources. This has led industry experts and organizations to call for the EU to reassess its approach to AI regulation.
Staggered application
The AI Act implements a staggered approach to application, with different provisions taking effect at different times. In February 2025, the provisions on “prohibited AI practices” (technologies and uses of AI that are prohibited in the EU due to their perceived substantial potential for harm to individuals and their fundamental rights or safety) took effect. The main body of provisions, including those on high-risk AI systems and systems presenting transparency risks, is intended to take effect on August 2, 2026. Should a pause in application take place, these provisions will likely be its primary focus.
This is of particular interest to corporate users of AI. For most businesses, the list of prohibited AI practices is unlikely to materially impact day-to-day activity. However, AI use cases categorized as high-risk cover a broader array of typical business activities, including AI used in recruitment or job allocation and AI used in credit checking or scoring. Therefore, those racing to implement appropriate controls for such systems will welcome any prospect of additional time.
A pause for contemplation
In May 2025, reports emerged suggesting the European Commission might postpone the application and enforcement of certain provisions of the law yet to take effect. Several likely contributing factors are outlined below. To be clear, at the time of writing, any postponement remains a subject of discussion, and no formal plans have been tabled. As things stand, those designing compliance programs ought to implement them to meet the AI Act’s published timetable rather than banking on a possible delay in entry into application.
However, stemming from a Polish-led initiative, the subject will be discussed at a meeting of EU ministers in the Telecom formation of the Council of the EU, scheduled for next week. The proposal reportedly includes (a) a pause in the entry into application of the AI Act for as long as technical standards remain undeveloped, (b) an expansion of the exemptions for small and midsize enterprises (SMEs) under the AI Act’s high-risk regime, (c) the introduction of waivers for low-complexity AI systems that would otherwise require third-party assessments, and (d) the creation of a cross-regulatory forum to ensure consistency across EU digital regulation.
These latest proposals to modify the application and enforcement of the AI Act take place in the broader context of significant efforts by the European Commission to push for simplification on several fronts (including the AI Act). With the so-called “Omnibus packages,” the EU Commission is trying to meet industry’s demands and streamline some of its most complex legislation, aiming to boost the EU’s competitiveness in an increasingly fragmented geopolitical landscape. The simplification ranges from the EU’s green agenda – most notably its reporting, due diligence, and sustainable finance regulation (Omnibus I and II) [1] – to agricultural legislation (Omnibus III), product regulation and digitalization (Omnibus IV), and defense (Omnibus V). The exercise will also touch upon the General Data Protection Regulation (GDPR), streamlining provisions for SMEs.
Specifically, regarding the AI Act, the EU has recently gathered feedback through a public consultation on the main challenges arising from the implementation of the regulation. As announced by the AI Office, this feedback will inform a potential upcoming AI Act simplification exercise.
Industry pressure
Since 2021, the European Commission has faced continued and mounting pressure from industry to reconsider its approach, including by watering down provisions or abandoning the regulation altogether. Many concerns, particularly in recent months, have focused on the hurdles the AI Act may place in the way of innovation, often associated with uncertainty over how its stringent compliance requirements will apply and the high barriers to entry for smaller organizations. These concerns have persisted despite specific provisions in the regulation to assist SMEs, such as reductions in costs and fees, simplified compliance measures, and accompanying guidance materials on proposed interpretation. For more on this topic, please see our previous alert.
Others have focused their concerns on the European Commission’s supplementary compliance materials, such as the General-Purpose AI Code of Practice, which some critics say extends the law beyond its original purpose and would unfairly burden organizations that develop and use AI broadly across the bloc.
International pressure
International opinion on the AI Act is also evolving, and the legislation’s position as the “international gold standard for AI regulation” is now in question. Changes of administration in both EU Member States and major trading partners, as well as a few teething problems with the implementation of the initial provisions, have played their part in this shift in attitudes. The US government, for example, has strongly voiced its disapproval of the EU approach, citing its potential to impede industry innovation. On February 11, 2025, at the Paris AI Summit, US Vice President JD Vance cautioned governments that excessive regulation of AI “could kill a transformative industry.” The Vice President’s words mark a strong preference under the current US administration for free-market innovation over heightened regulation, a preference reinforced by the AI regulatory moratorium for US state and local entities recently proposed and currently being discussed in Congress.
Furthermore, the US Mission to the EU has recently sent a feedback letter to the European Commission commenting on the current draft of the Code of Practice, suggesting that it be streamlined and that certain provisions be deleted. The disparity between the US and EU approaches to AI regulation, and its potential impact on US organizations, is likely to be a subject of discussion in any future negotiations between the parties as they seek to avoid wider impacts on trade, such as a potential tariff increase on EU goods. Growing international pressure to pause the AI Act’s application until these details are ironed out is therefore expected.
Operative delays
One of the biggest pressures on the AI Act, and a primary reason for calls to pause its application, is timing. Even before the text was finalized, several Member States and international technology organizations voiced concern over the speed at which key provisions were being negotiated. For some, this produced unclear provisions and ambiguous regulation, as the technology kept changing beneath the drafters’ feet. Now that the regulation has been enacted, many organizations with operations in the EU are voicing the opposite concern: the promised release of critical guidance has been delayed and cannot come fast enough to facilitate effective preparation.
Originally intended for release on May 2, 2025, the General-Purpose AI Code of Practice has been delayed, reportedly amid industry discontent with its development. The final deadline under the AI Act for this deliverable is August 2, 2025. In similar fashion, many of the harmonized standards under development by CEN-CENELEC that enable organizations to demonstrate compliance with the AI Act are also delayed. Originally due in August 2025, these key technical and operational standards have now been pushed back until well into 2026, leaving little time for organizations to implement the recommended controls before the next wave of rules comes into force. Even critical guidance on interpretation, such as that on the high-risk AI system provisions, has been delayed. Guidance on prohibited AI practices, for example, was only released on February 4, 2025 – two days after those provisions came into effect – giving organizations little time to consider it. With so many key components lagging behind their intended timelines, critics continue to argue that the AI Act is not yet fit for purpose in regulating AI, and will not be until all of its components are drafted, finalized, and shared with organizations for implementation.
What happens next?
With a pause in application only at the discussion stage, it is currently unclear whether a formal proposal will be presented. However, the upcoming discussions in the Council of the EU may represent the first steps toward one. The EU may take these ongoing pressures as an opportunity to reassess its approach to AI regulation as it continues to implement a broader suite of technology regulation as part of its digital decade. None of this prevents organizations from continuing to align themselves with closely associated international best practices, such as the National Institute of Standards and Technology’s AI Risk Management Framework or the International Organization for Standardization’s standards, which recommend similar controls (without the regulatory requirements) for managing AI risks. Doing so will help organizations pivot their internal controls more quickly as existing and future regulations cohere, both within and outside the EU, and as the industry and legislative environment continue to develop.
Even if the EU does decide to pause the application of some parts of the AI Act, it does not necessarily mean that those implementing AI should see risk management, governance, and the application of effective controls as any less urgent. The wider risks outside EU regulation remain to be addressed in any event – however welcome that sliver of breathing space might be.
Find out more
DLA Piper’s team of AI lawyers, data scientists, and policy experts assists organizations in navigating the complex workings of their AI systems to guide compliance with current and developing regulatory requirements. We continuously monitor updates and developments arising in AI and its impact on industry across the world.
To follow updates on AI in the EU, please refer to the European Commission’s Press corner, and the page of the European AI Office.
For an overview of executive actions taken thus far by the new administration, please see DLA Piper’s Trump Executive Orders resource hub.
For more information on AI and the emerging legal and regulatory standards, please visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI strategy through our AI ChatRoom series.
Sign up for our forthcoming EU Tech Summit, where AI is very much on the agenda, here.
For further information or if you have any questions, please contact any of the authors.
[1] Corporate Sustainability Reporting Directive (CSRD), Corporate Sustainability Due Diligence Directive (CSDDD), Taxonomy regulation, and Carbon Border Adjustment Mechanism (CBAM).