
The AI Act: The First Global Regulation on Artificial Intelligence

The European Union has set a global precedent in the regulation of Artificial Intelligence (AI) with the entry
into force, on August 1, 2024, of Regulation (EU) 2024/1689, better known as the AI Act. This pioneering
legislative act aims to ensure that AI systems are safe, ethical, and trustworthy, while promoting innovation
and the creation of a single digital market within the EU. The primary goal is to balance technological
progress with the protection of fundamental rights and European values, potentially setting a global
standard, much like the General Data Protection Regulation (GDPR) did for data privacy.
The Regulation is part of the EU's broader digital regulatory framework, alongside acts such as the Digital Services Act (DSA), Digital Markets Act (DMA), Data Governance Act (DGA), and Data Act (DA), but it stands out as the only legislation entirely dedicated to governing AI itself.

A Risk-Based Approach: The Four Categories

The AI Act adopts a risk-based approach, classifying AI systems into four categories, each corresponding to
different regulatory obligations:

  1. Unacceptable Risk (Prohibited):
     - Systems that pose a clear threat to people's safety, rights, or livelihoods.
     - Uses such as cognitive-behavioural manipulation, "predictive policing," and emotion recognition in the
       workplace or educational institutions are explicitly banned. Real-time use of remote biometric
       identification systems in public spaces is also prohibited, with some exceptions.
  2. High Risk (Strictly Regulated):
     - Systems with the potential to significantly impact health, safety, and fundamental rights (e.g., medical
       diagnoses, self-driving vehicles, personnel selection).
     - Before being placed on the market, they must undergo a conformity assessment (internal or by a
       third-party notified body) and meet stringent requirements in terms of rigorous testing, transparency,
       and human oversight.
  3. Limited Risk (Transparency Obligations):
     - Systems such as chatbots or content generators (deepfakes).
     - They are primarily subject to transparency obligations, specifically the requirement to inform the user
       that they are interacting with an AI system or with artificially generated content.
  4. Minimal or No Risk (Free Use):
     - The majority of AI systems (e.g., spam filters, video games).
     - They are not subject to obligations beyond existing sectoral legislation. The EU nevertheless encourages
       the voluntary adoption of codes of conduct.

Rules for General-Purpose AI Models (GPAI) and Governance

The Regulation introduces specific rules for general-purpose AI models (GPAI), such as text- or
image-generating models, which must comply with transparency obligations. If these models pose a systemic
risk (due to their power and widespread use), they are subject to more stringent obligations to ensure their
safety.
To ensure correct application, the AI Act establishes a robust European-level governance architecture,
which includes the creation of an AI Office within the European Commission and a European AI Board,
composed of representatives from the Member States. At the national level, Notifying Authorities and
Market Surveillance Authorities are designated.

Finally, administrative pecuniary penalties (fines) are imposed for non-compliance, calculated as a
percentage of the company’s total annual global turnover, with proportionality for SMEs and startups.

AI Act Applicability Timeline (Reg. EU 2024/1689)

Regulation (EU) 2024/1689 entered into force on August 1, 2024, but the full application of its obligations is
staggered to grant operators (providers and deployers) sufficient time for compliance.


| Date of Applicability | Provisions Applied | Key Points |
| --- | --- | --- |
| February 2, 2025 (6 months) | Prohibitions and AI literacy (Chapters I and II) | The most urgent rules come into force: prohibition of AI practices with unacceptable risk (e.g., social scoring, behavioural manipulation, emotion recognition in certain contexts). Obligation to promote AI literacy. |
| August 2, 2025 (12 months) | Governance, general-purpose AI models (GPAI), and penalties | Obligations for providers of GPAI models (including generative AI), particularly those posing a systemic risk. The governance architecture (AI Office and Board) and rules on penalties become applicable. |
| August 2, 2026 (24 months) | Full application (general rules and high-risk) | Most provisions become fully applicable, particularly the stringent obligations for high-risk AI systems (with the exception of those embedded in already regulated products). Obligation to implement regulatory sandboxes. |
| August 2, 2027 (36 months) | High-risk in existing products | Application of obligations for high-risk AI systems that are safety components of products subject to other legislation (e.g., medical devices, motor vehicles). |


The EU also continues to support innovation through the creation of regulatory sandboxes, controlled
environments for the experimentation of innovative AI systems, demonstrating a balanced approach
between regulation and the encouragement of technological progress.
