On February 2, 2025, important provisions of the EU AI Act come into effect.

You can find the most recent text of the AI Act here: https://www.eurlexa.com/act/en/32024R1689/present/text

Anyone Who Deploys AI Can Be Fined

A crucial takeaway is that the Act affects not only AI developers but also any private person or company that deploys AI systems. The AI Act refers to these entities as "deployers." In many cases, the deployed AI will be a third-party system.

The AI Act defines a deployer broadly as follows:

"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity"

For instance, a company using a third-party AI system for customer service, fraud prevention, or an internal procedures chatbot would be classified as a deployer.

General Duties

  • Risk Classification: The Act categorizes AI systems into four risk levels: unacceptable, high, limited, and minimal risk. Unacceptable-risk AI systems are prohibited outright, while high-risk systems face stringent requirements.
  • Compliance Requirements: Providers and deployers of high-risk AI applications must conduct conformity assessments, ensure transparency, and maintain robust governance frameworks. They are also required to ensure that their staff possess adequate AI literacy.
  • Governance Structure: The Act mandates the establishment of a governance framework at both European and national levels, which includes the creation of an AI Office responsible for monitoring and supervising compliance.
  • Penalties for Noncompliance: Penalties for violations can be severe, ranging from €7.5 million or 1.5% of global annual turnover up to €35 million or 7%, whichever is higher, depending on the nature of the infringement.

Obligations for End User Businesses (Deployers)

  • Monitoring and Compliance: Deployers must monitor the operation of high-risk AI systems based on the provider's instructions for use. This includes ensuring that the systems operate safely and effectively within their intended context.
  • Reporting Risks: If deployers suspect that using an AI system according to the provider's instructions may lead to risks as defined by the Act, they are required to inform the provider and cease using the system until the issue is resolved.
  • Transparency Requirements: For AI systems that generate or manipulate content (e.g., deepfakes), deployers must clearly disclose that such content has been AI-generated or manipulated. This is part of broader transparency obligations aimed at ensuring users can interpret and understand AI outputs appropriately.
  • Record Keeping: Deployers need to maintain logs of the AI system's operation to ensure traceability and accountability, which can be crucial for compliance checks and audits.
  • Impact Assessments: Certain deployers, particularly public bodies and private operators providing public services, may be required to conduct fundamental rights impact assessments before deploying high-risk AI systems. This involves evaluating the potential impact of these systems in their specific contexts.

Timeline for Compliance

  • February 2, 2025: The first provisions apply, including the prohibitions on unacceptable-risk AI systems and the AI literacy obligations.
  • August 2, 2025: Obligations for providers of general-purpose AI models take effect, and penalties for noncompliance become applicable.
  • August 2, 2026: Full application of most provisions related to high-risk AI systems.

This staggered implementation gives deployers time to adjust their AI deployment practices.

You can find a more detailed timeline here: https://www.eurlexa.com/act/en/32024R1689/present/timeline

Author: eurlexa