How to comply with the AI Act?

Summary

1. Why comply with the AI Act?

Adopted on May 21, 2024, the AI Act (the European regulation on artificial intelligence) imposes new obligations on every organization that develops or uses AI systems in Europe. Compliance with the AI regulation is both a legal and a strategic matter; it allows organizations:

  • To avoid heavy financial sanctions (up to €35 million or 7% of worldwide annual turnover);
  • To strengthen the trust of customers, partners and collaborators;
  • To promote ethical and sustainable innovation based on transparency and security.
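The sanction ceiling above is whichever amount is higher: the flat cap or the turnover percentage. A minimal sketch of that rule, using hypothetical turnover figures for illustration:

```python
# Illustrative sketch of the AI Act's top sanction tier:
# the higher of EUR 35 million or 7% of worldwide annual turnover.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Return the theoretical ceiling for the most serious infringements."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# For a company with EUR 2 billion turnover, 7% (EUR 140M) exceeds the flat cap.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```

For smaller organizations, the €35 million floor applies instead, which is why the exposure is significant at any scale.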

2. Key steps for effective compliance

Step 1: Raise awareness among teams

Compliance begins with collective awareness. Each employee must understand the challenges of artificial intelligence and the requirements of AI regulations.

Best practices:

  • General training via an AI Act e-learning;
  • Specific sessions for technical, business and legal teams;
  • Creation of an internal AI ethics charter;
  • Regular communication (newsletter, workshops, videos...).

Step 2: Designate an AI Act lead

An AI compliance manager is essential to coordinate actions.

🎯 Possible profiles:

  • DPO (Data Protection Officer)
  • Lawyer specializing in AI/GDPR
  • Compliance or digital ethics manager

This person must have a solid grasp of AI, the GDPR and cybersecurity issues, as well as an internal network of referents and the resources to carry out the necessary actions.

Step 3: Map your artificial intelligence systems (AIS)

Before acting, it is necessary to identify all AI systems used or developed in the organization. This mapping allows you to:

  • List internal and integrated AIs (chatbots, scoring, prediction, generative AI...);
  • Classify each system according to the risk level defined by the AI Act:
    • Unacceptable risk (prohibited)
    • High-risk AI (heavily regulated)
    • General-purpose AI (GPT-type models)
    • Limited or minimal risk (light transparency obligations)
  • Prepare for future documentation and technical obligations.
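The mapping step above can be sketched as a simple inventory that records each system and its risk tier. The system names and tier assignments below are illustrative examples, not legal classifications:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "heavily regulated"
    GENERAL_PURPOSE = "GPAI obligations"
    LOW = "light transparency obligations"

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

# Hypothetical inventory entries, for illustration only.
inventory = [
    AISystem("support-chatbot", "customer Q&A", RiskTier.LOW),
    AISystem("cv-screening", "recruitment scoring", RiskTier.HIGH),
]

# Group systems by tier to prepare the corresponding obligations.
high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH]
print(high_risk)  # ['cv-screening']
```

Keeping the inventory as structured data rather than a spreadsheet makes it easier to re-run classification checks whenever a new system is added.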

Step 4: Bring the identified systems into compliance

Once the AIS have been mapped, it’s time for action:

For high-risk AIs:

  • Impact analysis on AI risks (health, security, fundamental rights)
  • Complete technical documentation (article 11)
  • Data governance (quality, security, representativeness – art. 10)
  • Transparency and human supervision (art. 13-14)
  • Logging, CE marking and registration in the EU database (art. 19 and 48)

For all systems:

  • Deploy an AI quality management system (policies, procedures, proof of compliance)
  • Plan regular internal and external audits

The goal is to demonstrate compliance of every AI system used in the organization at all times.
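One way to demonstrate compliance "at all times" is to maintain a per-tier obligation checklist that can be re-run at every audit. The obligation labels below paraphrase the article headings from the steps above and are a simplification, not legal advice:

```python
# Hypothetical per-tier obligation checklists (simplified labels).
OBLIGATIONS = {
    "high": [
        "risk assessment",
        "technical documentation (art. 11)",
        "data governance (art. 10)",
        "transparency & human oversight (art. 13-14)",
        "logging, CE marking, EU registration",
    ],
    "all": ["quality management system", "regular audits"],
}

def missing_items(tier: str, evidence: set[str]) -> list[str]:
    """Return obligations for which no evidence has been recorded yet."""
    required = OBLIGATIONS["all"] + OBLIGATIONS.get(tier, [])
    return [item for item in required if item not in evidence]

# Example audit: only two obligations are evidenced so far.
gaps = missing_items("high", {"quality management system", "data governance (art. 10)"})
print(gaps)
```

Each audit then reduces to closing the gaps the function reports, which produces the "proof of compliance" the quality management system requires.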

Step 5: Apply the 7 principles of responsible AI

The AI Act is grounded in fundamental values defined by the EU's High-Level Expert Group on AI (AI HLEG). These seven ethical principles guide the action of all entities:

| Principle | Goal |
|---|---|
| Societal and environmental well-being | Prevent harm to people and the planet |
| Transparency and explainability | Make AI understandable and open to challenge |
| Data protection | Respect privacy and secure data |
| Robustness and security | Avoid errors, failures or deviations |
| Responsibility | Take responsibility for decisions and their impacts |
| Justice and equity | Fight against bias, promote inclusion |
| Human control | Preserve human autonomy and dignity |

These principles must permeate all phases of the AI life cycle, from design to updating.
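To make the principles actionable across the life cycle, each one can be turned into a review-gate question asked at design, deployment and update time. The question wording below is illustrative, not taken from the AI Act:

```python
# Illustrative review-gate questions, one per principle (hypothetical wording).
REVIEW_GATES = {
    "well-being": "Have societal and environmental impacts been assessed?",
    "transparency": "Can the system's outputs be explained to users?",
    "data protection": "Is personal data minimised and secured?",
    "robustness": "Are failure modes tested and monitored?",
    "responsibility": "Is an accountable owner assigned?",
    "fairness": "Have bias tests been run on representative data?",
    "human control": "Can a human override or stop the system?",
}

def unanswered(answers: dict[str, bool]) -> list[str]:
    """List principles whose gate question is not yet answered 'yes'."""
    return [p for p in REVIEW_GATES if not answers.get(p, False)]

print(unanswered({"transparency": True, "human control": True}))
```

A release would only pass the gate once the list comes back empty for every phase of the life cycle.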

3. Conclusion: from conformity to trust

Compliance with the AI Act is about more than checking boxes: it's an opportunity to create reliable, ethical and secure AI, aligned with the expectations of users, regulators... and the market.

By following the 5 key steps (awareness, management, mapping, compliance process, fundamental principles), each organization can transform a regulatory constraint into a sustainable competitive advantage.