
Everything you need to know about the AI Act (European Regulation on Artificial Intelligence)

What is the AI Act?

The AI Act, or European regulation on artificial intelligence, is legislation adopted by the European Union in 2024. It aims to govern the development, deployment and use of artificial intelligence systems, to guarantee AI that is ethical, secure and respectful of the fundamental rights of European citizens.

Who does the AI Act apply to?

The AI Act applies to:

  • All companies based in the European Union that develop or use AI systems;
  • All non-EU organizations whose AI systems are used in the EU;
  • All public and private entities, regardless of their size, when they market or operate an AI system.

Which AI systems are affected by the AI Act?

The AI Act sorts AI systems into four categories:

  1. Unacceptable risk: prohibited practices (e.g. social scoring, cognitive manipulation)
  2. High risk: strictly regulated (e.g. safety components, health, education, HR)
  3. General-purpose AI: models such as GPT or Copilot, subject to specific obligations
  4. Limited or minimal risk: light transparency obligations, or none at all
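As a mental model, the categories above behave like a simple enumeration. The sketch below is illustrative only: the tier names are simplified and the example use cases are our own assumptions, not the regulation's exhaustive taxonomy.

```python
from enum import Enum

class RiskLevel(Enum):
    """Simplified risk tiers inspired by the AI Act (illustrative)."""
    UNACCEPTABLE = "prohibited"                # e.g. social scoring
    HIGH = "strictly regulated"                # e.g. health, HR, education
    GENERAL_PURPOSE = "specific obligations"   # e.g. GPT-type models
    MINIMAL = "light or no obligations"

# Hypothetical mapping of example use cases to tiers
EXAMPLES = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "CV screening for recruitment": RiskLevel.HIGH,
    "general-purpose chatbot model": RiskLevel.GENERAL_PURPOSE,
    "spam filter": RiskLevel.MINIMAL,
}

print(EXAMPLES["CV screening for recruitment"].value)  # strictly regulated
```

In practice, classifying a real system requires reading the Act's annexes; this enumeration only shows how the tiers relate to each other.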

Who are the main actors affected by the AI Act?

The regulation defines several roles:

  • Provider: the party that develops the AI system, or has it developed, and places it on the market
  • Deployer: the organization that uses an AI system under its own authority
  • Importer, distributor, authorised representative: every link in the chain is affected

Everyone has different obligations depending on their role and the level of risk of the system used.

What are the AI Act's obligations for high-risk AIs?

AIs classified as high risk must:

  • Be covered by a risk management process (art. 9)
  • Ensure data quality and governance (art. 10)
  • Be described in detailed technical documentation (art. 11)
  • Allow human oversight (art. 14)
  • Guarantee accuracy, robustness and cybersecurity (art. 15)
  • Bear CE marking (art. 48) and be registered in the EU database (art. 49)
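The obligations above lend themselves to a per-system checklist. The sketch below is a hypothetical internal tracking structure, not a schema prescribed by the regulation; the labels simply echo the articles listed above.

```python
# Hypothetical compliance checklist for one high-risk AI system (simplified).
HIGH_RISK_OBLIGATIONS = {
    "risk management system (art. 9)": False,
    "data quality and governance (art. 10)": False,
    "technical documentation (art. 11)": False,
    "human oversight (art. 14)": False,
    "accuracy and robustness (art. 15)": False,
    "CE marking and EU database registration (art. 48-49)": False,
}

def outstanding(checklist: dict[str, bool]) -> list[str]:
    """Return the obligations not yet fulfilled."""
    return [item for item, done in checklist.items() if not done]

print(len(outstanding(HIGH_RISK_OBLIGATIONS)))  # 6
```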

What is the difference between the AI Act and the GDPR?

The GDPR protects personal data while the AI Act regulates artificial intelligence systems.

The two regulations are complementary: an AI system using personal data will have to comply with both the GDPR and the AI Act.

What are the penalties for non-compliance with the AI Act?

Sanctions can go up to:

  • 35 million euros or 7% of worldwide annual turnover, whichever is higher, for the most serious violations;
  • Warnings, market withdrawals or restrictions on use.
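The upper bound works as "whichever is higher" between the fixed amount and the turnover percentage. A minimal illustration of that arithmetic, in integer euros (the function name is ours):

```python
def max_fine(global_turnover_eur: int) -> int:
    """Upper bound for the most serious violations: the higher of
    EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, global_turnover_eur * 7 // 100)

# A company with EUR 1 billion turnover: 7% exceeds the EUR 35 million floor
print(max_fine(1_000_000_000))  # 70000000
# A smaller company: the fixed EUR 35 million ceiling applies instead
print(max_fine(10_000_000))     # 35000000
```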

How do I know if my organization is affected by the AI Act?

Your organization is in scope if:

  • You develop or use an AI system;
  • You are a provider, deployer or integrator of an AI product;
  • You offer your products or services in the European Union.

The risk-based approach is at the heart of the regulation: a map of your AI systems is essential to determine your obligations.
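A first-pass scope check mirrors the criteria above: a presence on the EU market combined with any AI role. This reading of the criteria is our own simplification, and the function below is a hypothetical triage aid, not legal advice.

```python
def in_scope(develops_or_uses_ai: bool,
             provides_or_integrates_ai: bool,
             offers_in_eu: bool) -> bool:
    """Hypothetical first-pass scope check: EU market presence
    plus any involvement with an AI system."""
    return offers_in_eu and (develops_or_uses_ai or provides_or_integrates_ai)

print(in_scope(True, False, True))   # True
print(in_scope(True, True, False))   # False
```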

How to comply with the AI Act?

Here are the recommended steps:

  1. Raise team awareness (e-learning, responsible AI charter)
  2. Appoint an AI Act lead (often the DPO or a dedicated AI officer)
  3. Map your AI systems
  4. Qualify the risk level of each AI
  5. Implement the required technical and documentary measures
  6. Establish AI governance and a quality management system
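Steps 3 and 4 (mapping your systems and qualifying their risk level) can start with something as simple as a structured inventory. The fields below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI registry (sketch)."""
    name: str
    purpose: str
    role: str        # "provider" or "deployer"
    risk_level: str  # "unacceptable" | "high" | "general-purpose" | "minimal"

registry = [
    AISystemRecord("cv-screening", "shortlist job applicants", "deployer", "high"),
    AISystemRecord("support-chatbot", "answer customer questions", "deployer", "minimal"),
]

# Step 4: group systems by risk level to prioritise compliance work
high_risk = [s.name for s in registry if s.risk_level == "high"]
print(high_risk)  # ['cv-screening']
```

Even a spreadsheet with these four columns is enough to begin; the point is that every AI system in the organization appears exactly once, with an explicit risk qualification.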

What ethical principles guide the AI Act?

The AI Act is based on 7 main principles:

  • Societal and environmental well-being
  • Transparency and explainability
  • Data protection and privacy
  • Technical robustness and safety
  • Accountability
  • Fairness and non-discrimination
  • Human agency and oversight

These values should guide each stage of your AI lifecycle.

What tools exist to support companies in their compliance with the AI Act?

You can:

  • Use AI Act compliance software (AI register, risk analysis, documentation, etc.)
  • Get support from GDPR/AI compliance experts
  • Train your employees with an AI Act e-learning course