AI Act: what is it? Understand everything about the new European regulation on artificial intelligence
Summary
- Introduction: Europe facing the rise of AI
- What is the AI Act?
- Who does the AI Act apply to?
- The main principles and levels of risk
- Obligations in the event of high-risk AI
- Transparency, documentation, CE marking
- Sanctions under the AI Act
- Conclusion: towards ethical, regulated and responsible AI
1. Introduction: Europe facing the rise of AI
Artificial intelligence is revolutionizing our lives: health, transport, finance, education, justice... no sector escapes it. Faced with these upheavals, the European Union intends to become a world leader in trustworthy AI, respectful of fundamental rights and digital security.
It is in this context that the European regulation on artificial intelligence (known as the AI Act, RIA, or AIA) comes into play.
2. What is the AI Act?
Adopted in 2024, the AI Act entered into force on August 1, 2024, with its obligations phasing in gradually (prohibitions from February 2025, most provisions from August 2026). It is the first legislation in the world to comprehensively govern the development, marketing and use of artificial intelligence systems.
Main objective: ensuring that AI systems deployed in the European Union are secure, ethical, transparent and non-discriminatory.
This regulation is part of a responsible technological governance approach, complementing the GDPR, DORA (financial sector) and the Cyber Resilience Act (CRA).
3. Who does the AI Act apply to?
The AI Act applies to any public or private organization, regardless of its size or country of origin, as long as it:
- develops, markets, deploys or uses an AI system in the EU;
- or has its AI systems used within the EU, even if the organization itself is established outside Europe.
This includes start-ups, SMEs, mid-sized companies, large companies, public administrations, research institutions, AI solution providers, integrators, distributors, importers...
The regulation is designed to evolve: the EU can adapt it over time to keep pace with technological advances.
4. The main principles and levels of risk
The AI Act is based on a classification of AI systems according to their level of risk:
| Risk level | Regime | Examples |
|---|---|---|
| ❌ Unacceptable risk | Prohibited | Social scoring AI, mass biometric surveillance, cognitive manipulation |
| ⚠️ High risk | Highly regulated | Automated recruitment, academic grading, CV sorting, facial recognition |
| ⚙️ Limited risk | Transparency obligations | Chatbots, generative AI |
| ✅ Minimal risk | Free use | Video games, recommendation filters |
High-risk AI systems are at the heart of the regulation: they are authorized, but subject to numerous obligations.
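The four-tier taxonomy above can be sketched as a simple lookup. This is an illustrative data structure only (the names and labels are ours, not an official API or legal classification):

```python
# Illustrative mapping of the AI Act's four risk tiers to their regime.
# Tier names and regime labels are informal summaries, not legal terms.
RISK_TIERS = {
    "unacceptable": "prohibited",
    "high": "authorized, subject to strict obligations",
    "limited": "authorized, subject to transparency obligations",
    "minimal": "free use",
}

def regime(tier: str) -> str:
    """Return the informal regime summary for a given risk tier."""
    return RISK_TIERS[tier]
```

For example, `regime("unacceptable")` returns `"prohibited"`, reflecting that such systems may not be placed on the EU market at all.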
5. Obligations in the event of high-risk AI
Organizations that design or use high-risk AI systems must:
- Carry out an AI risk analysis (health, security, individual freedoms);
- Implement complete technical documentation;
- Implement a quality management system;
- Provide human supervision;
- Guarantee the accuracy, security and robustness of the system and its data;
- Keep an event log (logs);
- Ensure enhanced transparency towards users;
- Declare conformity, affix the CE marking and register the system in the European database.
These obligations also apply to foundation models and generative AI systems when they are integrated into high-impact uses.
6. Transparency, documentation, CE marking
Even limited risk systems must meet certain obligations:
- Inform the user that they are interacting with an AI;
- Clearly explain how the system works (especially for generative AIs);
- Document potential biases or limitations.
CE marking for AI will become a guarantee of regulatory compliance, similar to that already imposed in the industrial or medical sectors.
7. Sanctions under the AI Act
| Type of violation | Maximum penalty (whichever is higher) |
|---|---|
| Prohibited practices | €35M or 7% of worldwide annual turnover |
| Non-compliance with high-risk AI obligations | €15M or 3% of worldwide annual turnover |
| Lack of cooperation or supply of incorrect information | €7.5M or 1% of worldwide annual turnover |
These penalties have worldwide reach: they can apply even to companies established outside the EU whose systems are used within it.
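Since each cap is the higher of a fixed amount or a percentage of worldwide annual turnover, the arithmetic can be sketched as a simple `max()`. This is an illustration of the calculation rule, not legal advice; the function name and parameters are ours:

```python
# Sketch of the AI Act fine cap: the HIGHER of a fixed amount
# or a percentage of worldwide annual turnover.
def max_penalty(fixed_cap_eur: int, turnover_pct: float, annual_turnover_eur: int) -> float:
    """Return the maximum applicable fine in euros."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Prohibited practice (EUR 35M or 7%) for a company with EUR 1bn turnover:
# 7% of 1bn = EUR 70M, which exceeds EUR 35M, so EUR 70M is the cap.
fine = max_penalty(35_000_000, 0.07, 1_000_000_000)
```

For a smaller company with, say, EUR 100M turnover, 7% is only EUR 7M, so the fixed EUR 35M figure becomes the cap instead.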
8. Conclusion: towards ethical, regulated and responsible AI
The AI Act ushers in a new era for artificial intelligence in Europe: an era where innovation goes hand in hand with responsibility, and where technology serves citizens, businesses and institutions while respecting fundamental rights.
Getting compliant with the AI Act today means anticipating regulatory changes, securing your AI projects, and earning the trust of your users, customers and partners.