Navigating the EU AI Act: All you need to know Part I

Authors: Elija Leib, Federica Pizzuti, Oliver Deiters

Feb 28, 2024

In the ever-evolving landscape of Artificial Intelligence, the EU AI Act stands out as a milestone in global regulation. While various countries and bodies such as the G7, the Council of Europe, and the White House have formulated their own approaches to AI, the EU AI Act is a groundbreaking horizontal regulation covering a vast array of AI use cases.

To equip organizations with a comprehensive understanding, let's delve deeper into the key aspects of this legislation.
The rapid evolution of advanced AI systems has posed a challenge for organizations worldwide. Many are grappling with the complexities of the EU AI Act, the first comprehensive AI legislation of its kind. With its requirements set to apply from early 2026, organizations that use, provide, or develop AI systems are striving to understand the implications and ensure compliance with this complex new law.

What does the EU AI Act contain?


The new law aims to ensure a high level of protection of health, safety and fundamental rights in the European Union by implementing a framework based on the level of risk associated with the application in which the AI system is used.

The Risk Categories

Minimal risk: This category covers applications which are not expected to have a considerable impact on health, safety or fundamental rights, like recommendation systems, AI in video games and spam filters.
Limited risk: This category includes, for example, chatbots and systems used to create deepfakes, which are subject to transparency obligations.
High risk: This category covers applications in areas such as:
  • AI systems used as a safety component of products covered by a list of product safety laws
  • Biometrics
  • Critical infrastructures
  • Education and vocational training
  • Employment and workers management
  • Access to public and private services
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes
Unacceptable risk: The use of these applications will be prohibited in the EU. The ban will apply to systems designed to manipulate human behavior, social scoring systems, specific predictive policing applications and certain biometric systems.
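As a purely illustrative sketch (and not legal advice), the four risk tiers above can be thought of as a classification lookup applied to each AI use case. The tier names mirror the Act's categories, but the example use-case labels and the `triage` helper below are hypothetical shorthand introduced for illustration only:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Illustrative mapping of example use cases to risk tiers.
# The keys are hypothetical shorthand, not legal terms from the Act.
EXAMPLE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "video_game_ai": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,
    "deepfake_generator": RiskTier.LIMITED,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "exam_scoring": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def triage(use_case: str) -> RiskTier:
    """Look up a known example; anything else needs case-by-case legal review."""
    tier = EXAMPLE_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"{use_case!r} requires an individual assessment")
    return tier

print(triage("chatbot").value)  # limited
```

In practice, classifying a real system is a legal assessment of its context and purpose, not a dictionary lookup; the sketch only conveys that the same application logic can fall into different tiers depending on where it is deployed.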
As organizations explore new territory in AI governance, the EU AI Act emerges as a critical benchmark. It underscores the importance of ensuring that AI deployments align with evolving standards, as well as with ethical and legal considerations. Understanding these risk categories is a first step toward navigating the EU AI Act effectively.
Stay tuned. In our next article, we will delve deeper and explain what this means for your organization.