Anticipating the EU AI Act: A Framework for Trustworthy AI

Oct 06, 2023

FAQ AI Act - Managing the complexity of AI regulation

Artificial intelligence (AI) is one of the most influential technologies, transforming organizations and reshaping the competitive landscape. In recent months, the EU has intensified its efforts to regulate AI and to promote the development of trustworthy AI technology. This factsheet provides essential insights into the current status of the AI Act and guides organizations on how to start preparing for compliance.

1. What is the AI Act and why is it significant?
The AI Act is the European Union's legislation aimed at regulating artificial intelligence technologies. It is significant because it seeks to establish a framework for trustworthy and accountable AI use.
2. Why is there an urgency to regulate AI?
Some AI applications could pose tangible risks to people's lives and threaten human rights; these need to be controlled and regulated. Moreover, the sudden arrival of generative AI tools like ChatGPT has increased public concern and political pressure for AI regulation.
3. How does the EU AI Act regulate AI? Does it treat all AI applications in the same way?
The EU's AI Act categorizes AI applications into different levels according to their potential risk: unacceptable risk, high risk, limited risk, or little to no risk. The use of unacceptable-risk applications is prohibited under the regulation, while high-risk and limited-risk applications are subject to different sets of requirements.
4. What repercussions can companies face if they do not comply with the EU AI Act?
Non-compliant companies may face significant penalties, with fines of up to 7% of their global annual revenue under the Parliament's version. It is crucial for businesses to adhere to these regulations in order to avoid such substantial financial consequences.
5. What are examples of high-risk AI applications?
High-risk AI applications are those that could unintentionally cause financial or physical harm or pose risks to fundamental rights. Examples include systems that influence access to essential services such as social benefits, housing, credit, healthcare or employment, or that could lead to safety and security incidents.
6. What are the requirements for high-risk AI applications under the AI Act?
The AI Act outlines a set of requirements, including carrying out risk assessments, conducting conformity assessments, having a quality management system in place, and fulfilling data protection and transparency requirements, among others.
7. What is the timeline for the EU’s AI Act to become law?
Since the European Parliament voted on its negotiating position in June 2023, the two versions of the AI Act – one from the Council and one from the European Parliament – have been under negotiation. The draft rules are expected to become legislation by mid-autumn 2023, with a grace period of one to two years before enforcement, depending on the case.
8. Is Europe the only region working on AI regulation?
No. Other countries such as China, the US, Singapore and Australia are also addressing AI regulation, following different approaches that range from stricter regulations to softer good practices or guidelines.
9. How does the AI Act impact companies deploying AI globally?
Companies will need to ensure compliance with the regulations of each jurisdiction in which they operate. This can create a complex regulatory landscape that may require integrated compliance measures in their AI technology, and over-regulation could potentially stifle AI adoption.
10. How can companies prepare for AI regulations?
Companies, especially those implementing AI models, should start by establishing a responsible AI program, including an AI Quality Management System. This involves defining AI policies and objectives, documenting all AI resources in the company and the procedures along the complete AI lifecycle, and understanding the broad compliance requirements. Starting now is essential, as adapting to the regulations may take time.
11. Who is affected by the AI Act?
The EU AI Act affects companies and organizations that develop, deploy, or use AI systems in the European Union. It applies across various sectors including healthcare, finance, social services, housing, credit, and more. The audience affected includes businesses, developers, policymakers, and consumers who interact with AI-powered systems.
12. How to prepare for upcoming AI regulations?
  • Understand the specific requirements outlined in the AI Act, such as disclosure, certification, transparency, and post-deployment documentation.
  • Establish an AI Quality Management System within your organization.
  • Develop processes for documentation, risk identification, mitigation, and validation.
  • Stay informed about updates and changes in the regulatory landscape.
Would you like to learn more?
Contact us to discover DEKRA's first generation of AI Testing & Certification Services. Let's drive the AI technology revolution and shape a safe, secure and trustworthy future.