Trustworthy AI: How can we build trust in AI security?

Jun 02, 2022

Many companies see tangible scaling opportunities in this technology, and it is to be expected that, in the future, most products and services will either be AI-based or will have used artificial intelligence in their development or manufacture. The question, therefore, is: how can we assure AI security and build trust in this emerging technology?

Build trust in AI security

The use of AI harbors enormous potential for companies, as the figures for a key sub-segment of AI show: machine learning (ML) and self-learning systems. According to studies, the global ML market is expected to grow roughly ten-fold, from around USD 15.5 billion today to more than USD 150 billion by 2028.
However, when it comes to machine learning and other areas of AI, one aspect is drawing growing attention: trust in their security. This applies not only to the companies that use AI-based processes, but also to the users, who accept innovations and new technologies only if they are convinced they are secure. At the moment, AI is still just a buzzword for many people, and the extent to which this technology is already used in our everyday lives is generally underestimated.

DEKRA AI focuses on AI safety & security

As a result, the TIC* sector (*Testing, Inspection & Certification) is also facing new challenges. For example, the use of AI-based systems is generating new customer and market needs that have not yet been adequately defined in regulatory terms. For these reasons, DEKRA is investing in the strategic expansion of the DEKRA AI Hub so that it can assume a pioneering role as a digital TIC player.
Digital & Product Solutions is responsible for the operational management of the DEKRA AI Hub. Dr. Xavier Valero González (Head of Applied AI) has the vision of shaping regulatory issues and trialling them under practical conditions. As an expert, he is supporting the evolution of the regulatory and security environment for AI in Germany and internationally.

Next Steps: regulatory evolution of AI

We are committed to clear standards that are continually adapted to reflect the state of the art. The same applies to the inspection and certification of these AI standards. Alongside its visionary work, the DEKRA AI Hub also serves as a central resource for AI initiatives within the DEKRA Group. At present, the focus is on creating an AI ecosystem centered on quality and security, in conjunction with manufacturers, users, and regulatory bodies.
This ecosystem can be used as the basis for implementing AI projects, optimizing existing processes or products, and creating new and more secure products and services that satisfy DEKRA’s high quality standards.