Certified AI “made in Germany”

Cooperation between Fraunhofer IAIS and the German Federal Office for Information Security (BSI) to develop technical test procedures for the certification of artificial intelligence systems

Press release

Press release on the occasion of the newly established working group between Fraunhofer IAIS and the BSI (8 July 2021)

Making AI secure and trustworthy

Artificial intelligence is a key technology of our time. Intelligent systems can be used in almost all areas of life and are already capable of performing many tasks faster and more reliably than humans. For companies to gain decisive competitive advantages from AI, however, AI systems must be trustworthy and function reliably. This requires verifiable technical standards and norms that enable a neutral evaluation of such systems and that also inform users and consumers about the assured properties of AI technologies.

To advance AI certification “made in Germany”, Fraunhofer IAIS and the German Federal Office for Information Security (BSI) have signed a cooperation agreement. Its aim is the joint development of test procedures that can serve as a basis for technical standards and norms.

Implementation of the cooperation in the KI.NRW flagship project “Certified AI”

The test procedures are being developed in the KI.NRW flagship project “Certified AI”, which started as the initial project of the cooperation at the beginning of 2021. The state-funded project relies on a broad participation process to ensure that the resulting test procedures are suitable for practical application and marketable. In industry- and technology-specific user groups, the participants define concrete needs, establish criteria and benchmarks for testing in practice, and conduct pilot tests. This broad participation process pools the know-how of the stakeholders and ensures that the procedures develop into generally accepted standards for AI systems and their verification. Renowned partners from research and industry are working together in the project, including the University of Bonn, the University of Cologne, RWTH Aachen University, the German Institute for Standardization (DIN), as well as numerous DAX 30 and other companies from sectors such as telecommunications, banking, insurance, chemicals, and retail.

Flagship updates

- KI.NRW flagship CERTIFIED AI at the Hannover Trade Fair 2024
- New: “Develop trustworthy AI applications with foundation models”
- “CERTIFIED AI” organizes an international workshop on trustworthy AI standardization in Singapore
- Press release: DIN – new standard for AI explainability
- Event, 13 June 2023: Certified AI Forum
- Workshop, 10 March 2022: Trustworthy AI Services Evaluation Criteria and Test Procedures
- Trustworthy autonomous driving: in the Fraunhofer magazine, Dr. Michael Mock discusses how autonomous driving can become trustworthy and safe
- Inspection catalog
- Fraunhofer podcast: the flagship project under discussion
- Workshops: various activities are planned as part of the cooperation with DIN

Comments on the cooperation agreement

Prof. Dr. Andreas Pinkwart, Minister of Economic Affairs, Innovation, Digitalisation and Energy of the State of North Rhine-Westphalia
“With our outstanding competencies and the strong KI.NRW network, North Rhine-Westphalia can play a leading role in the further development of the economy and society. To achieve this, we are making the use of artificial intelligence trustworthy and secure. Independent certification of AI systems helps us to do this: it strengthens confidence in modern IT technology and is also recognized internationally as an important competitive advantage. With the development of marketable test procedures, we are approaching this goal in great strides.”

Arne Schönbohm, President of the German Federal Office for Information Security (BSI)
“User confidence is important for the acceptance of new technologies. It is achieved, among other things, through transparent testing, evaluation and certification of AI systems. The basis for uniform standards and norms is the development of test procedures, which we are now tackling with our long-standing partner Fraunhofer IAIS. At the same time, in the NRW Ministry of Economic Affairs we have a reliable partner that creates and promotes good framework conditions for innovation.”

Prof. Dr. Stefan Wrobel, Director of Fraunhofer IAIS
“Fraunhofer IAIS attaches great importance to the development of trustworthy AI solutions and has continuously expanded its research focus on safeguarding AI over the past years. By developing test procedures with a view to AI certification, we are creating reliable standards for the development and evaluation of AI systems. As early as next year, we will conduct the first tests with companies. I am pleased that in the BSI we have a strong partner at our side with many years of experience in establishing IT standards.”

Flagships powered by KI.NRW

With the umbrella brand “Flagships powered by KI.NRW”, the Artificial Intelligence Competence Platform North Rhine-Westphalia designates state-funded projects as AI lighthouse projects. The aim is to support efficient technology transfer and close cooperation between medium-sized companies, start-ups, universities, universities of applied sciences and research institutes in NRW.

Essential contents of the cooperation agreement between Fraunhofer IAIS and the BSI will be developed and implemented within the KI.NRW flagship project “Certified AI”, which is funded by the State of North Rhine-Westphalia. Under its strategic patronage, the KI.NRW competence platform provides communications support for the funded projects and positions NRW as an AI location by marketing the results at European level. The focus is on the sustainable transfer and further exploitation of the project results.

Certified AI

The experts at Fraunhofer IAIS already laid important foundations for the development of an AI certification in an interdisciplinary research project with scientists from the fields of computer science, law and philosophy. Together, they identified the central fields of action and formulated initial guidelines for the development of a test catalog for the certification of AI systems. The results were published in 2019 in the whitepaper “Trustworthy Use of Artificial Intelligence”.

Interdisciplinary fields of action for the development of an AI certification

As a basis for the certification of AI systems, seven fields of action were defined in interdisciplinary cooperation (see figure). The development of the technical test procedures builds on these fields.

Ethics and Law

Does the AI application respect social values and laws?

Autonomy & Control

Is self-determined, effective use of the AI possible?

Fairness

Does the AI treat all persons concerned fairly?

Transparency

Are the AI functions and the decisions made by the AI comprehensible?

Reliability

Does the AI work reliably and is it robust?

Security

Is the AI protected against attacks, accidents, and errors?

Data Protection

Does the AI protect privacy and other sensitive information?

Building on the development of technical test procedures, the interdisciplinary discourse on the design of ethical and legal frameworks is also to be continued. The goal is to strengthen the trust and acceptance of companies, users and social actors in AI-based applications. The development of the test catalog is based on the recommendations of the German Federal Government’s Data Ethics Commission and the European Union’s High-Level Expert Group on AI, and is to take into account technical quality criteria such as reliability and security as well as criteria of transparency and fairness.