Dr. Maximilian Poretschkin
Senior Data Scientist & Project Lead AI Certification

Certification to ensure trustworthy AI

An initiative of the competence platform KI.NRW

»The primary goal of certification for trustworthy AI is to develop concrete quality and security standards against which AI applications can be evaluated expertly and neutrally. In addition, it is important to contribute to the public dialogue on AI in order to build trust. To this end, we have published a white paper setting out fields of action for the trustworthy use of AI, which forms the basis of our testing catalogue for AI certification.«

Prof. Dr. Andreas Pinkwart
Minister for Economic Affairs, Innovation, Digitalisation and Energy of the State of North Rhine-Westphalia

»Thanks to KI.NRW, North Rhine-Westphalia is further expanding its leading role in applied AI. The certification of AI applications ensures that this future technology is used ethically and responsibly, with people at the centre of our attention. At the same time, certification promotes free competition between different providers.«

Social impetus from interdisciplinary experts

In order to sustainably increase trust in AI, an interdisciplinary research team of the Universities of Bonn and Cologne and Fraunhofer IAIS, in cooperation with the Federal Office for Information Security (BSI), has developed a certification for AI systems that tests not only technical reliability but also the responsible use of the technology. This makes the state of North Rhine-Westphalia a pioneer in the development of AI certification based on expert and neutral testing. The aim is to strengthen the trust and acceptance of companies, users and social actors in AI-based applications. The underlying fields of action, examined from philosophical, ethical, legal and technological perspectives, were published in a white paper as a contribution to the public debate on the trustworthy use of AI and to the further development of the certification.

Whitepaper: Trustworthy use of artificial intelligence

The team from the Universities of Bonn and Cologne and Fraunhofer IAIS presents its interdisciplinary approach to the certification of AI applications in a white paper.

Download Whitepaper (English version)*
Download Whitepaper (German version)*

*Link to the institute website of Fraunhofer IAIS

Bonn Catalogue

Seven fields of action for reliable and trustworthy AI

The declared aim is to develop a catalogue of tests that builds on existing guidelines such as the recommendations of the German Federal Government's »Data Ethics Commission« or the European Union's »High-Level Expert Group on AI«. The creation of the catalogue takes into account technical quality criteria such as reliability and safety as well as ethical and moral criteria such as transparency and fairness.

Ethics and law

Does the AI application respect social values and laws?

Autonomy and control

Is a self-determined, effective use of the AI possible?

Fairness

Does the AI treat all affected persons fairly?

Transparency

Are the functions and decisions of the AI comprehensible?

Reliability

Does the AI function reliably?

Security

Is the AI secure against attacks, accidents and errors?

Data protection

Does the AI protect privacy and other sensitive information?

Further information on AI certification, the development of the test catalogue and AI protection can be found on the website of Fraunhofer IAIS.

In cooperation with

Prof. Dr. Dr. Frauke Rostalski
Chair of Criminal Law, Criminal Procedure Law, Legal Philosophy and Comparative Law at the University of Cologne
Prof. Dr. Markus Gabriel
Director of the Center for Science and Thought, University of Bonn

Funded by