
“Developing trustworthy AI applications with foundation models”

26.01.2024

The KI.NRW flagship project “CERTIFIED AI” explains how to systematically address the specific risks of foundation models

Foundation models for text, speech and image processing hold great potential for the economy and society. But how can such models be used to build AI applications that are innovative, safe and trustworthy?

Against the backdrop of the European AI Act, which requires a conformity assessment of high-risk AI systems, the authors of the white paper “Developing trustworthy AI applications with foundation models” show how the trustworthiness of an AI application developed with foundation models can be assessed and ensured. Particular attention is paid to the fact that specific risks of such models can carry over into the respective application and must therefore also be taken into account when assessing its trustworthiness.

The white paper also introduces the technical structure of foundation models and explains how AI applications can be built on them. In addition to an overview of the resulting risks, it outlines which requirements for AI applications and foundation models can be expected under the European Union’s forthcoming AI regulation. Finally, it presents a systematic approach and procedure for fulfilling trustworthiness requirements.

The white paper was published as part of the KI.NRW flagship project “CERTIFIED AI”, in which the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS is working with the German Federal Office for Information Security (BSI), the German Institute for Standardisation (DIN) and other research partners to develop test procedures for the certification of artificial intelligence systems.