Glossary: AI Keywords

There is a wide range of terms used in the context of AI.
To get a first overview, we have listed the most important and frequently used vocabulary related to this topic.

Adversarial Learning

Adversarial Learning is an attempt to make a model more robust against attacks by learning with so-called adversarial examples. These examples are deliberately perturbed to induce false results.
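The idea can be sketched for a hypothetical linear classifier (the weights and inputs below are made up for illustration): stepping each feature against the sign of its weight pushes the score down as fast as possible, the linear-model analogue of the fast gradient sign method.

```python
import math

# Hypothetical linear classifier: score > 0 -> "dog", else "cat".
weights = [0.8, -0.5, 0.3]

def predict(x):
    score = sum(w * xi for w, xi in zip(weights, x))
    return "dog" if score > 0 else "cat"

x = [0.2, 0.1, 0.1]            # clean input, classified as "dog"

# Adversarial perturbation: a small step against the sign of each
# weight deliberately drives the score below the decision boundary.
eps = 0.3
x_adv = [xi - eps * math.copysign(1, w) for xi, w in zip(x, weights)]

print(predict(x), "->", predict(x_adv))  # dog -> cat
```

Adversarial learning would now add such perturbed examples, with their correct labels, back into the training data.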

Algorithm, learning algorithm

In computer science, an algorithm is an exact computational specification for solving a task. A learning algorithm is an algorithm that receives sample data (learning data or training data) and computes a model for the seen data that can be applied to new sample data.
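A minimal sketch of this distinction (the threshold rule and the sample values are invented for illustration): the learning algorithm receives labelled training data, computes a model, and that model is then applied to new samples.

```python
def learn_threshold(samples):
    """Learning algorithm: from (value, label) pairs with labels in
    {0, 1}, compute a model -- here simply the midpoint between the
    two class means."""
    zeros = [v for v, y in samples if y == 0]
    ones = [v for v, y in samples if y == 1]
    threshold = (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2
    return lambda v: 1 if v >= threshold else 0

training_data = [(1.0, 0), (1.5, 0), (2.0, 0), (4.0, 1), (4.5, 1), (5.0, 1)]
model = learn_threshold(training_data)

# The learned model is applied to new, unseen samples.
print(model(1.2), model(4.8))  # 0 1
```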

Artificial Intelligence (AI)

Artificial Intelligence is a branch of computer science that deals with the automation of intelligent behavior. There is no fixed definition of what »intelligent« means, nor of which technology is used. One of the foundations of modern artificial intelligence is Machine Learning. Other important methods are logical reasoning on symbolic knowledge, knowledge representation and planning methods. In expert circles, a distinction is made between Strong AI and Weak AI.

Artificial Neural Network (ANN)

Artificial Neural Networks are machine learning models that are modeled on the brain’s natural neural networks. They consist of many layers of nodes realized in software, called artificial neurons. Using examples, a learning algorithm changes the weights, numerical values at the connections between nodes, until the results are good enough for the task. The number of nodes, layers, and their interconnections have a significant effect on the model’s ability to solve tasks.
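A single artificial neuron can illustrate the principle (a sketch, not tied to any particular library; the learning rate and epoch count are arbitrary choices): a weighted sum feeds a step activation, and the perceptron rule nudges the weights after each example until the outputs fit.

```python
def step(z):
    return 1 if z > 0 else 0

def train(examples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = step(w[0] * x[0] + w[1] * x[1] + b)
            err = target - out
            # Perceptron rule: adjust weights toward the correct output.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND from four labelled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([step(w[0] * x[0] + w[1] * x[1] + b) for x, _ in data])  # [0, 0, 0, 1]
```

A deep network stacks many such neurons in layers and replaces the step rule by gradient-based weight updates.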

Assistance Systems

Digital Assistance Systems optimize cooperation between humans and computers. They can be found in numerous fields of activity: from document management in the commercial sector, to voice assistants that answer questions or take instructions, to production and assembly, where artificial intelligence methods support humans depending on the context.

Autonomous Driving

Autonomous Driving means that an autonomous system fully takes over the driver’s tasks – reliably under a wide range of conditions. Autonomous driving is increasingly based on artificial intelligence. In order to reliably master even complex situations in road traffic, safeguarding machine perception and the decisions derived from it is one of the challenges of autonomous driving.

Autonomous Systems

Autonomous Systems are devices and software systems that act and react independently without human control and without pre-programmed sequences. They are to be distinguished from automated systems, which execute predefined sequences of actions but cannot change them independently. To respond situationally, autonomous devices must have sensors and software systems must observe digital data streams. The behavior is usually trained by machine learning and can be continuously improved.

Big Data

Big Data refers to quantities of data that are too large, too complex, too fast-moving or too weakly structured to be managed and analyzed using conventional database systems.

Bot

A bot is a computer program that processes recurring tasks largely automatically or autonomously. Examples that could benefit from machine learning are chatbots, social bots or gamebots.

Certification

Current efforts to develop a test catalog for AI applications are aimed at enabling certification of AI applications. The standards set in this way should make it possible to assess the quality of AI applications in a differentiated manner, contribute to transparency in the market, and promote acceptance in the application.

Cognitive machines or cognitive systems

Cognitive machines or systems are alternative terms for artificially intelligent systems, or for artificial intelligence in general. They are characterized by capabilities for learning and reasoning as well as for language processing, image processing and interaction with the user.

Data Mining

Data Mining is the application of statistical and machine learning methods to detect patterns, trends or correlations in existing databases.

Data Science

Data Science is an interdisciplinary scientific field that deals with methods, processes and algorithms for extracting knowledge from structured and unstructured data. In the professional field of a Data Scientist, knowledge of mathematics, business administration, computer science and statistics is required. A Data Scientist identifies and analyzes available data resources, determines needs and develops concepts to use the data profitably.

Deep Learning (DL)

Deep Learning is machine learning in artificial neural networks with many layers, each composed of a large number of artificial neurons. Deep Learning is responsible for the successes in speech, text, image and video processing.

Diffusion models

Diffusion models can generate data similar to their training data. As generative AI models, they can generate images based on a text prompt. This is achieved by adding Gaussian noise to the training images and training the model to denoise the image again. The trained model can then generate an image from random noise that resembles its training images.
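The forward (noising) half of this process is simple enough to sketch (a toy example; the four "pixel" values and the noise schedule `beta` are made up): each step mixes the image with fresh Gaussian noise, so after many steps nothing of the original remains. Training the reverse, denoising direction is the hard part and is omitted here.

```python
import random

random.seed(0)

def add_noise(pixels, beta):
    """One forward diffusion step: shrink the signal slightly and mix
    in Gaussian noise; beta controls the noise added per step."""
    keep = (1 - beta) ** 0.5
    return [keep * p + beta ** 0.5 * random.gauss(0, 1) for p in pixels]

image = [0.9, 0.1, 0.5, 0.7]   # toy "image" of four pixel values
x = image
for _ in range(100):
    x = add_noise(x, beta=0.05)

print(x)  # after many steps: essentially pure Gaussian noise
```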

Discriminative AI

Discriminative AI models learn to distinguish and classify data. Unlike generative AI models, which generate new data, discriminative models classify input data into known categories, such as animal images into dog or cat images.

Distributed AI or »learning on the edge«

With machine learning in the cloud, the model is only in the cloud. To train and apply it, the end devices must send all the raw data to the server. With distributed AI, the models stay in the end devices. Instead of the raw data, the models are uploaded to the cloud, combined with each other there, and distributed again. In this way, each end device benefits from the training on all other end devices. The data protection-friendly concept of edge computing goes hand in hand with savings in computing times, communication effort and costs, and an increase in security against cyberattacks.
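The combination step on the server is often plain averaging of the uploaded weights, as in federated averaging. A minimal sketch (the three devices and their weight values are invented; real systems also weight by local dataset size):

```python
def federated_average(device_models):
    """Average each weight position across all uploaded device models;
    raw training data never leaves the devices."""
    n = len(device_models)
    return [sum(ws) / n for ws in zip(*device_models)]

# Weights trained locally on three hypothetical end devices.
device_models = [
    [0.8, -0.2, 0.5],
    [0.6, -0.1, 0.7],
    [1.0, -0.3, 0.3],
]

global_model = federated_average(device_models)
print(global_model)  # roughly [0.8, -0.2, 0.5], redistributed to devices
```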

Explainable AI

Black-box models, deep Artificial Neural Networks in particular, arrive at their outputs in ways that are incomprehensible to humans. Explainable AI searches for ways to make this hidden logic, or individual outputs, more comprehensible and explainable.

Foundation Models

Foundation Models are large machine learning models that have been trained on the basis of a large amount of general data. After this pre-training, the models can be fine-tuned for a variety of specific tasks.

A well-known example of foundation models are large language models (LLMs), which have billions of parameters and can handle complex Natural Language Processing (NLP) tasks such as text classification, text generation, language translation, sentiment analysis and question-answer systems. In addition to their use in language models, there are also visual and multimodal foundation models that generate images from text, for example.

Generative AI

Generative AI models are used to generate new data that has similar statistical properties to a given data set. For example, they can generate text, images, audio, video, code, 3D models or simulations that follow the user’s instructions.

Hybrid AI

Hybrid AI combines data-based machine learning, knowledge representation, and logical reasoning. Knowledge and the respective reasoning are directly introduced into the learning process, for example to emulate the human ability of contextual understanding and to make the AI system more robust overall.

Internet of Things (IoT)

The Internet of Things refers to the cross-linking of physical products, machines, vehicles, etc., which means that they can exchange data with each other or provide data via the Internet. The information collected can be used to automate processes or build autonomous systems.

Knowledge representation

In order to represent knowledge formally, different methods of knowledge representation are applied, e.g. ontologies, classes, semantic networks or rule systems. The expert systems of the 1980s consisted of such knowledge bases. Today, rule systems are popular for programming chatbots.
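A rule system can be sketched in a few lines (the facts and rules below are invented toy knowledge): if-then rules are applied to a working set of facts over and over until nothing new can be derived, which is the basic forward-chaining mechanism behind classic expert systems.

```python
# Each rule: if all conditions hold, the conclusion may be added.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]

def infer(facts):
    """Forward chaining: fire rules until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_feathers", "can_fly"}))
# derives "is_bird" first, which then triggers "nests_in_trees"
```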

Large Language Models (LLMs)

Large Language Models are foundation models that have been trained on large amounts of text data to process natural language. The models learn to continue text by establishing statistical relationships between words, building up knowledge about the syntax, semantics and ontology of the language.

After this pre-training, the models can be fine-tuned for their specific use, e.g. as a chatbot. Their transformer architecture allows for efficient processing of large amounts of data and the consideration of remote dependencies in data.

Machine Learning (ML)

Machine Learning aims to generate knowledge from experience by having learning algorithms develop a complex model from examples. The model can then be applied to new, potentially unknown data of the same type. Thus, machine learning does not require manual knowledge input or explicit programming of a solution path.

Model

A model is an abstraction of reality. In machine learning, a learning algorithm creates a model that generalizes the input data. The model can then also be applied to new data.

Multimodal AI

While unimodal AI systems can only process or generate one type of data, multimodal AI can handle different types of data, such as text, images and audio. Multimodal models are therefore more flexible because they are trained on different types of data.

Natural Language Processing (NLP) or Machine Language Processing

Natural Language Processing comprises techniques for recognizing, interpreting and generating natural language, both spoken and written. This includes transcribing spoken language, recognizing sentiment, extracting information from text, machine translation, and conducting conversations.

Predictive Maintenance (PM)

Predictive Maintenance aims to detect a malfunction before it occurs. Data collected by sensors is evaluated in real time to detect wear or faults in the production chain early, maintain the relevant production equipment in good time and ultimately avoid downtime or faulty production. Machine learning is a successful method for predicting failures.

Quantum Computers

Quantum Computers base their elementary computational steps on quantum mechanical states – called qubits – instead of the binary states (bits) in digital computers. Qubits are processed using quantum mechanical principles, which is expected to provide a huge speed advantage for some applications. The new computer architectures are also expected to unlock potential for machine learning, and thus for quantum AI.

Realtime

Realtime means that a system is constantly ready for operation and can perform all reactions and computing steps within a defined, short period of time.

Reinforcement Learning

In Reinforcement Learning, the learning algorithm receives occasional feedback for interactions with the environment and learns to better assess the likelihood of success of each action in different situations. Reinforcement learning is popular for autonomous systems and games.
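A tabular Q-learning sketch makes the "occasional feedback" concrete (the corridor environment, learning rate 0.1 and discount 0.9 are invented for illustration): only reaching the right end of a five-cell corridor gives a reward, yet that reward propagates backwards through the value estimates until "right" looks better than "left" in every cell.

```python
import random

random.seed(1)

n_states, goal = 5, 4
q = {(s, a): 0.0 for s in range(n_states) for a in ("left", "right")}

for _ in range(500):                              # training episodes
    s = 0
    while s != goal:
        a = random.choice(("left", "right"))      # explore randomly
        s2 = max(s - 1, 0) if a == "left" else min(s + 1, goal)
        reward = 1.0 if s2 == goal else 0.0       # occasional feedback
        best_next = max(q[(s2, "left")], q[(s2, "right")])
        # Q-update: move the estimate toward reward + discounted future value.
        q[(s, a)] += 0.1 * (reward + 0.9 * best_next - q[(s, a)])
        s = s2

# After training, "right" is valued higher than "left" everywhere.
print(all(q[(s, "right")] > q[(s, "left")] for s in range(goal)))
```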

Robots

Robots are machines or devices that aim to take over certain physical and communicative tasks from humans. Typical examples are service and industrial robots. The autonomy of robotic systems increases to the extent that they can independently solve complex tasks through machine learning. Fully autonomous vehicles are an example of this.

Strong AI or General Artificial Intelligence

Strong AI stands for the vision of using AI techniques to emulate human intelligence to its full extent and outside of individual, narrowly defined fields of action. Strong AI can so far only be found in science fiction. Since Artificial Intelligence emerged in the 1950s, there have been predictions that strong AI would become feasible in a few decades.

Supervised Learning

In Supervised Learning, the training data consists of examples with input and output. The model is supposed to learn a function that also predicts correctly on new examples. To determine the quality of the model, one trains it with only part of the available data and tests the final model on the rest.
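The train/test workflow can be sketched as follows (the data, the 8/3 split and the deliberately trivial majority-class "model" are invented for illustration): fit on the training part, then measure quality only on the held-out part.

```python
import random

random.seed(42)

# Labelled examples: input x with output label int(x > 5).
data = [(x, int(x > 5)) for x in range(11)]
random.shuffle(data)
train, test = data[:8], data[8:]          # hold out part of the data

# Trivial "model": always predict the most common training label.
labels = [y for _, y in train]
majority = max(set(labels), key=labels.count)
model = lambda x: majority

# Quality is measured on examples the model has never seen.
accuracy = sum(model(x) == y for x, y in test) / len(test)
print(f"accuracy on held-out data: {accuracy:.2f}")
```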

Transformer

A Transformer is a deep learning architecture that uses an attention mechanism to map relationships between words. Due to the efficient processing of large amounts of data and the consideration of remote dependencies in data, transformer models are used in machine language processing for understanding, translating or generating text, as well as in image processing. Transformers are best known for their use in large language models.
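The core of the attention mechanism can be sketched in a few lines (a toy self-attention over three made-up token vectors, without the learned projection matrices of a real transformer): each position's output is a weighted mix of all values, with weights from a softmax over query-key similarities, which is how every token can relate to every other token regardless of distance.

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)    # how much each position attends to
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three token representations, two dimensions each (made-up numbers).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(tokens, tokens, tokens)   # self-attention
print(result)
```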

Trustworthy AI

Only Trustworthy AI applications guarantee IT security, control, legal certainty, accountability and transparency. For this reason, guidelines for the ethical design of artificial intelligence are being developed within companies and at the societal and political level. This focuses, for example, on the dimensions of ethics and law, fairness, autonomy and control, transparency, reliability, security and privacy.

Turing Test

The Turing Test was designed by British mathematician Alan Turing to assess the intelligence of artificial systems. If a human communicating simultaneously with an artificial system and a human interlocutor cannot ultimately determine which interlocutor is the human, the system is considered intelligent. Nowadays, such systems are called chatbots.

Weak AI

Weak AI uses AI methods to solve narrowly defined tasks. While it can already surpass human capabilities in individual areas, such as image analysis, Weak AI falls far short of reaching the same level for broader context tasks or tasks that require world knowledge. All current AI solutions are examples of Weak AI.