AI Myths

“Artificial intelligence” is a dazzling term surrounded by many myths, and we want to dispel some of them. The answers below are taken from the video “Artificial Intelligence – True or False? What’s the truth about AI myths?” by the Fraunhofer IUK-Verbund Technologie and were translated into English using AI from Fraunhofer.

Misconception No. 1

“AI is destroying jobs because cognitive systems can do many jobs more accurately, faster and with fewer errors.”

Answer: “It is indeed the case today that cognitive systems are very capable of taking over tasks for which a human would have had to study. As a result, cognitive systems are being used more and more – for example, to plan complicated logistics chains and delivery systems. These are difficult tasks that still pose a major challenge for human experts, but cognitive systems handle them robustly, accurately and very precisely. From that perspective, we can expect upheavals in the job market in the near future. Many job profiles will be supplemented and supported by these cognitive systems, and some may also disappear. At the same time, however, the labor market will then need people who are able to create and operate cognitive systems and to optimize them for the tasks they are supposed to solve.” Prof. Christian Bauckhage
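
To make the kind of planning task mentioned above concrete, here is a minimal sketch of a delivery-routing problem. The stops and coordinates are hypothetical, and the greedy nearest-neighbour heuristic stands in for the far more sophisticated solvers real logistics systems use.

```python
# A minimal sketch of a delivery-routing task (hypothetical stops and
# coordinates). Real logistics systems use far more sophisticated solvers;
# this greedy nearest-neighbour heuristic only illustrates the problem shape.
import math

stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (1, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

route = ["depot"]
remaining = set(stops) - {"depot"}
while remaining:
    # Greedily visit the closest unvisited stop next.
    nxt = min(remaining, key=lambda s: dist(stops[route[-1]], stops[s]))
    route.append(nxt)
    remaining.remove(nxt)

print(route)  # -> ['depot', 'A', 'C', 'B']
```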

Misconception No. 4

“Decisions made by AI systems are more neutral and objective than those made by humans.”

Answer: “I would not say that. First of all, AI has no intrinsic motivation, no self-interest in being neutral or objective. How objective the machine’s decisions turn out to be will always depend on the training material and on the intention of the trainer. In the end, the machine is trained to process an input and deliver an output. An example from medicine: if I train the machine to detect malignant changes in the liver in CT scans, it will not be able to find malignant changes in the spleen, kidney or lung – in that sense, its decisions are anything but objective. At the end of the day, the machine is only as objective as its training data. Especially in medicine, one usually wants to include factors that cannot be objectified, such as the patient’s will. That is where the direct comparison ends. We will need people to take responsibility for machine-based decisions.” Dr. Markus Wenzel
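
The point that a model can only answer the question posed by its training data can be shown in a few lines. The sketch below is a toy illustration with made-up data, not the speaker’s actual system: the classifier’s output space is fixed by its training labels, so a model trained only on liver findings has no way to report anything about the spleen.

```python
# Toy illustration with made-up data (not a real medical model): a trained
# classifier can only answer the question posed by its training labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in "CT features"; label 0 = no liver finding, 1 = malignant liver change.
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] > 0).astype(int)   # hypothetical labelling rule

clf = LogisticRegression().fit(X_train, y_train)

# Whatever scan comes in, the model can only ever emit 0 or 1 about the liver.
# A malignant change in the spleen or lung lies outside its label space.
print(clf.classes_)                          # [0 1] -- no "spleen" class exists
print(clf.predict(rng.normal(size=(1, 5))))  # always 0 or 1
```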

Misconception No. 7

“If we are not careful, AIs will have their own goals, consciousness and dreams in the foreseeable future.”

Answer: “This myth touches on philosophical questions and raises expectations that no one can fulfill. After all, AI systems are not really ‘intelligent’ in any robust sense – not in the way a human can be intelligent. In fact, I think this myth is dangerous, because it raises expectations – among politicians, among funders – that scientists cannot satisfy. And when the inevitable disappointment sets in, all the good scientific results in this field are devalued along with it. Science has always been there to fight myths – it should definitely fight this one.” Dr. Wolfgang Koch

Misconception No. 8

“AI creates artificial brains.”

Answer: “Well, this has been discussed for as long as the term ‘artificial intelligence’ has existed. But it’s not as if we are building artificial brains or artificial humans – just as an aircraft engineer is not trying to build an artificial bird. He just wants to design something that flies. Likewise, we want to build machines that can perform elementary cognitive tasks that actually require intelligence. Once you understand exactly how this works technically, it doesn’t seem very intelligent at all. Nevertheless, machines or mechanisms that perform elementary cognitive tasks have already reached our everyday lives. There are already systems that can drive cars. Our cell phones can understand our speech or even translate it simultaneously. But even if you teach a machine an individual skill so well that it becomes better than humans – as in lip reading, for example – you still won’t end up calling it an intelligent device.” Dr. Hans Meine

Misconception No. 2

“AI is beyond human control and, in the worst case, can act against the will of its developers.”

Answer: “I would consider this myth to be false at this stage. It is true that AI systems can come up with solutions to their tasks that were not foreseeable in this way. They can surprise their developers by providing new answers that no one had thought of before. However, it is not the case that AI systems exhibit creativity of their own or pursue their own goals. We really cannot see AI evolving beyond our control. That is science fiction – and if it ever happens, it is a very, very long way off.” Prof. Christian Bauckhage

Misconception No. 5

“Often, even the creators of AI algorithms can no longer tell why exactly the machine chose a particular solution path or made a particular decision.”

Answer: “Until a few years ago, I would have fully agreed with this statement, because it really was the case that complex AI systems, such as neural networks, were not comprehensible – you could not watch these systems arrive at their solutions. They were used as black boxes, and one trusted that they would do the right thing. Unfortunately, that is not always the case. In our research, we found that an AI system – one that reliably classified horse images – did not look at the horse in the image at all, but at a copyright tag that appeared on many of the horse images. In practice, of course, you don’t want something like that. In critical applications, like autonomous driving or medicine, you want to make sure not only that the result is correct, but also that the solution path behind it makes sense and is correct. Three years ago, with colleagues from TU Berlin, we developed a general technique that allows us to see on what basis an AI system made its decision. This is a first step towards so-called ‘transparent AI’ and also debunks this myth a bit.” Dr. Wojciech Samek
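
The transcript does not name the technique developed with TU Berlin, so the sketch below only illustrates the general idea with a simpler relative: a gradient saliency map that attributes a toy model’s decision back to individual input pixels. If the attribution concentrated on a copyright tag rather than the horse, the wrong cue would be exposed.

```python
# Illustrative only: a gradient saliency map on a toy classifier, a simple
# relative of the transparency techniques discussed above (the transcript
# does not name the exact method). All shapes and classes are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(          # toy "image" classifier
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 2),           # e.g. class 0 = "no horse", class 1 = "horse"
)
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
score = model(image)[0, 1]      # score of the "horse" class
score.backward()                # gradient of that score w.r.t. every pixel

# Pixels with large absolute gradients are the ones the decision is most
# sensitive to. If they cluster on a copyright tag instead of the horse,
# the model has learned the wrong cue -- exactly the failure described above.
saliency = image.grad.abs().squeeze()
print(saliency.shape)           # torch.Size([28, 28])
```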

Misconception No. 9

“Autonomously controlled vehicles make road traffic safer because the human factor, as a source of uncertainty, is eliminated.”

Answer: “There is something to that, but you have to look at it in a more differentiated way. All living beings link their sensory impressions with what they have already learned and what others tell them. From this, they form a mental model of their environment and can act appropriately to the situation. When drivers are supported by an AI-based system, this system automates the gathering and processing of information. As a result, humans become ‘smarter’ and can act in a more situationally appropriate way in road traffic. Of course, any form of automation can also pose a danger. One must therefore investigate and understand the dangers of AI as an automation tool.” Dr. Wolfgang Koch

Misconception No. 3

“With the help of artificial intelligence, machines will be smarter than humans in the foreseeable future.”

Answer: “The question of whether machines will be more intelligent than humans is actually impossible to answer. We should be talking about artificial cognition rather than artificial intelligence. We now see that machines are very good at analyzing images and speech signals, reading texts, and also making plans. For example, there is a system that beat the world champion in the game of Go. In this respect, machines have really become very good. For such specialized tasks, it’s quite conceivable that they will be better than human experts in the very near future.” Prof. Christian Bauckhage

Misconception No. 6

“AI algorithms are so complex that entire data centers are needed to run them. They are therefore not usable by the average consumer in everyday life.”

Answer: “This myth is not correct. Even though you need powerful computers, and especially graphics cards, to train these models, graphics cards are available today for little money, so they are affordable for private users and anyone can train neural networks. More importantly, in recent years there has been increased research aimed at simplifying and compressing these complex models, which have millions of parameters and require gigabytes of memory. That way, the algorithms are not only available to home users but can also run on a smartphone or across the entire IoT domain. This research has shown very good results, and I am very hopeful that these techniques will become even more widespread in the coming years.” Dr. Wojciech Samek
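
As a concrete taste of this line of work, the sketch below applies PyTorch’s built-in post-training dynamic quantization to a toy network, shrinking its weights from 32-bit floats to 8-bit integers. The transcript does not specify the speaker’s own compression methods, so this is only an illustration of the general idea.

```python
# Illustrative only: PyTorch post-training dynamic quantization on a toy
# network. The speaker's specific compression methods are not named in the
# transcript; this just shows the general idea of shrinking a trained model.
import torch
import torch.nn as nn

model = nn.Sequential(          # hypothetical float32 network
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Convert Linear weights to 8-bit integers; activations are quantized on the
# fly at inference time, cutting memory roughly 4x for those layers.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.rand(1, 1024)
print(quantized(x).shape)       # same interface as the original model
```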

Misconception No. 10

“AI systems can be influenced: an artificial intelligence will always make its decisions in line with the intentions of its programmer or client.”

Answer: “I would not generalize that. The reason AI systems are used is that data exists which describes a problem; one wants the AI to solve this problem and to find the truth in the data. If the system can be manipulated, it is primarily through manipulation of that data. As an example: in 2016, Microsoft unleashed the chatbot Tay on Twitter with the goal of learning to communicate with users and from users. After a while, however, some users figured out that the algorithm was capable of learning, and how it learned. These users then wrote to the bot with racist and antisemitic content. The bot picked up on this communication behavior and after some time started posting racist and antisemitic content itself. After 16 hours of operation, Microsoft took the bot offline to prevent further damage. This is an example of how an AI does not always act in line with its creator’s intent.” Sebastian Lapuschkin
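
The failure mode behind the Tay story fits in a few lines: a system that keeps learning from user input is steered by whoever supplies that input. The sketch below is purely illustrative – Tay’s actual architecture is not public – using a toy bot that simply parrots its most frequent input.

```python
# Purely illustrative (Tay's real architecture is not public): a bot that
# keeps learning from user input can be steered by whoever supplies it.
from collections import Counter

class OnlineBot:
    """Toy bot that parrots back the phrase it has seen most often."""

    def __init__(self):
        self.phrases = Counter()

    def learn(self, message: str) -> None:
        self.phrases[message] += 1

    def reply(self) -> str:
        return self.phrases.most_common(1)[0][0]

bot = OnlineBot()
for _ in range(10):
    bot.learn("have a nice day")
print(bot.reply())              # -> "have a nice day"

# A coordinated group floods the bot with harmful input ...
for _ in range(100):
    bot.learn("<harmful content>")
print(bot.reply())              # -> "<harmful content>" now dominates
```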