
Source: Neue Zürcher Zeitung, 30 May 2020

Machines that behave like humans in everyday situations: that is the goal of AI research. But common sense can sometimes be quite unhealthy.

The recent history of artificial intelligence (AI) is undoubtedly impressive. With neural networks and machine learning, research has taken a decisive step towards programs that adapt to their environment. There are speech, face and pattern recognition systems of astonishing adaptability, and the expectations and visions of the AI community are correspondingly high. On the horizon, so-called artificial general intelligence (AGI) is beginning to shine: a machine with a “healthy artificial mind”. Its first, so to speak infantile, forms are currently being tried out in self-driving cars. The artifacts are learning to adapt to situations; they are becoming adaptive, like natural creatures. And this confronts us not only with technical problems but also with philosophical ones. One of them is: what does it actually mean to react with common sense in everyday situations?

Descartes’ reservations

The question already occupied Descartes. In a famous section of the “Discours”, he speaks of the universality of reason, which knows how to assert itself in all situations of life. Even if machines “did many things just as well, or perhaps better, than one of us”, they would “inevitably fail at some others and thus show (. . .) that they do not act from insight, but only from the disposition of their organs. For while reason is a universal instrument that serves in all sorts of situations, these organs need a special disposition for every particular action, and it is therefore morally (practically; ed.) impossible that a machine should contain enough different organs to let it act in all the situations of life in the way our reason enables us to act.”

These are strikingly modern words, and they aim at the core of today’s problem of learning machines. If we substitute “neural network” for “organ” and “learning algorithm” for “disposition”, Descartes’ text reads as a reservation against an artificial common sense: learning machines will never act from insight, because their construction principle does not allow for universal reason. So far, computers have been idiot savants.

Computer engineers would counter that they need no multitude of organs, just a potent algorithm plus an immense, possibly pre-structured, amount of data for it to plough through. Deep learning does in fact work according to surprisingly simple principles, which is why the distant goal of an artificial common sense is said to be achievable “in principle”. The emphasis is on “distant”. Until now, the new artificial systems have excelled in games, that is, in clearly defined settings with fixed rules and one primary goal: to win. But a self-driving car cannot simply win. Its functioning depends on numerous contingencies: delivering passengers to the right destination on time, observing the traffic rules, taking weather and road conditions into account, and coping with uncertainties such as pedestrians crossing where they should not, broken traffic lights, traffic jams or accidents.

Unhealthy machine understanding

A self-driving car, for example, has registered countless red signals during its training and stored something like a concept of red in its neural network. Under normal conditions this works quite well, but abnormal situations must always be expected. And as it turns out, very small disturbances of the learned pattern are often enough to lead the algorithm to a total, and possibly fatal, misclassification. It is precisely this openness of real situations that has so far been the major obstacle on the road to artificial intelligence with common sense.
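
How small such a disturbance can be is easiest to see in code. Below is a minimal sketch in the spirit of the “fast gradient sign” attacks described in the research literature; the toy linear classifier, its weights and all numbers are illustrative assumptions, not any real vehicle’s perception system.

```python
import numpy as np

# Toy stand-in for a trained classifier: score > 0 means "red signal".
# Weights and inputs are invented for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=784)              # weights for a 28x28 "image"

def classify(x):
    return "red signal" if x @ w > 0 else "no red signal"

# An input the model confidently classifies as a red signal.
x = rng.normal(size=784) * 0.1 + 0.05 * np.sign(w)
print(classify(x))                    # -> red signal

# Gradient-sign attack: push every pixel a tiny step against the
# gradient of the score (for a linear model the gradient is just w).
epsilon = 0.06
x_adv = x - epsilon * np.sign(w)
print(classify(x_adv))                # -> no red signal
```

Each pixel changes by only a small amount, yet the verdict flips completely; for deep networks the same effect has been demonstrated with perturbations invisible to the human eye.
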
Another example illustrates the point. YouTube developed an algorithm with the aim of maximizing the time users spend on the video portal. The algorithm did this by recommending videos with ever more extreme content, according to the principle of “upping the ante”: raise the stakes. One user recounts watching a few videos about Donald Trump’s election campaign and then being inundated with racist, conspiracy-theorist and other offensive material. The algorithm thus “interprets” its task in a highly idiosyncratic, even pig-headed way, which leads to unintended effects such as radicalization and polarization. Hardly a sign of “healthy” machine understanding.
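
The escalation dynamic can be caricatured in a few lines. The simulation below is a deliberately crude assumption about how pure engagement maximization behaves, not YouTube’s actual system: each video has an “intensity”, predicted watch time peaks just above the user’s current taste, and watching shifts that taste upward.

```python
# Hypothetical caricature of an engagement-maximizing recommender.
def predicted_watch_time(taste, intensity):
    # Engagement peaks for content slightly MORE intense than the
    # user's current taste; the +0.1 offset drives the escalation.
    return 1.0 - abs(taste + 0.1 - intensity)

catalog = [i / 100 for i in range(101)]   # intensities 0.00 .. 1.00
taste = 0.2                               # user starts with mild content

for step in range(8):
    # Greedy choice: recommend whatever maximizes predicted engagement.
    video = max(catalog, key=lambda v: predicted_watch_time(taste, v))
    taste = 0.5 * taste + 0.5 * video     # taste drifts toward what is watched
    print(f"step {step}: recommended intensity {video:.2f}")
```

Run it and the recommended intensity climbs steadily from 0.30 towards 0.65, although no line of code says “radicalize the user”: the escalation falls out of the objective itself.
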
Designers are therefore looking for a new approach. It comes from the computer scientist Stuart Russell and goes by the name of “human-compatible machines”. Such machines start from scratch, so to speak. Instead of encoding and maximizing a predetermined goal, they learn to infer such a goal from human behavior and then to improve their own behavior accordingly. This is called inverse reinforcement learning. It is linked to the expectation that an orientation towards human behavior will let the machine act more compatibly, that is, with more common sense.
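
In its textbook form, inverse reinforcement learning means inferring a reward function from observed choices instead of being handed one. The sketch below shows the idea under strong simplifying assumptions (a three-action world, a softmax model of human choice, invented feature values); Russell’s actual proposals are considerably richer.

```python
import numpy as np

# Minimal inverse-reinforcement-learning sketch (illustrative only).
# Each action has features; the human's hidden reward is a weighted
# sum of them. We observe choices and fit weights that explain them.
features = np.array([
    [1.0, 0.0],   # action 0: fast but unsafe
    [0.5, 0.5],   # action 1: balanced
    [0.0, 1.0],   # action 2: slow but safe
])
true_w = np.array([0.2, 1.0])         # hidden: the human values safety

def choice_probs(w):
    # Softmax choice model: the human mostly picks high-reward actions.
    z = np.exp(features @ w)
    return z / z.sum()

rng = np.random.default_rng(1)
demos = rng.choice(3, size=500, p=choice_probs(true_w))

# Fit w by gradient ascent on the log-likelihood of the demonstrations.
w = np.zeros(2)
counts = np.bincount(demos, minlength=3) / len(demos)
for _ in range(2000):
    w += 0.1 * features.T @ (counts - choice_probs(w))

# Rewards are only identified up to shifts that leave behavior
# unchanged, but the machine now ranks the actions as the human does.
print("inferred weights:", w.round(2))
print("machine prefers action", int(np.argmax(features @ w)))
```

The machine ends up preferring the safe action because the human did, not because anyone programmed safety into it; that is the hoped-for gain in compatibility.
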
Yet doubts remain. The first is whether humans are suitable as role models for an AI system at all. Humans are, after all, not logical beings. Their behavior is fed by a dense, implicit web of expectations, preferences, opinions and motives that can hardly ever be completely unbundled into an explicit formalism. Secondly, our preferences and desires change constantly, and they are often guided not by rationally reconstructable reasons but by irrational moods and whims, which are frequently vague or even contradictory. And thirdly: what if humans are, at bottom, guided by bad motives? Should machines then learn to optimize this wickedness? Experiences such as those with YouTube’s and other derailed algorithms feed a not exactly optimistic vision of the future.

Back to the original question

Healthy computer understanding in fact throws us back to the original question: what does it mean to behave like a human being, indeed, to be a human being? For one thing, we do not learn the way AI systems do. We do not need to see 10,000 cat pictures in order to form a reliable category “cat”. Rather, we develop expectations of how things might go, and on that basis we make predictions. In perception, we naturally infer the hidden parts of a thing without having any data about them. And we develop an intuition for the difference between correlation and causality: the rain is not the cause of people putting up their umbrellas; their desire to stay dry is. It is such cognitive traits that contribute significantly to our common sense, to our embodied mindedness, so to speak.

This is now beginning to dawn on some AI researchers. No less a figure than Rodney Brooks of the Massachusetts Institute of Technology, a luminary of the field, recently questioned a core assumption of the entire AI project: artificial systems may run up against a limit of complexity because they are made of the wrong substance. That is to say, the fact that robots are not made of flesh could make a bigger difference than he, Brooks, had previously assumed. The riddle of the human mind lies in its embodiment.

And that is why AI research will have to focus more on the specific substance we are made of. It will have to think more biologically. Robotics is already beginning to experiment with animal cells that develop according to a program: xenobots. But let us beware of prematurely painting a future scenario of smart organoid devices, and look instead at the real problem. Artificial intelligence remains deeply alien to us. With it, we are basically creating our own aliens. And these aliens, despite all efforts, are unlikely to adapt themselves to our everyday lives; rather, we will adapt our everyday lives to them. The problem, then, is not super-smart machines but sub-smart humans.