To evaluate their sensor data, autonomous cars require powerful, adaptive computer systems directly on board. For this purpose, artificial intelligence must be packed into a compact embedded form.
Traffic is full of complex scenarios that the computer systems of self-driving cars must decipher. On-board machine learning algorithms help with this.
The world’s data centers handle millions of user requests every second. Artificial intelligence algorithms and powerful hardware help, for example, to recognize faces in mobile phone photos and to understand voice input for smart home systems. For a number of applications, however, the calculation cannot be performed on central servers or in the cloud: if there is no sufficient data connection at the user’s location, or if results must be available particularly quickly, the computing capacity is needed on the spot. A prime example of an application with these requirements is assistance systems in vehicles, which are ultimately intended to eliminate the need for human control. In many cases, this comes down to very fast image and object recognition: the individual traffic situation, including road conditions, vehicles, pedestrians, traffic lights and traffic signs, must typically be interpreted in real time on the basis of optical sensors. “These systems in vehicles must be able to respond within milliseconds. Outsourcing the calculations to a server is completely impossible,” explains Axel Jantsch from the Institute of Computer Technology at the Vienna University of Technology. Artificial intelligence must therefore do its work on board the vehicle, despite all the limitations that apply there in terms of computing power, available energy and space.
Jantsch is working on approaches to make such applications more powerful. In the new Christian Doppler laboratory for “Embedded Machine Learning”, which he will lead, the development of small computer systems “embedded” in vehicles or machines is to be optimised. Ultimately, the aim is to harness as much computing power as possible for machine learning applications in a small space. The CD laboratory is supported by the Ministry of Economic Affairs. The industrial partners are Mission Embedded, Siemens Austria and AVL List; TU Graz is a further scientific partner.

Little space and energy

“We typically have only one to two cubic meters of space and an energy budget of a few watts or milliwatts available for a system,” says Jantsch, describing the limited resources that must be taken into account when developing the embedded systems. At the same time, however, the systems are expected to offer new capabilities. For example, they are to remain capable of learning during use. Until now, machine learning algorithms have been trained on historical data so that they can then be applied to new cases in operation. “In the future, data from ongoing operation could also be collected and used for a further training phase in the cloud. The improved algorithm is then rolled out to the local system as an update,” says the computer scientist. Basically, there are three platforms on which such systems can be built. The first are so-called GPUs, i.e. graphics chips, which are well suited for deep learning applications. The second option is to tailor the hardware even more closely to a specific application using FPGAs: “programmable” computing cores whose logic circuits developers can design themselves. “In this case the design process is more complex, but you can cover a range of requirements that cannot be met with the graphics chips,” explains Jantsch.
Finally, the systems can also be implemented as so-called ASICs: application-specific integrated circuits that are manufactured industrially. Their advantage lies in even higher efficiency; the disadvantage is that their production only pays off economically at high quantities. For Jantsch and his colleagues, the main task now is to develop methods for configuring the hardware and software of embedded systems as well as possible for their application, and to make this work easily reproducible for other users. Jantsch: “The derived methodologies are cast into software tools that will make the development of embedded systems easier in the future.”
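The collect–retrain–update cycle Jantsch describes can be illustrated with a minimal sketch. Everything here is hypothetical: the class names, the toy threshold “model”, and the simulated cloud-side training step are illustrative assumptions, not an actual API or method from the CD laboratory.

```python
# Hypothetical sketch of the update cycle: an embedded device collects
# data during operation, a (simulated) cloud retrains the model, and the
# improved model is rolled back out to the device as an update.
from dataclasses import dataclass, field

@dataclass
class Model:
    version: int = 1
    threshold: float = 0.5  # toy "parameter" for a trivial classifier

@dataclass
class EmbeddedDevice:
    model: Model = field(default_factory=Model)
    collected: list = field(default_factory=list)

    def infer(self, x: float) -> bool:
        """On-board inference: must answer locally, within milliseconds."""
        return x > self.model.threshold

    def log_sample(self, x: float, label: bool) -> None:
        """Collect data from ongoing operation for a later training phase."""
        self.collected.append((x, label))

    def apply_update(self, new_model: Model) -> None:
        """Install the improved model pushed from the cloud."""
        self.model = new_model
        self.collected.clear()

def cloud_retrain(old: Model, samples) -> Model:
    """Simulated cloud-side training: choose a threshold that separates
    the logged positive and negative samples."""
    pos = [x for x, y in samples if y]
    neg = [x for x, y in samples if not y]
    if not pos or not neg:
        return old  # not enough data to improve on the current model
    return Model(version=old.version + 1,
                 threshold=(min(pos) + max(neg)) / 2)

# One round of the cycle: operate, upload, retrain, update.
device = EmbeddedDevice()
for x, y in [(0.9, True), (0.8, True), (0.2, False), (0.3, False)]:
    device.log_sample(x, y)
device.apply_update(cloud_retrain(device.model, device.collected))
```

The point of the split is that only the cheap `infer` step runs on the resource-constrained device, while the expensive training phase happens in the cloud between updates.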