
Source: Les Echos as of 09-03-2020

AI will only succeed if it inspires confidence in consumers and citizens. Making its decisions explainable and transparent is a challenge that experts in psychology and pedagogy are trying to meet.

By Jacques Henno

On 5 February, the court in The Hague banned the Dutch government from using SyRI, a piece of software that detects welfare fraud. The Dutch administration had refused to disclose the computer code of this automated system, which cross-referenced different data (tax returns, amounts of benefits received…) and targeted the poorest populations. “We hope that this precedent will encourage states to publish the code of the algorithms they use,” said Amos Toh, who followed the trial for Human Rights Watch (HRW), the American NGO where he is a researcher in charge of AI and human rights. “This would allow third-party organizations to verify that these programs work properly, and citizens to understand the reasoning behind decisions that concern them.”

“Black Boxes”

Automatic decision-making systems, whether based on logical models (expert systems, etc.) or on statistics (machine learning, deep learning, etc.), are beginning to affect the lives of millions of citizens, consumers and employees: tracking down tax evasion, recruitment, facial recognition, selection for higher education, monitoring the productivity of workers and warehouse handlers, granting bank loans, chatbots…


With each breakthrough, experts point out that these programs are just “black boxes”. “Too much energy and time is spent trying to understand how certain algorithms work and reach their decisions,” sighs Marie David, a graduate of École polytechnique and Ensae (the French national school of statistics and economic administration), who led data and AI teams in banking and insurance and co-wrote “Artificial Intelligence, the New Barbarity” (Editions du Rocher).

The stakes of this algorithmic transparency are legal, economic and societal. Legal? “European laws, such as the GDPR, and French laws impose a form of transparency about the logic behind automated processing of personal data,” says Grégory Flandin, director of the Artificial Intelligence for Critical Systems program at IRT Saint-Exupéry, a Toulouse-based technology research institute. “The GDPR has a strong impact,” confirms Jean-Philippe Desbiolles, IBM’s global vice president for AI and data. “Internal audits of the absence of bias, and of the transparency and explainability of AI, have been on the rise among our clients in recent months. Trust in these systems is paramount to their adoption.” All AI platform publishers offer their users tools detailing, up to a point (see below), how their software works. And the big audit and consulting firms are hiring specialists. “For the past seven months, I have been head of AI ethics at EY,” says Ansgar Koene, a computer science researcher at the University of Nottingham in England and author of a guide on algorithmic transparency published for the European Commission in April 2019.

Ethical problems

The economic stakes? “George Akerlof, the 2001 Nobel laureate in economics, showed that if a market becomes opaque to some of its players, it risks collapsing,” recalls Patrick Waelbroeck, professor of economics at Telecom ParisTech and co-founder of the Values and Policies of Personal Information chair. “This is what could happen to AI if it is not made more transparent.”


Societal issues? “The risk is that consumers or citizens will not only reject decisions they don’t understand, but also the companies or states that use such software,” says Anne Bouverot, president of Technicolor and of the Abeona Foundation (“data science for equity and equality”). “All the ethical issues raised by AI have already been addressed,” says Teemu Roos, who teaches AI and data science at the University of Helsinki, in Finland, “except algorithmic transparency and accountability. That is why we need a particular educational effort on these two aspects.” Initiatives are multiplying, on two fronts: training as many citizens as possible on the major issues raised by these algorithms, and finding ways to explain an AI’s decision-making process simply to non-specialists.

Impact on privacy

From the beginning of April, everyone in France will be able to learn about AI, thanks to two MOOCs. One, already available in six European languages, comes from Helsinki: “Launched in May 2018, Elements of AI has been followed by 362,000 people, only a third of whom are Finnish,” says Megan Schaible, chief operating officer at Reaktor, the Helsinki-based technology consultancy that designed the MOOC with Teemu Roos. The other course, Objective AI, is the result of a collaboration between the Institut Montaigne think tank, the online-learning specialist OpenClassrooms and the Abeona Foundation.

“Transparency by design”, the psychology of explanation, the impact on privacy… initiatives on how best to explain a piece of software’s output to the general public are multiplying. “We want transparency towards candidates to be built into our next artificial intelligence platforms from the moment they are designed,” says Steve O’Brien, head of operations at Job.com, a US online recruitment site that is reorganizing around AI. “What is a good explanation? How do you evaluate its usefulness? All of these psychological issues are also part of our program,” says Matt Turek, head of the XAI (Explainable Artificial Intelligence) project at Darpa, the research agency of the US Department of Defense. Many academics are working on methods, such as Quantitative Input Influence, that measure the relative weight of a given parameter in an AI system’s decision-making. Yair Zick, who teaches computer science at the National University of Singapore, is one of them. He does, however, draw attention to the privacy impact of these explanatory systems: “The more information you give about an algorithm, the more you give other users the opportunity to apply it to other people and learn more about them!” he warns. Transparency, too, has its downsides.
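To make the idea concrete, here is a minimal sketch in Python of the intuition behind such influence measures: resample one input feature from its own distribution and count how often the model’s decision flips. This is an illustrative toy, not the actual QII method studied by Zick and his co-authors, and the names used (marginal_input_influence, predict, X) are hypothetical.

```python
import numpy as np

def marginal_input_influence(predict, X, feature, n_rounds=30, rng=None):
    """Toy estimate of how much one feature sways a model's decisions.

    Sketch of the intuition behind Quantitative Input Influence (QII):
    break the link between `feature` and the outcome by resampling that
    column from its own marginal distribution, then measure how often
    the model's decision changes. `predict` is any function mapping a
    2-D array of inputs to class labels (e.g. a fitted model.predict).
    """
    rng = np.random.default_rng(rng)
    baseline = predict(X)              # decisions on the unmodified data
    flip_rates = []
    for _ in range(n_rounds):
        X_perturbed = X.copy()
        # A random permutation of the column draws from the feature's
        # marginal distribution while leaving the rest of X intact.
        X_perturbed[:, feature] = rng.permutation(X[:, feature])
        flip_rates.append(np.mean(predict(X_perturbed) != baseline))
    return float(np.mean(flip_rates))  # average decision-flip rate
```

A feature with near-zero influence barely changes any decision when randomized; a heavily weighted one flips many. Zick’s warning applies here too: publishing such scores for a live system leaks information about how it treats other people’s data.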

Jacques Henno

@jhennoparis
