They offer a toolbox of solutions for this purpose.
On April 15, just under sixty researchers specializing in AI, from institutions and companies in the United States and Europe (Stanford, Oxford, Google, Berkeley, Intel, the École Normale Supérieure de Paris), published a report advocating a more reliable and ethical technology. Among their proposals is the establishment of a "bias bounty", modeled on the bug bounty in computer security.
Suspicions of "ethics washing"
The publication by the 59 researchers stems from the observation that the range of applications of AI is expanding every day and gradually integrating into everyday life. According to them, this proliferation remains unregulated and framed by too few standards.
The question of ethics arose early in the development of AI, with many researchers steeped in science fiction. According to the collective, the measures taken so far are insufficient and scattered: each company and university applies its own ethical standards, which could give rise to suspicions of "ethics washing". As with greenwashing (the original expression), the authors speak of ethical measures put on display but little respected in reality.
The text is the result of an exchange held a year ago in San Francisco between 35 academics, industry members, and association representatives. It is now well established that artificial brains are not immune to prejudice: discriminatory facial recognition, deepfakes, job destruction, invasions of privacy and freedoms. The arrival of increasingly sophisticated AI raises new ethical questions.
The report explains: "With the rapid technical advances in artificial intelligence and the spread of AI-based applications in recent years, there is a growing question about how to ensure that the development and deployment of AI is beneficial, and not harmful, to humanity."
More transparency on AI technologies
In order for the public to have confidence in AI, and for the technology to benefit society, the article reviews various processes: the creation of a registry where all AI-related bugs would be recorded; standards bodies; well-known computer-security strategies such as red teams; bias bounties… Each suggestion is presented with its strengths and flaws, and the article is written as a toolbox open to all.
The authors mainly call on companies to be more open about the technologies they develop and to show greater transparency. On the public research side, they note in particular the weakness of investment in research. This call from researchers in both private and public AI, often at prestigious institutions, may be able to shift things a little.