
Source: France Soir, 29 July 2020

It’s hard to express yourself on social media when you know you can be attacked or harassed by anyone. To make users less afraid of gratuitous insults, a young man from Nice created the Bodyguard app a few years ago to help eliminate aggressive and hateful comments.

Artificial intelligence capable of spotting hateful content
In a proven cyberharassment situation, or simply when one is afraid of exposing oneself publicly on the internet, the Bodyguard app can be very useful because it removes the toxicity from conversations. To achieve this feat, its algorithm was trained to recognize the different types of hateful comments: insults, threats, mockery, homophobia, and sexual or moral harassment. Even insults disguised with misspellings, abbreviations, or asterisks that hint at the word without spelling it out can be detected by the artificial intelligence.
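To illustrate how such obfuscations might be caught, here is a minimal Python sketch. It is not Bodyguard's actual code: the insult list, the substitution map, and the masking symbols are invented for the example. The idea is to normalize common character substitutions first, then allow masking symbols to stand in for inner letters when matching.

```python
import re

# Illustrative sketch only, not Bodyguard's code: detect an insult even when
# it is obfuscated with character substitutions ("1di0t") or masking symbols
# ("id*ot"). The insult list and symbol set are placeholders.

LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s"})
KNOWN_INSULTS = ["idiot", "moron"]  # stand-in examples

def masked_pattern(word: str) -> "re.Pattern":
    # First and last letters must appear literally; inner letters may be
    # replaced by a masking symbol such as "*" or "#".
    inner = [f"(?:{re.escape(c)}|[*#._])" for c in word[1:-1]]
    pattern = re.escape(word[0]) + "".join(inner) + re.escape(word[-1])
    return re.compile(pattern, re.IGNORECASE)

PATTERNS = [masked_pattern(w) for w in KNOWN_INSULTS]

def is_insulting(comment: str) -> bool:
    normalized = comment.lower().translate(LEET_MAP)  # undo 1 -> i, 0 -> o, ...
    return any(p.search(normalized) for p in PATTERNS)

print(is_insulting("You are an id*ot"))  # True: asterisk masks one letter
print(is_insulting("You are an 1di0t"))  # True: leetspeak normalized first
print(is_insulting("Have a nice day"))   # False
```

A real system would of course combine this kind of pattern matching with a trained model, as the article describes, rather than rely on a fixed word list.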
In practical terms, the user must give Bodyguard permission to connect to their social media accounts, in order to manage the comments under YouTube videos, for example, or to block insulting users on Twitter. It is possible to set several levels of “tolerance” and to specify, for example, whether racist, homophobic, or sexually harassing messages should be filtered out. The algorithm can therefore let jokes or mockery pass, but censor clearly hateful content.
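A hypothetical sketch of what such per-category tolerance settings could look like follows; the category names, levels, and threshold logic are assumptions for illustration, not Bodyguard's real API.

```python
from dataclasses import dataclass, field

# Hypothetical settings model, invented for illustration: each hate
# category gets its own tolerance level, and a comment is hidden when
# its severity exceeds the user's tolerance for that category.

@dataclass
class ModerationSettings:
    # 0 = zero tolerance (censor everything), 2 = only censor severe cases
    tolerance: dict = field(default_factory=lambda: {
        "racism": 0,
        "homophobia": 0,
        "sexual_harassment": 0,
        "mockery": 2,  # let jokes and light mockery pass
    })

def should_hide(category: str, severity: int, settings: ModerationSettings) -> bool:
    """Hide a comment when its severity exceeds the user's tolerance."""
    return severity > settings.tolerance.get(category, 1)

settings = ModerationSettings()
print(should_hide("racism", 1, settings))   # True: zero tolerance applies
print(should_hide("mockery", 1, settings))  # False: mild mockery passes
```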

Bodyguard respects freedom of expression

The app can hide comments in a timeline and block users, but it cannot delete another user’s account, and does not aim to, because doing so would raise freedom-of-expression issues. According to Bodyguard’s creator, Charles Cohen, managing the accounts of users who abuse their freedom of expression is entirely the responsibility of the platforms themselves.
The app is currently compatible with Twitch, Facebook, YouTube, and Twitter, but a platform must be open to developers for the app to manage its comments. This is not the case for Snapchat or TikTok, for example. In the future, a family version could be created to notify parents when hateful comments are detected in their children’s timelines.

Social networks have a hard time managing hateful content themselves

For Charles Cohen, social networks struggle to manage their own content because of the technology they use for it. While they rely on machine learning, Bodyguard takes a more artisanal approach. Its filter analyzes potentially aggressive words or groups of words, but also the context around the message, as well as the profiles of, and the relationship between, the author and the recipient of the comment. It then decides whether to delete the comment, hide it, block the author, or do nothing. Manually training the algorithm took about two years, the time needed to study and precisely tag as many configurations as possible.
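The sketch below illustrates the kind of contextual decision described above, where the same words can be banter between friends or harassment from a stranger. The scores, thresholds, and signals are assumptions made for the example, not Bodyguard's actual logic.

```python
# Invented illustration of a context-aware moderation decision: the
# author-recipient relationship softens the toxicity verdict before an
# action is chosen. All numbers here are placeholders.

def moderate(message_score: float, author_is_follower: bool,
             prior_interactions: int) -> str:
    """Return an action: 'allow', 'hide', or 'block_author'."""
    # Soften the toxicity score when author and recipient know each other:
    # banter between regulars is less likely to be an attack.
    if author_is_follower and prior_interactions > 10:
        message_score *= 0.5
    if message_score > 0.9:
        return "block_author"  # clearly hateful; block the sender
    if message_score > 0.6:
        return "hide"          # hateful enough to censor the comment
    return "allow"             # joke or mild mockery passes

print(moderate(0.8, author_is_follower=True, prior_interactions=25))  # allow
print(moderate(0.8, author_is_follower=False, prior_interactions=0))  # hide
```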
This lack of manual, detailed moderation work could prove costly for social networks, which are going through a very difficult phase amid growing demands to act against harassment and racism. To express their disagreement with the platforms’ inaction on hateful content, brands such as Coca-Cola, Disney, and Unilever have boycotted some of them by withdrawing their advertising. This has significantly weakened Twitter, with a 19% loss of revenue, prompting the company to consider a paid service.
In France, the regulation of online hate, especially where minors are concerned, is at the heart of the missions of the online hate observatory launched on 7 July 2020.
Google, Facebook, Twitter, Twitch, Snapchat, and TikTok will take part in the observatory to analyze and quantify the phenomenon of online hate, improve understanding of its drivers and dynamics, and promote information sharing and feedback among stakeholders.
