Benoît Georges @bengeorges
In China, the first weeks of the epidemic showed that simple protective masks made systems incapable of recognizing citizens. But that barrier is falling.
Imagine an app that would let you know any person’s name, activities and relationships from a single photo of their face. This would forever mark the end of anonymity, and with it the end of much of our privacy: identifying a stranger would become as simple as typing a name into Google, without the person concerned even realizing it.
Bad news: this application, worthy of an episode of “Black Mirror”, does exist. It was created by a New York start-up, Clearview AI, which trained its artificial-intelligence systems on billions of portraits found on the Internet, mostly on the public parts of social networks. According to the Wall Street Journal, its technology is already used by more than 600 clients in the United States: law-enforcement bodies (local police departments and federal agencies such as the FBI) but also private security companies.
While calls to limit or regulate the use of facial recognition are multiplying, in the United States as in Europe, the Clearview case is a reminder that this technology is spreading at high speed, with or without the assent of states. But can technology itself offer a way to escape it? In China, where facial recognition is an overt part of the national surveillance apparatus, the first weeks of the Covid-19 epidemic showed that simple protective masks made the systems incapable of recognizing citizens. But that barrier is falling: in early March, a Chinese company, Hanwang, said it had perfected its algorithms. “For mask wearers, the recognition rate is 95%, which ensures that the majority of people can be identified,” Hanwang vice-president Huang Lei told Reuters.
Identify or authenticate?
At this point, it is important to distinguish between the two main uses of facial recognition: identification (finding out a person’s name) and authentication (verifying that a person is who they claim to be). Clearview does identification; when you unlock your iPhone or pay with your smartphone by showing your face, that is authentication. In the first case, one may wish to deceive the machines. In the second, it is in the user’s interest that they cannot be deceived. For Françoise Soulié-Fogelman, artificial-intelligence researcher and scientific advisor at Hub France IA, “everyone is panicking about being recognized, but what really worries me is when the machine doesn’t recognize me, and especially when it recognizes someone who pretends to be me.”
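The distinction can be sketched in a few lines of code: authentication compares one face “embedding” against a single stored template, while identification searches it against a whole gallery. The toy vectors and threshold below are purely illustrative, not taken from any real system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def authenticate(probe, enrolled, threshold=0.8):
    """1:1 check: is this person who they claim to be?"""
    return cosine(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """1:N search: who, if anyone, is this person?"""
    best_name, best_score = None, threshold
    for name, emb in gallery.items():
        score = cosine(probe, emb)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy embeddings (a real system would produce hundreds of dimensions).
gallery = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
probe = [0.88, 0.15, 0.22]

print(authenticate(probe, gallery["alice"]))  # True: close to Alice's template
print(identify(probe, gallery))               # alice
```

The asymmetry matters: an identification system like Clearview must search millions of strangers, while an authentication system only has to confirm or reject a single claimed identity.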
Today, deceiving identification systems is still possible, even if the technology is progressing. Make-up, hairpieces or commercially sold masks can suffice (see opposite). An American artist, Adam Harvey, has even been developing cyberpunk-inspired “looks” since 2010, designed specifically to confuse computer-vision algorithms: large dyed strands of hair, painted facial features, stickers placed on the cheekbones… The machine may no longer recognize you, but everyone else will certainly notice you!
Authentication systems are not foolproof either. “As always in cybersecurity, the companies that develop the systems and those that attack them play a game of cat and mouse,” says Raphael de Cormis, vice-president for innovation and digital at Thales. The first authentication systems could be fooled by simple photographs of faces, which is no longer possible today. On the other hand, high-definition 3D-printed masks (see opposite) can still fool relatively recent systems, whether at airports or for online transactions. The same goes, in the latter case, for increasingly realistic photos and computer-generated images.
Tracking down “proof of life”
But countermeasures do exist. “The trend now is to combine facial authentication with technologies that look for ‘proof of life’,” says Raphael de Cormis. This can include spotting the micro-movements of the face, checking the texture of the skin (to distinguish it from latex) or even “detecting the micro-pulsations of the heartbeat, invisible to the naked eye, by analyzing the images.”
More simply, some authentication systems require the person to perform a specific action, such as uttering a word, turning their head or blinking. But the arrival of “deep fakes”, fake video sequences created from recordings of a real person, could undermine this countermeasure. In May 2019, Samsung researchers published an article explaining that it was possible to create animated avatars of individuals from just a few photos of their faces using GANs (“Generative Adversarial Networks”), an artificial-intelligence technology.
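The logic of such a challenge-response check can be sketched as follows. The challenge list and the “responder” functions are hypothetical placeholders; a real system would verify each response against live camera frames, not a self-reported answer.

```python
import random

# Hypothetical challenge set, standing in for real liveness prompts.
CHALLENGES = ["blink", "turn_head_left", "say_word"]

def liveness_check(responder, rounds=2, rng=random):
    """Issue random challenges: a pre-recorded video cannot predict
    which action will be requested next."""
    for _ in range(rounds):
        challenge = rng.choice(CHALLENGES)
        if responder(challenge) != challenge:
            return False
    return True

# A live user (simulated here) performs whatever action is requested.
live_user = lambda challenge: challenge

# A replayed recording can only ever show one fixed action.
replayed_video = lambda challenge: "blink"

print(liveness_check(live_user))       # True
print(liveness_check(replayed_video))  # almost always False
```

A deep fake driven in real time defeats exactly this assumption: it can generate the requested action on demand, which is why the randomness of the challenge alone is no longer considered sufficient.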
Hence the need to multiply authentication factors by combining several technologies: for example, facial recognition plus fingerprint analysis, or plus a secret code sent by mobile phone. “This leads us to make informed choices about the applications we want to entrust to artificial intelligence. We need to know the risks, and to know that fighting these risks requires constantly improving the systems in the face of threats that evolve in return. It’s an arms race,” says Françoise Soulié-Fogelman.
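One of the combinations mentioned above, a face match plus a one-time code, might be gated roughly like this. The code generator is a standard HOTP (RFC 4226, the algorithm behind many “code on your phone” schemes); the face-match result is a hypothetical placeholder for the output of a system like the ones described earlier.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def two_factor_login(face_matches: bool, submitted_code: str,
                     secret: bytes, counter: int) -> bool:
    """Both factors must pass: the face and the one-time code."""
    return face_matches and hmac.compare_digest(submitted_code,
                                                hotp(secret, counter))

secret = b"12345678901234567890"  # RFC 4226 test secret, for illustration only

print(hotp(secret, 0))                              # 755224
print(two_factor_login(True, "755224", secret, 0))  # True
print(two_factor_login(True, "000000", secret, 0))  # False
```

The point of the combination is that a 3D-printed mask and a stolen phone are two separate attacks: an adversary who mounts one without the other still fails.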
As for regulating identification systems such as Clearview, that is the responsibility of states and the courts. YouTube, Facebook and private individuals in the United States have already threatened to take legal action if the company continues to use their images without permission. ■