
Source: Capital as of 07-04-2020

Born sixty years ago and ever more capable, artificial intelligence is now an integral part of our daily lives. While some see it as a promise of emancipation, others point to the lack of ethics and morality in its programs. Between illusion and reality, an investigation at the heart of the algorithms…

You have certainly had to pass a test – called reCAPTCHA – during an online query or purchase to certify that you are not a robot. It is a way to protect websites from spam and other abuse. What you may not know is that by ticking those images of bridges, trucks or traffic lights, you are helping to improve algorithms, in particular that of Google Street View, Google being the owner of reCAPTCHA. Similarly, when you tag a friend on Facebook or like a video on YouTube, and more generally with each of your clicks, you enrich an artificial intelligence.

The same goes for any of your connected objects – soon six per person on average, according to forecasts from the firm Gartner – smartphone, watch, speaker, TV, even car or refrigerator. We are all trainers of an artificial intelligence that permeates our lives, and yet we struggle to understand this technology, born a little over sixty years ago. How is AI made, and what challenges does it pose? A dive behind the scenes.

If the researchers' horizon is to recreate the mechanisms of the human brain in machines, we are still far from it, because machines are incapable of reasoning. "They just bring out what they've been taught," says Luc Julia, vice-president of innovation at Samsung, also known as one of the creators of Siri, Apple's voice assistant. The author of Artificial Intelligence Does Not Exist (First Editions, 2019) prefers to talk about "augmented intelligence." "AI is simply the latest wave of digital technology," confirms Bruno Sportisse, president of INRIA, a research institute for digital science and technology that brings together 200 research teams, half of which work in this discipline.

Long confined to universities and the military before being taken up by the web giants – Google, Facebook, Amazon, Microsoft and Apple in the West; Baidu, Alibaba and Tencent in China – AI now permeates every sector of the economy, to the point of being dubbed "the electricity of tomorrow". Businesses see in it a way to delegate difficult, repetitive tasks to the machine. They discover its virtues for optimizing customer relations by creating ever more natural interfaces. They also see it as a valuable decision-making tool, in banking, insurance, healthcare… Not to mention the field of "smart objects", from the autonomous car to the connected refrigerator, which are invading our daily lives.

Why such success? Because over the last ten years each of the major components of AI has seen tremendous progress. Algorithms – the "kitchen recipe" based on statistics and probabilities that allows the machine to imitate human behavior – have become more and more sophisticated, while the available data (the ingredients) and the computing power needed to grind it all have grown exponentially. "AI is software bricks that are integrated into other software to make it more efficient," says Alban Leveau-Vallier, a professor at Sciences-Po who is preparing a philosophy thesis on artificial intelligence and intuition at the University of Paris VIII.

Take the customer-relations business: it mainly relies on language-processing tools to build its chatbots. Finance firms, which shuffle a lot of numerical data, use machine learning to develop decision models. Industrial manufacturers, on the other hand, mainly use robotics and computer vision to design their machines. AI is everywhere, but it disappears as it blends into the digital environment.

At the root of this meteoric progress is deep learning. Built on layers of artificial neurons, this machine learning technology has revolutionized AI by allowing software to "recognize" images, automatically translate documents, and hear and decipher human speech, reconstructing our perceptions in its own way. It is these algorithms, educated by deep learning, that are able, for example, to drive autonomous cars today.
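To give a concrete, if simplified, sense of these "neural layers", here is a minimal sketch in Python (assuming the PyTorch library; the layer sizes, the random placeholder images and the two-class "cat / not cat" setup are purely illustrative, not the architecture of any system mentioned in this article):

```python
# Minimal sketch of a layered ("deep") image classifier, assuming PyTorch.
# The data here is random noise standing in for labeled photos; in practice
# the training set would contain millions of annotated images.
import torch
import torch.nn as nn

model = nn.Sequential(                            # stacked "neural layers"
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # detect simple patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine them into shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                   # final score: "cat" vs "not cat"
)

images = torch.randn(8, 3, 64, 64)                # placeholder batch of 64x64 photos
labels = torch.randint(0, 2, (8,))                # placeholder labels (0 or 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                            # each pass nudges the weights
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                               # learn from the errors made
    optimizer.step()
```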

But to be operational, these little jewels, especially those that rely on deep learning, need thousands of hours of training. Because, unlike humans, the machine learns only by multiplying observations on a large scale. "To recognize a cat with 95% accuracy, it needs about 100 million images of that animal, where two are enough for a child," says Luc Julia. The more abundant and relevant the raw material, the more the artificial intelligence improves. "While algorithms are easily accessible in open source, data accounts for 80% of the work required to develop an AI," says Alban Leveau-Vallier. If AI is the electricity of tomorrow, data is its oil.

There are several sources: online data – the Google Search database, for example, contains several hundred million images; real-world data – in the case of the autonomous car, it is collected by sensors placed on all types of roads; or customer data, which gives an advantage to companies with large files. "This is the case for insurance companies, banks, hospitals, transport or distribution specialists," says consultant Olivier Ezratty in his 2019 opus on the uses of AI. This learning data still has to be extracted, labeled (i.e. qualified) and validated.
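As an illustration of what "labeling and validating" can mean in practice, here is a toy Python sketch (the file names and labels are invented): a common sanity check is to have two annotators label the same items and flag every disagreement for review.

```python
# Toy sketch of label validation: compare two annotators' labels on the same
# images and flag disagreements for manual review. All names are invented.
annotator_a = {"img_001.jpg": "bridge", "img_002.jpg": "truck", "img_003.jpg": "traffic_light"}
annotator_b = {"img_001.jpg": "bridge", "img_002.jpg": "car",   "img_003.jpg": "traffic_light"}

disagreements = [
    (name, annotator_a[name], annotator_b[name])
    for name in annotator_a
    if annotator_a[name] != annotator_b[name]
]

agreement = 1 - len(disagreements) / len(annotator_a)
print(f"Inter-annotator agreement: {agreement:.0%}")
for name, a, b in disagreements:
    print(f"Review needed: {name} labeled '{a}' vs '{b}'")
```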

Who does this painstaking work? We do! "Training an algorithm still requires a strong human contribution," admits Aurélie Jean, a doctor of science and entrepreneur who recently published On the Other Side of the Machine (Observatory Editions). That is an understatement. While in many companies the work of collecting and annotating content is carried out in-house by business experts, the digital platforms mobilize a great many small external hands. Sometimes, as we have seen, it is Internet users who do the job for free through their online activity; sometimes micro-entrepreneurs, paid by the task, for example to draw outlines around objects in images or to compare two videos. They are recruited remotely through specialized companies such as Amazon Mechanical Turk, Clickworker or Figure Eight. Antonio Casilli, a French sociologist who specializes in social networks, denounces in the first case the hidden work of the "produsers" and in the second the underpaid work of the "digital proletariat".

These "click workers", as the author of Waiting for Robots (Seuil, 2019) calls them, are estimated to number between 40 million and several hundred million, mainly in Asia and Africa. The holy grail? Letting the machine learn on its own, in a less supervised and less data-hungry way. Researchers are working on the issue. They are testing several avenues, among them reinforcement learning. Thus, in the game of Go, rather than training on a history of games, the algorithm tests scenarios by playing against itself. Promising.
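Here is a minimal Python sketch of the self-play idea, using a simple tabular method on a toy counting game rather than Go (the game, rewards and settings are illustrative only; real Go programs combine self-play with deep neural networks):

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning on the game "reach 21": players take
# turns adding 1, 2 or 3 to a running total; whoever reaches 21 wins.
# Each finished game nudges the value of every move played toward the final
# outcome (a simple Monte-Carlo-style update). Purely illustrative.
Q = defaultdict(float)                 # learned value of each (total, move) pair
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def best_move(total):
    moves = [m for m in (1, 2, 3) if total + m <= 21]
    return max(moves, key=lambda m: Q[(total, m)])

for episode in range(20000):           # the algorithm plays against itself
    total, history = 0, []
    while total < 21:
        moves = [m for m in (1, 2, 3) if total + m <= 21]
        move = random.choice(moves) if random.random() < epsilon else best_move(total)
        history.append((total, move))
        total += move
    reward = 1.0                        # the player who reached 21 wins...
    for state, move in reversed(history):
        Q[(state, move)] += alpha * (reward - Q[(state, move)])
        reward = -reward * gamma        # ...and the other player's moves lose value

print("Best move when the count is 18:", best_move(18))  # should learn 3, which reaches 21
```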

In the meantime, our lives are increasingly governed by artificial intelligence, whether it is getting a loan, landing a job or diagnosing our health. But how can we be sure that an algorithm is impartial? Insofar as it is designed and educated by a human, it can be expected to be fallible. Examples of cognitive biases (created by the programmer himself) or statistical biases (generated by the data used) regularly make headlines. Amazon had to correct its recruitment algorithm, trained on the hundreds of thousands of résumés received over ten years: it selected mostly men, who had made up the overwhelming majority of hires in the past.
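The Amazon example can be made concrete with a very simple fairness check (all figures below are invented): comparing selection rates between groups is one way to spot the bias a model inherits from historical data.

```python
# Toy illustration of how historical bias shows up in a model's output.
# Figures are invented; the check computes the "disparate impact" ratio,
# i.e. the selection rate of one group relative to another.
predictions = [
    {"group": "men",   "selected": 480, "total": 1000},
    {"group": "women", "selected": 120, "total": 1000},
]

rates = {p["group"]: p["selected"] / p["total"] for p in predictions}
ratio = rates["women"] / rates["men"]

print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")    # values far below 1 signal bias
if ratio < 0.8:                                  # the common "four-fifths" rule of thumb
    print("Warning: the model reproduces the bias present in past hiring data.")
```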

In the United States, recidivism risk assessment software used in the prison population was deemed unconstitutional because it penalized black people. The remedy against these sexist or racist algorithms? "Integrating ethics from the moment they are designed," reply Telecom Paris professor-researchers David Bounie and Winston Maxwell, who wrote a report on the issue of bias in March 2019. "Ethics will inevitably be local, as cultures differ."

But there is still one safeguard on which almost all countries agree: "explainability". This neologism refers to the art of breaking down the steps by which an algorithm is trained, so as to better identify biases and correct them or, failing that, explain them. It is not that simple. Because, based on probabilities and statistics, an algorithm works a bit like a black box: data goes in and a result comes out, but we do not know exactly what happens in between… A bit like our brain: we know the processes and modes of operation, yet it remains almost impossible to predict an individual decision with certainty. In the United States, this virtuous approach to "explaining" algorithms is self-regulated. "For the past two years, all the big companies, the GAFA among them, have set up an internal ethics committee," observe the two researchers.
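As a simplified sketch of one "explainability" technique, the Python example below (using synthetic data and the scikit-learn library; everything about it is illustrative) measures how much a model's accuracy drops when each input variable is scrambled – a rough way of seeing which variables drive the decision.

```python
# Minimal sketch of permutation importance: shuffle one input column at a time
# and measure how much the model's accuracy drops. The bigger the drop, the
# more the "black box" relies on that variable. Synthetic data, illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = model.score(X, y)                     # accuracy on intact data

rng = np.random.default_rng(0)
for feature in range(X.shape[1]):
    X_shuffled = X.copy()
    rng.shuffle(X_shuffled[:, feature])          # destroy this feature's information
    drop = baseline - model.score(X_shuffled, y)
    print(f"Feature {feature}: accuracy drop {drop:.3f}")
```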

In Europe, and in particular in France, the law regulates artificial intelligence aimed at the general public, especially in the fields of education and health. "The result of an algorithm must be able to be understood and challenged," insists David Bounie. In the private sector, the concern for transparency should lead "within two years", predicts Alban Leveau-Vallier, to a certification of algorithms granted by an independent body, on the model of ISO standards. What about privacy? Europe is at the forefront with its General Data Protection Regulation (GDPR), which came into force in 2018 and requires digital platforms to inform consumers about the collection of their personal data and its use.

This text has just inspired California, whose Consumer Privacy Act has been in force since the beginning of the year. The aim is not to restrict companies but "to avoid rejection of this technology", explains Bruno Sportisse, who promotes a so-called "trustworthy" AI. Bathed in algorithms since childhood, the younger generations will be all the more receptive to it!

What threats to jobs?

The studies trying to assess the impact of AI on employment are too numerous to count; the threat concerns not so much manual workers as office employees and even managers. And the gaps between these forecasts are vast: from the most pessimistic – that of two University of Oxford researchers, Frey and Osborne, who in 2013 estimated at 47% the share of jobs technically automatable in the United States by 2030 – to the most optimistic, that of the Forrester Institute in 2016, which put net job losses at 6%. In between, an OECD study recommended counting the tasks that could be automated rather than the jobs. Repetitive chores would be delegated to the machine as a priority, so as to redirect employees towards more rewarding tasks.

In such a case, the challenge is to retrain these people, supporting their upskilling with training. The darkest scenarios fear the emergence of "a useless class," as the historian Yuval Harari calls it, and advocate the allocation of a universal income to ease tensions. By contrast, experts such as the Taiwanese Kai-Fu Lee, who headed Google in China before founding his own investment fund, are betting on the creation of new jobs, especially in the service sector.

AI's ecological challenge

It is difficult to assess the carbon footprint of AI, drowned as it is in the overall energy consumption of digital uses. Some experts have tackled the subject from a narrow angle. In her Anatomy of an AI System, Kate Crawford, a Microsoft researcher and founder of the AI Now Institute in New York, dissects the entire manufacturing process of Amazon's Echo connected speaker, and concludes that this small object is an ecological aberration. In the same vein, researchers at the University of Massachusetts calculated that training a deep learning model for four to seven days, to teach a personal assistant to recognize your voice and interpret a voice command, consumed the equivalent of the lifetime emissions of five cars!
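As a rough illustration of how such estimates are built, here is a back-of-envelope Python calculation; every figure in it is an assumption chosen for illustration, not a value from the University of Massachusetts study.

```python
# Back-of-envelope sketch of the energy cost of a multi-day training run.
# Every figure below is an assumption for illustration, not a measured value.
gpus = 8                      # number of accelerators used in parallel
power_per_gpu_kw = 0.3        # assumed draw per GPU, in kilowatts
hours = 7 * 24                # a seven-day training run
pue = 1.5                     # data-center overhead (cooling, networking...)
co2_per_kwh_kg = 0.4          # assumed grid carbon intensity, kg CO2 per kWh

energy_kwh = gpus * power_per_gpu_kw * hours * pue
co2_kg = energy_kwh * co2_per_kwh_kg

print(f"Energy: {energy_kwh:.0f} kWh, CO2: {co2_kg:.0f} kg")
# -> roughly 605 kWh and 242 kg of CO2 under these assumptions
```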

We also know that AI feeds on big data. The proliferation of data centers – several thousand worldwide and more than 130 in France – is expected to increase their weight in global energy consumption, currently estimated at 4%. The remedy? Run them on renewable energy. And above all, develop low-energy AI learning. "There is an active area of research on this subject," says Alban Leveau-Vallier.
