
Source: Dalloz actualité as of 14-04-2020


Crossed perspectives of a lawyer and a mathematician.

This dossier, proposed by Laurence Pécaut-Rivolier, advisor to the Court of Cassation, and Stéphane Robin, research director at INRA, is divided into three episodes.

by Laurence Pécaut-Rivolier and Stéphane Robin, April 14, 2020


Artificial intelligence (AI) is the topic of the moment. An inescapable subject, and one in direct contact with reality, including in the current health situation. To convince yourself of this, it is enough to consult the Datajust decree1, published just a few days ago. It is therefore obviously no longer a question today of whether there is an opportunity to introduce these techniques into the judicial system. This is already the case, irreversibly. AI is already everywhere. But it is very important, at a time when decisive choices are about to be made on how to introduce AI into justice, to take the time to ask ourselves some questions that should determine these choices.

These issues are all the more fundamental because justice, as has escaped no one, lags behind in AI. It was refractory for a long time, and for good reasons as well. It long refused, first of all, because it sees AI as the antithesis of the philosophy that traditionally guides the judge: AI aims at standardization and mass processing, while justice is supposed to give each case an individualized response that takes its particularities into account. But the late arrival of AI is also linked to other, less symbolic factors: the question of means is obviously not absent, but neither, probably, is a sometimes irrational fear that the machine will sound the end of the sacrosanct independence of the judge.

This delay in the penetration of AI into judicial functioning paradoxically puts justice in a situation of much greater vulnerability than other sectors in the face of technological advances. It has not had time to develop an overall reflection; it has not had the opportunity to get used to the new tools and take hold of them gradually. That is why it is fundamental today to take this time. Take the time to ask what we want, where this can lead us, and what precautions should be taken.

This study, co-written by a magistrate and a mathematician, aims to make a modest contribution, but one that allows this reflection to begin2. Because one thing is certain: to bring justice and AI together, we must first bring together the men and women who operate them.

In this reflection, we will leave aside the question of open data, that is, the question of access to court decisions, which focuses on the problem of anonymization or pseudonymization. It raises a question that is above all technical rather than, at this stage, ethical. We will focus instead on the objectives and conditions of the use of AI in court decisions.

1 – The use of AI in justice, why?

11 – Goals

111 – Actors’ objectives: convergences and divergences

The introduction of AI in justice is specific in that it is expected by many actors, who, however, hope in reality for very different advances. There are many actors in the justice system. There are those who officiate internally: judges, clerks. There are those who work with the judiciary: lawyers, bailiffs and judicial assistants in general. There are those who use or depend on justice, referred to in judicial jargon as “litigants”, with obviously multifaceted profiles. Finally, there is the Ministry of Justice, both designer and user of the tools of justice.

However, if all the actors call – more or less loudly – for the development of AI within the justice system, they are actually expecting various results, which do not always converge. The expectations raised by the introduction of AI in justice can be summarized in four main areas.

1111 – Legal security

First, and most often heard in recent years, there is the hope of harmonizing the judicial decision, allowing better uniformity and therefore better legal certainty.

Of course, legal certainty is part of the law.

Legal security is almost a tautology. As one eminent author noted: “The formula indeed sounds like a kind of redundancy, as it seems obvious that a right that would not ensure the security of the relationships it governs would cease to be one. Can we imagine a right that would organize insecurity, or even make it possible?”3

However, judicial decisions are, by their very nature, subject to variation.

They vary first of all for conscious reasons, which are due to the need, for the judge, to adapt to each situation.

They may also vary for unconscious reasons, related to the person of the judge. These are factors related to his upbringing, his beliefs, his personality, which can have an even unintended impact on the decision. Numerous studies show that very varied and sometimes surprising data – the judge’s breakfast4, his fatigue5, media influence6, his egocentrism7 or his various prejudices8 – can influence the decision taken.

These two grounds for variation in the court decision – which, for the same situation, can give rise to two different decisions – are often conflated by those who refer to judicial hazard.

Faced with what is often indiscriminately called hazard, multiple safeguards exist. Collegiality, first, an essential element of internal control; jurisprudence, second, which unifies the interpretation of texts; scales, finally, which smooth the answers in certain compensation situations. But these answers remain very crude, since the framework thus put in place remains general. AI could help refine this unifying function and thus reassure those who feel it is essential to have a justice that guarantees the identity of the solution, in a conception of reliability based on equality and certainty: any situation A will result in response B.

1112 – Decision support

Almost contrary to this first objective, the use of AI can tend to help provide a response as adapted as possible to each case, considered necessarily unique. It is no longer a question of seeking the same answer for all, but on the contrary an answer that takes into account all the characteristics of the situation, it being unimportant – or on the contrary hoped – that the result be different in each case, but for objective and justifiable reasons.

We know that, since the French Revolution, it has been forbidden in France for the judge to issue settlement judgments. The civil code establishes as a principle, in Article 5, that judges are forbidden “to pronounce by way of a general and regulatory provision on the cases before them”, and in Article 1355 that “the authority of the thing judged takes place only with respect to what has been the subject of the judgment. The thing claimed must be the same; the application must be based on the same cause; the claim must be between the same parties, and formed by them and against them in the same quality.” In other words, the judge can never rely on another decision to rule in the case before him; he must decide on the basis of the specific elements of each case, and the judgments he renders have only a relative authority, limited to the case on which the judge rules9.

This obligation to adapt to the case in question explains why the first requirement expected of a court decision relates to its motivation: “The quality of the decision depends mainly on the quality of the motivation.”10 Motivation ensures that the facts of the case have been examined and taken into account, and specifically related to the rule of law applicable to them.

Making a decision appropriate to the case does not, of course, mean making a decision without a legal framework. Any decision must be the implementation of this legal framework. The difficulty of the judge’s work, the famous judicial syllogism, is therefore, in each case, to identify the situation, to determine the applicable legal framework, and to draw the consequences according to the specifics of each case.

AI can help in this work. By providing comprehensive databases and effective search systems, it enables the judge to benefit from operational tools to accomplish his mission in all its diversity.

1113 – Improving the quantitative processing of files

The slowness of the judicial response is probably one of the strongest criticisms11 of the judicial institution. The criticism is not new. It is found, ritually, in the literature of all eras12, and it is expressed in all democratic countries. The fact is that justice is inherently slow. It is so because its credibility is based on compliance with procedures – acts, deadlines, respect for the adversarial principle – that are cumbersome and time-consuming. It is also probably a deliberate attempt to discourage habitual and compulsive litigants, and to remain the solution of last resort for disputes that society cannot resolve other than in court.

While not new, this criticism remains a major concern, unbearable for those who have no choice but to depend on the judicial institution, especially for an act of their daily life – divorce, neighbourhood dispute – or whose life is suspended pending a court decision, civil or criminal13. Among the major reasons for the slowness are the means of justice, including the number of cases per magistrate and clerk14. In this context, the use of AI offers a hope of speeding up judicial work. It may then be a matter of streamlining the processing of procedures upstream, or of allowing an accelerated response to cases considered “mass” litigation, such as unpaid consumer credit or traffic violations, while leaving open the possibility of taking out of the batch the cases considered to warrant individual treatment. It is a better quantitative response that is being pursued here, especially by those in charge of court management.

1114 – Predicting the judicial response

Very close to the objective of securing the judicial response, the desire to achieve more predictable court decisions differs from it in the idea that it is not necessarily a question of ensuring a single response, but of being able to predict – in view of the different circumstances, which may include certain data considered random so far – what response should actually be given to this particular situation. According to Professor Bruno Dondero’s definition15, the aim is to “attempt to predict with as little uncertainty as possible what the response of jurisdiction X will be when confronted with case Y”. Predictive analysis can thus take an interest in the analysis of criminal risks, the amounts of money allocated by the courts, the chances of winning a trial, and so on.

Predictability would make it possible to place the litigant in the real position of an actor, able to make informed choices. It would also be a factor favouring conciliation, since the parties could better measure the content of their commitments and concessions16.

As we can see, the expectations that the world of justice places in AI can vary greatly from one actor to another, and are not necessarily convergent. A first challenge is therefore to decide which expectations should be given priority, in order to know which tools need to be developed. Because the tools can only be based on goals clearly set upstream.

12 – The concept of AI in justice

121 – What do we call AI and algorithms?

The word algorithm, still abstruse to many only a few years ago, has invaded common language in a remarkably short time. As with any technical word, this popularization has changed its meaning and definition. Most dictionaries agree that an algorithm is defined as a finite and unambiguous sequence of operations or instructions for solving a class of problems. A famous example is the division algorithm, taught in primary school, which makes it possible to compute the quotient of two numbers to a given precision. Algorithms have been designed to perform tasks of a wide variety of natures. Thus, the extraction, from a large database, of the examples that meet a set of criteria itself calls for an algorithm. In this case, the main purpose of the algorithmic work is to ensure not only that the answer provided is accurate but also that it is returned in a short time, even when the database is very large.
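The schoolbook division algorithm mentioned above can be sketched in a few lines: a finite, unambiguous sequence of instructions that yields the quotient of two numbers to a requested precision.

```python
def divide(dividend, divisor, decimals):
    """Long division: a finite, unambiguous sequence of instructions
    computing dividend / divisor to the requested number of decimals."""
    if divisor == 0:
        raise ValueError("division by zero")
    quotient, remainder = divmod(dividend, divisor)
    digits = []
    for _ in range(decimals):
        remainder *= 10                      # "bring down" a zero
        digit, remainder = divmod(remainder, divisor)
        digits.append(str(digit))
    return f"{quotient}." + "".join(digits)

print(divide(22, 7, 5))  # → 3.14285
```

Each step is fully determined in advance: this is what distinguishes such a classical algorithm from the “learning” algorithms discussed below, whose formula is itself produced from data.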

In the case of artificial intelligence, the word algorithm refers indifferently to the process that provides a response to a query using a calculation formula and to the so-called “learning” process that established this formula in the first place. It is this second process that justifies the name “artificial intelligence”, in the sense that it is based on the analysis of a large number of examples from which the machine is supposed to “automatically” extract decision-making rules. We describe here some examples of tasks that could be entrusted to AI algorithms as part of legal work. These examples are given by way of introduction and illustration: it is certainly not a question of deciding whether these tasks should actually be entrusted to an algorithm rather than to a justice actor.

122 – The different interventions of AI in court decisions

1221 – Databases

Before presenting some examples of possible uses of AI, it should be remembered that any automated processing first requires that the available data be entered, manually or otherwise, into a computer database. This initial organization of the data in a workable form is not neutral, insofar as it usually requires encoding the information, for example by means of keywords, in order to then organize, compile and analyze it.

1222 – Decision support

A first task that it might be tempting to entrust to AI would be to assist a judge in the investigation of a case. Thus, the investigative work of a judge in charge of a civil case could be informed by more general knowledge of the cases and judgments rendered by the French courts in the matter. The objective of decision support is to inform the user, through automatic tools, without making the decision for him. In our example, the existence of decision databases and interactive computer tools can enable a justice actor to extract a set of cases similar to the one he is interested in.

This extraction itself can be left entirely to the initiative of the user, who then has the responsibility to determine the criteria that best characterize his case (applied texts, circumstances of the case, keywords, etc.) in order to extract from the database the cases similar to his according to these criteria. This extraction work could also be entrusted to an AI algorithm whose objective would be to define a typology of the cases recorded in the database, without being given any filter criterion or prior classification, and to provide the judge, in an “automatic” way, with a set of cases close to the one before him, without his having had to specify the criteria of resemblance himself.
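The first, user-driven variant amounts to a simple filter over an indexed database. In the minimal sketch below, the case texts, field names and identifiers are all invented for illustration and come from no real court database.

```python
# Hypothetical case base: contents invented purely for illustration.
cases = [
    {"id": 1, "text": "dismissal without real and serious cause, ten years of seniority"},
    {"id": 2, "text": "consumer credit, unpaid instalments"},
    {"id": 3, "text": "dismissal for economic reasons, three years of seniority"},
]

def extract_similar(cases, keywords):
    """Keep only the cases whose text mentions every user-chosen keyword:
    the user, not the machine, decides the criteria of resemblance."""
    return [c for c in cases if all(k in c["text"] for k in keywords)]

print([c["id"] for c in extract_similar(cases, ["dismissal", "seniority"])])  # → [1, 3]
```

In this variant the criteria remain fully explicit; the second, “automatic” variant described above would instead let the algorithm define the resemblance itself.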

1223 – Decision Prediction

A second, arguably more ambitious, task would be to propose to a magistrate a decision for the case he must deal with. Thus, in employment law, in the absence of a compulsory scale, one might want to use AI to predict the compensation corresponding to a particular dismissal, based on a database consisting of the judgments rendered on dismissal cases by all the conseils de prud’hommes (and/or courts of appeal) of France.

Again, the magistrate could retain control over this operation by defining the criteria he believes relevant to extract situations or files similar to the one he must deal with, and thus obtain at least a range of awards consistent with those already granted. A second approach would be to use a learning algorithm to establish a “prediction formula” that would associate with the description of each case a compensation as close as possible to that actually awarded. It would then be sufficient to submit the case under study to this formula to obtain a prediction of the “right” award. In this case, the prediction formula would be opaque to the user but supposedly respectful of all the decisions used to determine it.
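The first of these approaches, in which the magistrate keeps control, essentially amounts to summarizing the awards of cases already judged similar. A minimal sketch, with invented amounts:

```python
from statistics import median

# Awards (in euros) from past cases the magistrate deems similar: invented data.
awards = [8000, 9500, 12000, 10000, 11000]

def award_range(awards):
    """Return a range (minimum, median, maximum) of past awards rather
    than a single 'right' figure, leaving the decision to the judge."""
    return min(awards), median(awards), max(awards)

print(award_range(awards))  # → (8000, 10000, 12000)
```

The output is deliberately a range and not a prescription: it is the learned “prediction formula” of the second approach, not this summary, that produces a single opaque figure.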

1224 – Decision analysis

The use of AI can also be considered not in the process leading up to each court decision, but in order to analyse all of these decisions in hindsight. Post-decision analysis is indispensable work if we are to be able to verify the coherence of decisions, the possible changes in judicial conceptions they reveal and, through more advanced research, the biases they conceal. It would, for example, allow an analysis of the judgments rendered in consumer credit litigation by the courts of different departments or regions.

Such analyses may be based on criteria chosen directly by the analyst, who will decide to cross the amount of credit taken out with the region in order to study possible geographical differences in treatment, or the socio-economic characteristics of those who committed the same offence with the sentences handed down. These analyses rest entirely on the expertise and judgment of the operator, who retains the initiative of making one comparison rather than another. The criteria of the analysis then remain completely explicit. But such an analysis can also be conducted “automatically”, by entrusting an algorithm with the task of forming a typology of decisions and/or courts. As in the case of prediction, the rule for distributing the data into different categories will then be opaque, in the sense that it was not designed by the user.

13 – The choice of methods

The previous examples show some legal tasks that could be assigned to algorithms so that they are performed more “automatically”. The aim here is to try to describe how such algorithms can be conceived and, above all, what arbitrary or subjective choices necessarily underlie them. The apparently automatic17 nature of the operation of these systems is in fact based on a series of perfectly human decisions, which it is therefore important to master.

131 – Empirical bias

Determining a list of divorce cases similar to a given case, or evaluating a severance payment, are tasks historically entrusted to humans that would be delegated, at least in part, to algorithms. The algorithms will therefore aim to accomplish the task in such a way as to produce a result comparable to that produced by a human.

There are typically two ways to approach such a problem. The first, which can be called mechanistic, is to encode (or model) the intellectual process of the human who would have to accomplish this task. The second, which can be called empirical, aims to produce results similar to those provided by a human without seeking to reconstruct his approach.

The most prominent AI algorithms of recent years resolutely adopt the empirical bias: it is not a question of copying the reasoning (for example, legal reasoning) of a human but only of mimicking his decisions. This bias implies that these algorithms are based not on modeling the intellectual process of a human, but on a learning process by which the machine “determines” its own decision-making rules from numerous examples, which stand in for experience and which it will try to mimic as best it can.

This bias is resolutely pragmatic in the sense that, from this point of view, only the result counts. If the aim is to produce results similar to those that a human would produce, rather than trying to reconstruct human reasoning, we might as well directly seek a mathematical formula that minimizes the differences between the results originally produced by humans (and recorded in a database) and those provided by the machine. The learning problem is thus transformed into an optimization problem: given a criterion that measures the fidelity of predictions to observations, the learning algorithm aims directly to optimize this criterion. The contribution of optimization (which stems from both mathematics and computer science, but also borrows from physics) to the success of AI is thus decisive.
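This transformation of learning into optimization can be made concrete on a toy example: fitting a linear prediction formula y ≈ a·x + b by gradient descent on the squared-error criterion. The data below are invented; real learning problems differ mainly in scale, not in principle.

```python
# Invented observations, roughly following y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

a, b = 0.0, 0.0   # coefficients of the prediction formula y = a*x + b
lr = 0.01         # step size of the descent

for _ in range(5000):
    # Gradient of the criterion sum((a*x + b - y)^2) with respect to a and b.
    grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys))
    grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys))
    a -= lr * grad_a  # move the coefficients so as to reduce the criterion
    b -= lr * grad_b

print(round(a, 2), round(b, 2))  # close to the best-fitting slope and intercept
```

The algorithm never “reasons” about why y grows with x; it only adjusts the coefficients until the chosen criterion is as small as possible, which is exactly the empirical bias described above.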

It can be observed at this stage that, from this point of view, mechanistic approaches are beaten in advance by empirical methods, since the latter precisely aim to optimize the very criterion that should allow their respective performances to be compared. It thus appears that AI naturally tends to impose its own criteria (in this case, maximum proximity to decisions already made) to the detriment of others, less mathematical, such as the need to justify a decision, as we will discuss.

132 – Typical tasks

Among the tasks that can be entrusted to AI algorithms, a distinction is often made between “unsupervised” problems and “supervised” problems, the former typically having a descriptive objective while the latter generally have a predictive focus.

1321 – Unsupervised learning

The term “unsupervised” refers to problems for which, during the learning phase, the algorithm has no exogenous information with which to assess the quality of the results it produces. This is the case for the ex nihilo definition of a typology of cases from a large database of consumer credit litigation, or for the search for divorce cases similar to a given case.

In this case, the typology finally produced by the algorithm obviously cannot be perfectly intrinsic, in the sense that it would be based on the mere contemplation of the data, without any particular analysis grid. It necessarily rests on a mathematical representation of the data (in this case, texts) and on a criterion (also mathematical) measuring the proximity or similarity between two texts. The astonishment produced by the performance of these algorithms comes in part from the fact that it is hard to imagine that the comparison of two texts can be mathematized. This is however possible and, in truth, quite simple: one can, for example, represent a text as a simple bag of words, forgetting all syntactic structure, and measure the proximity between two texts by simply comparing the frequencies of use of the different words. One then uses an algorithm that aims to establish an “optimal” categorization of the texts, that is, to determine groups of texts similar to each other, the groups being, on the contrary, as separate as possible from one another. It has thus been possible to classify press articles on the sole basis of their textual content and to observe that the typology thus obtained corresponded to the subjects to which they related (sport, economy, culture, international politics, etc.), even though the newspaper section in which they had been published had not been taken into account to determine this typology, nor even the list of those sections18. The success of the experiment was due, on the one hand, to the computerized availability of a very large number of press articles and, on the other hand, to the fact that the terms used in the different thematic fields are sufficiently different for the analysis of word frequencies alone to distinguish them.

To accomplish this “automatic” task, it was necessary to choose, at a minimum, a representation of the information (a text = a bag of words), a measure of similarity (the comparison of usage frequencies) and an optimization criterion measuring how distinct the groups of texts are from one another (which we do not detail here).
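The first two of these choices can be sketched in a few lines: a bag-of-words representation and a classic cosine measure of word-frequency similarity. The sentences are invented, and a real system would add the clustering criterion on top of this measure.

```python
from collections import Counter
from math import sqrt

def bag_of_words(text):
    """Represent a text as word frequencies, forgetting all syntax."""
    return Counter(text.lower().split())

def similarity(text1, text2):
    """Cosine similarity between two bags of words (1 = identical word usage)."""
    b1, b2 = bag_of_words(text1), bag_of_words(text2)
    dot = sum(b1[w] * b2[w] for w in b1)
    norm1 = sqrt(sum(v * v for v in b1.values()))
    norm2 = sqrt(sum(v * v for v in b2.values()))
    return dot / (norm1 * norm2)

sport1 = "the match ended in a draw"
sport2 = "the match was a great match"
politics = "parliament passed the budget"
print(similarity(sport1, sport2) > similarity(sport1, politics))  # → True
```

Two sports sentences score higher than a sports and a politics sentence simply because they share more word frequencies, which is all the “mathematization” of text comparison requires here.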

1322 – Supervised learning

Supervised problems are those for which a validation of the predictions is available during the learning phase. This is the case for the assessment of severance pay, for which a database would include both the description of the cases and the amount finally allocated at the end of the judicial proceedings. It is then a question of determining a predictive function that will associate a decision (in this case, an amount) with a case, which function is supposed to mimic, as best it can, the decisions recorded in the database.

Again, the learning process is based on a representation of the cases (for example by keywords or as a bag of words), but also on the choice of a form for the predictive function and on a criterion measuring the proximity between the prediction provided and the true answer. There is a plethora19 of predictive functions with names obscure (support vector machines), evocative (neural networks) or poetic (random forests). The quality of prediction relies heavily on the flexibility of this function, i.e. on its ability to account for the link between the description of the case and the decision without prejudging its form. The counterpart of this flexibility (which contributes greatly to the performance of these methods) is the very opaque nature of the resulting prediction formulas, of which it is generally futile to try to gain an intuitive understanding. Thus, a deep neural network can involve tens of thousands of coefficients whose combination provides the prediction (determining these coefficients is precisely the objective of the learning algorithm).
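The idea of a learned predictive function can be illustrated with a deliberately transparent stand-in: a 2-nearest-neighbour rule over invented case features (seniority in years, monthly salary). The flexible methods named above work on the same principle, associating a case description with an amount, but through far more opaque formulas.

```python
# Invented training base: (seniority in years, monthly salary) -> award in euros.
recorded = [
    ((2, 1800), 4000),
    ((5, 2000), 9000),
    ((10, 2500), 20000),
    ((8, 2200), 15000),
]

def predict(case, k=2):
    """Predict an award as the average award of the k most similar
    recorded cases (Euclidean distance on the feature vector)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(recorded, key=lambda item: distance(item[0], case))[:k]
    return sum(award for _, award in nearest) / k

print(predict((6, 2100)))  # → 12000.0
```

Even this tiny formula already answers without ever giving a reason: it is “respectful” of the recorded decisions but offers nothing resembling motivation.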

Supervised learning can also provide a prediction not in the form of a single value (for example, an amount) but in the form of a distribution of values (an interval, or even a probability distribution). We do not enter here into the specifics of the algorithms providing this type of response, insofar as they are, more often than not, also uninterpretable.

1323 – Hybrid situations

There are obviously many hybrid tasks, between supervised and unsupervised. A judge specializing in divorce cases could thus use an unsupervised algorithm to automatically extract from a database a list of cases similar to the one he must deal with and, moreover, interact with the algorithm by validating, on a case-by-case basis, the elements of that list. A supervised learning algorithm can then come into play to revise the criteria that made it possible to determine the initial typology, based on the validations and rejections made by the user. This interaction with the user makes the learning “supervised” in the sense that it benefits from additional information. The learning algorithm then aims to revise the classification formula in order to maximize the rate of validation by the judge, which constitutes a new criterion of optimality.

The procedure that results from such learning remains opaque, insofar as the formula for drawing up the list of relevant cases is not easily interpretable, but it gradually tends to mimic the selection criteria of the judge who feeds it.
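The interaction just described can be sketched as a simple weight-revision loop: keyword weights (all invented here) are raised when the judge validates a proposed case and lowered when he rejects it, so that the ranking gradually mimics his choices.

```python
# Invented keyword weights used to rank candidate cases.
weights = {"divorce": 1.0, "children": 1.0, "property": 1.0}

def score(case_keywords):
    """Rank a candidate case by the summed weights of its keywords."""
    return sum(weights.get(k, 0.0) for k in case_keywords)

def revise(case_keywords, validated, step=0.5):
    """Raise the weights of a validated case's keywords and lower those
    of a rejected one: the new optimality criterion is the judge's
    validation rate."""
    delta = step if validated else -step
    for k in case_keywords:
        if k in weights:
            weights[k] += delta

revise(["divorce", "children"], validated=True)   # the judge keeps this case
revise(["divorce", "property"], validated=False)  # the judge rejects this one
print(weights["children"] > weights["property"])  # → True
```

After a few such interactions the weights encode the judge's implicit preferences, even though neither he nor the algorithm ever states those preferences explicitly.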


  1. Decree No. 2020-356 of March 27, 2020, JO March 29, v. P. Januel. Some believe that it is a matter of anticipating the multiple legal actions of covid-19 patients.
  2. A reflection already largely initiated for several years by great authors of judicial thought. In this regard, the authors of these lines warmly thank Antoine Garapon for the exchanges in which he was willing to participate and which greatly nourished this article.
  3. J. Boulouis, “A Few Comments About Legal Security”, in From International Law to Integration Law. Liber amicorum Pierre Pescatore, Nomos Verlag, 1987, p. 53, quoted by J.-G. Huglo, File: The principle of legal security, Cah. Cons. Const. 2001, No. 11.
  4. Assumed as early as 1950 by the American philosopher and judge Jerome Frank (Courts on Trial, 1950), the impact of meals or hunger on the judge’s decision was established by a study published in the journal Proceedings of the National Academy of Sciences (PNAS) (see S. Danziger, J. Levav and L. Avnaim-Pesso, “What did the judge eat at his breakfast?” On the impact of working conditions on the court decision, Cah. just. 579). The study examined more than 1,000 decisions following parole applications, by eight different judges in Israel, over a ten-month period. The judges heard 14 to 35 cases per day in three sessions: one from the beginning of the day to a snack break in the middle of the morning, a second from the morning break to the lunch break, and a third from the lunch break to the end of the day. In general, judges were more likely to accept parole applications at the beginning of the day than at the end, and the chances of an application being accepted were even doubled when the case was heard at the beginning rather than at the end of a session. In fact, the number of cases a judge had to deal with during a session significantly affected his or her decisions. The eight judges studied followed the same pattern, and refused a total of 64.2% of the applications.
  5. Dossier: Judges under the influence, Cah. just. 501 s.
  6. A. Philip, You swear not to listen to hatred or wickedness… Biases affecting court decisions, Cah. just. 563.
  7. Overestimation of one’s capacities relative to the average. The study authors find that 56% of judges think they are among the quarter of judges least overturned on appeal, and 88% think they are in the best half, which is mathematically impossible. Biases affecting court decisions, art. prec.
  8. K. Diallo, Artificial intelligence tries to correct racist biases in the justice system, Le Figaro, June 13, 2019.
  9. The question of jurisprudence, as the main outcome of the decisions of the Court of Cassation, is not addressed here.
  10. Consultative Council of European Judges (CCJE), Opinion No. 11 (2008), to the attention of the Committee of Ministers of the Council of Europe, on the quality of court decisions; N. Fricero, “The quality of court decisions within the meaning of Article 6, § 1, of the European Convention on Human Rights”, art. cit., p. 56; in the same sense, an author notes that motivation is “an essential guarantee for the litigant, as it is intended to protect him from the arbitrariness of the judge”, v. J.-P. Ancel, The drafting of the court decision in France, RID comp. 1998, p. 852.
  11. In a January 2014 study commissioned by the Ministry of Justice, 95% of French people blame the judiciary for its slowness.
  12. From Aristophanes’ Wasps to Kafka’s Trial, via the Farce of Master Pathelin or Balzac’s Les Plaideurs.
  13. In 2017, the average time to obtain a court decision was nine months before the administrative judge, six months before the district judge, seven months before the high court, fifteen months before the conseil de prud’hommes and thirteen months before the court of appeal.
  14. 8,313 magistrates in France in 2017, i.e. 11.9 judges per 100,000 inhabitants, making France one of the lowest-endowed countries in Europe according to a Council of Europe study, and 84,969 civil servants, for 2,609,394 civil and commercial decisions and 1,180,949 criminal decisions.
  15. B. Dondero, Predictive justice, Professor Bruno Dondero’s blog.
  16. Information report on behalf of the Laws Commission, “Redressing Justice”, Senate, No. 795, April 4, 2017, p. 139 s.
  17. Hence the use of quotation marks when using the term so far.
  18. D. M. Blei, A. Y. Ng and M. I. Jordan (2003), Latent Dirichlet allocation, Journal of Machine Learning Research, 3 (Jan), 993-1022.
  19. L. Godefroy, F. Lebaron and J. L. Vehel, How digital technology transforms law and justice: towards new uses and an upheaval in decision-making. Research report, 2019.