
Appeal Petition/Administrative Claim 63194-08-25 Nevo Ben Cohen v. Ramat Gan Municipality - part 4

March 22, 2026


Noam Sohlberg

Vice President


Judge Ofer Grosskopf:

I agree.


Ofer Grosskopf

Judge


Judge Gila Kanfi-Steinitz:

I concur in the judgment of my colleague, Vice President N. Sohlberg. My colleague described the failures in the municipality's conduct, and his judgment sets up a warning sign regarding the manner in which administrative authorities use artificial intelligence systems. Indeed, many benefits may arise from the use of artificial intelligence tools by administrative authorities – both in improving service to the citizen and in saving public resources. However, alongside these benefits, my colleague rightly emphasized the danger inherent, at this time, in blind reliance on this technology.

In light of the rapid pace of development of artificial intelligence tools, and the changes they are expected to bring about in the public service, I wish to add a few comments on the challenges posed by decision-making using artificial intelligence tools, from the perspective of administrative law.

  1. First, it seems that there is a wide gap between the submission of court filings that rely on non-existent references, the product of artificial intelligence "hallucinations," and the adoption of an administrative decision based on such tools. My colleague, Vice President Sohlberg, attributed the administrative failure to the ground of arbitrariness, but it seems that in some cases the problem may go to the very root of the authority's power. It is, after all, a basic rule that an authority may not divest itself of its discretion, or delegate its authority, without explicit authorization (High Court of Justice 2303/90 Filipovich v. Registrar of Companies, IsrSC 46(1) 410, 420 (1992) (hereinafter: the Filipovich case); Baruch Bracha, Administrative Law, vol. 2, 154 (1996)). From this perspective, some have pointed out that the integration of artificial intelligence systems into the decision-making process challenges this prohibition, since in practice individual decisions, and even decisions relating to the formulation of general policy, are "delegated" to the algorithm and, implicitly, to the private suppliers who develop it (Deirdre K. Mulligan & Kenneth A. Bamberger, Procurement as Policy: Administrative Process for Machine Learning, 34 Berkeley Tech. L.J. 773 (2019) (hereinafter: Mulligan & Bamberger); Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1296-97 (2008) (hereinafter: Citron)).
  2. Indeed, artificial intelligence tools may be used for assistance and nothing more – assistance that must itself be careful and controlled – without the administrative authority divesting itself of its professional judgment (and compare: High Court of Justice 38379-12-24 Anonymous v. Sharia Court of Appeals, Jerusalem, paragraph 26 [Nevo] (February 23, 2025) (hereinafter: the Anonymous case)). However, the picture changes where the decision-making itself is transferred to the artificial intelligence tool. In that situation, the use of the system can no longer be viewed as mere 'technical assistance', akin to the use of a calculator, a database, or a drafting tool (for more on the distinction between delegation, on the one hand, and assistance and consultation, on the other, see: the Filipovich case, at pp. 422-424; Dafna Barak-Erez, Administrative Law, vol. 1, 178-180 (2010)); rather, the center of gravity of the exercise of judgment shifts from the human being to the artificial intelligence tools. Transferring the decision to an artificial intelligence that exercises a kind of "discretion" of its own may, in some cases, amount to an improper divestment of discretion or a prohibited delegation.

Such, for example, is the case before us, in which the authority based its decision on a non-existent Director-General's Circular, the product of artificial intelligence "hallucination," while referring to its various sections and relying on detailed "quotes" from that invented circular. There can be no dispute that had the decision been made by a human entity, the outcome of the administrative proceeding would have been entirely different. Moreover, had the decision been delivered to the appellant without the detailed reasoning on which it was based, he might not have been able to discern the failure at its root.

  3. In order to avoid an improper divestment of the administrative authority's power and discretion, my colleague rightly determines that administrative bodies must exercise human control over the system's product and examine it before making a decision. This approach, known as "human-in-the-loop," is also reflected in parallel legal systems (see, for example, Article 22 of the European Union's General Data Protection Regulation; for more on the regulatory approaches adopted in comparative law, see Ben Green, The Flaws of Policies Requiring Human Oversight of Government Algorithms, 45 Comput. L. & Sec. Rev. 1, 4-7 (2022) (hereinafter: Green); Aziz Z. Huq, A Right to a Human Decision, 106 Va. L. Rev. 611, 620-27 (2020)).

Indeed, human supervision may, among other things, increase accountability and, in some cases, improve the quality and accuracy of the system's products (Ministry of Innovation, Science and Technology & Ministry of Justice, Policy, Regulatory and Ethical Principles in the Field of Artificial Intelligence 55-56 (2023) (hereinafter: the Artificial Intelligence Policy and Regulation Document)). However, the current research literature shows that a requirement for human supervision, in and of itself, falls far short of providing adequate protection against the failures or errors of artificial intelligence, and it warns against adding a human factor to the decision-making chain without taking into account the complexity of the interface between humans and artificial intelligence tools (see the article to which my colleague referred: Rebecca Crootof, Margot E. Kaminski & W. Nicholson Price II, Humans in the Loop, 76 Vand. L. Rev. 429 (2023); see also: Citron, at pp. 1271-1277; Green, at pp. 7-11). For example, this interface often suffers from cognitive biases that undermine the effectiveness of supervision. Some human supervisors tend to rely excessively on system outputs instead of exercising independent judgment (a phenomenon known as "automation bias"). On the other hand, studies indicate an irrational tendency to reject a decision made by an artificial intelligence system even in the absence of any justification for doing so (a phenomenon known as "algorithm aversion"; for more on these biases, see: Marina Chugunova & Daniela Sele, We and It: An Interdisciplinary Review of the Experimental Evidence on How Humans Interact with Machines, 99 J. Behav. & Exp. Econ. 1, 8-9 (2022); the Artificial Intelligence Policy and Regulation Document, pp. 57-58). It should also be noted that human involvement undertaken without the necessary means and knowledge often lacks any real ability to identify and correct the errors or biases of artificial intelligence.
In other cases, it may even detract from the quality of the product obtained from the artificial intelligence and offset its advantages (the Artificial Intelligence Policy and Regulation Document, pp. 56-57; Green, at p. 8; Amit Haim, The Administrative State and Artificial Intelligence: Toward an Internal Law of Administrative Algorithms, 14 UC Irvine L. Rev. 103, 142 (2024)).

  4. In light of these insights, presented here in a nutshell, one should take care not to read the "human-in-the-loop" model as a mere requirement of formal human supervision. Clearly, passive supervision and examination of the system's products alone will not suffice; more significant involvement on the part of the authority should be required, beyond the obligation on which my colleague insisted. This heightened obligation is not made of one cloth, and it may expand or contract according to the nature of the administrative decision and the characteristics of the system used. Without purporting to be exhaustive, this involvement may be expressed, inter alia, in informed use that includes sufficient familiarity with the system's advantages, shortcomings, and limitations; and, more importantly, in the competence and authority to reject its recommendations in appropriate cases (and compare: State v. Loomis, 881 N.W.2d 749, 768-70 (Wis. 2016); Green, at pp. 6-7). It may impose a duty to continuously monitor the resulting products while examining them critically, inter alia by comparing them to similar human decisions – in particular with regard to matters that have a real impact on the individual, as in our case. It is possible that this involvement should begin already at the stages of designing the system and its training process, while adapting its characteristics to the rules of administrative law (see: Mulligan & Bamberger, at p. 773). Finally, it is appropriate that, prior to implementing such systems, the state authorities conduct a preliminary procedure examining whether the decision at hand is indeed suitable for the use of an artificial intelligence model, and whether the specific model is sufficiently reliable and appropriate in its characteristics to assist in making that decision (for the considerations required in this regard, see: Citron, at pp. 1303-1304; Green, at pp. 11-15).
Of course, these conclusions must be revisited from time to time in light of technological changes and the tools available for use.
  5. These comments are far from exhausting the range of fundamental challenges and considerations that should be taken into account in the use of artificial intelligence tools by the public administration. They are a call to attend to the questions that will surely occupy the administrative authorities down the road, and to the rules that would do well to be fashioned step by step. It goes without saying that the proper working interface between humans and artificial intelligence depends on the circumstances; it must be designed with thought and care, and with attention to the frequent technological changes that characterize this field. Nor is it impossible that, given the accelerated pace of technological development, the means detailed above will one day become obsolete, and the substantive obligations imposed on the authority will be satisfied in alternative ways as yet unknown to us. As I had occasion to note in another case – "We must take into account the possibility that concerns we are required to address today will, over time, receive a satisfactory technological response; and that rules appropriate for this time will give way to more current and appropriate rules" (the Anonymous case, at paragraph 26).

For now, however, the adoption of artificial intelligence tools requires one "to make responsible, careful and critical use of this technology, to understand its capabilities and limitations in depth, and to be updated from time to time regarding its weaknesses and strengths" (the Anonymous case, at paragraph 26). These words, said with respect to the duties of lawyers, apply with all the more force when we are dealing with state authorities.
