Uncontrolled Use of Artificial Intelligence in the Relationship between the Citizen and the Administrative Authority
- One of the main rationales underlying the heightened duty of fairness imposed on administrative authorities is the dramatic power disparity that exists between the authority and the citizen:
"The individual and the government are not equal in rights, not equal in power, and not equal in status. [...] The government holds in its hands great strength, great power and great wealth, such that the individual – however great his own strength, power and wealth may be – cannot compare or be likened to it. [...] The duty of fairness that the government owes to the individuals of society derives from the excess power that the government holds, from the very fact that its power is so great. The duty of fairness is intended to serve – alongside other means – as a brake on strength and a restraint on power" (HCJ 164/97 Kontram Ltd. v. Ministry of Finance, Customs and VAT Division, IsrSC 52(1) 289, 367-368 (1998); see also ibid., pp. 317-318; see also: Yitzhak Zamir, The Administrative Authority, vol. 3, 1634 (2014); Daphne Barak-Erez, Administrative Law, vol. 1, 276 (2010)).
- This rationale implies that uncontrolled use of artificial intelligence, as in this case, is all the more severe when it takes place in the direct relationship between the authority and the citizen. When such use is made in the framework of a legal proceeding, two actors mediate, at least to a certain extent, the power disparities between the parties – the citizen's lawyer and the court. In the context at hand, this means that even if such use of artificial intelligence amounts to a serious breach of a procedural duty imposed on the parties, as detailed above, the probability that it will have an actual impact on the outcome of the proceeding is relatively low. This is because it can be assumed, at the first stage, that the lawyer will notice the misleading material – that is, those references and claims that originate not in the law but in the 'hallucinations' of artificial intelligence – and bring the matter to the attention of the court; and it is certainly reasonable that, at the second stage, even if the lawyer 'misses' those failures, the court will turn its attention to them and rule in accordance with the law, without the misleading material affecting its decision.
- In contrast, in the direct relationship between the authority and the citizen, these assumptions regarding the barriers that may limit the practical impact of the uncontrolled use of artificial intelligence do not hold. When a citizen receives a decision from an administrative authority that appears to be reasoned and supported by legal grounds and references, in most cases he is unable to scrutinize it, and certainly not to discover on his own that those grounds are nothing but the product of an artificial intelligence system's imagination. Hence, in contrast to the case of uncontrolled use of artificial intelligence in the framework of a legal proceeding, in the direct interaction between the authority and the citizen such use may well have a real impact on the citizen's situation. In fact, in light of the above, it is reasonable to assume that in most cases the authority's decision will stand, and the citizen will accept it without even knowing that he has been wronged. The probability of this is inversely proportional to the citizen's awareness, ability and resources to challenge the authority's decision, so that it is precisely the citizens with the least awareness and resources who stand to be harmed the most.
- This means that the uncontrolled use of artificial intelligence by the authority in its direct relations with the citizen is of particular and heightened severity. This is in addition to the fact that such conduct also involves a breach of additional duties imposed on administrative authorities. Thus, for example, a decision based mostly on artificial intelligence 'hallucinations' can hardly be regarded as a decision that fulfills the duty to give reasons (on the general difficulty of explaining and reasoning decisions that are the product of an artificial intelligence system, see, for example: Hofit Wasserman-Rosen, "To It and Its Thorn: The Right to Explainability of Artificial Intelligence Systems," Mishpat, Society and Culture 8, 215 (2025)).
- In addition, and further to the above, there is reason to believe that such a decision is, as a rule, arbitrary. As Justice Shamgar aptly put it, arbitrariness is "an act done by an authority without taking into account the facts and reasons before it, relying solely on its governmental power. It is not malice that governs the arbitrary act, but rather the absence of consideration and the absence of attention" (HCJ 376/81 Lugasi v. Minister of Communications, IsrSC 36(2) 449, 460 (1981); see also: Administrative Appeal 1930/22 Jerusalem Open House for Pride and Tolerance v. Jerusalem Municipality, para. 39 of the judgment of my colleague, Justice Grosskopf [Nevo] (October 11, 2023); for a similar definition of arbitrariness, referring to the conduct of the enforcement authorities in the field of criminal law, see: LCrimA 1611/16 State of Israel v. Vardi, para. 71 and the references there [Nevo] (October 31, 2018)). It is clear that at the present time, a decision that relies 'blindly' on an artificial intelligence system falls within the set of decisions made without attention to, or consideration of, the relevant considerations and data; in fact, it may even be said that, in essence, no discretion was exercised in making it. It therefore falls readily within the ground of arbitrariness. This conclusion is further reinforced by the close connection between arbitrariness and lack of reasoning (on this, see, for example: HCJ 6728/06 Ometz (Citizens for Proper Administration and Social Justice) v. Prime Minister of Israel, paras. 14-18 of the opinion of Justice Naor [Nevo] (November 30, 2006); HCJ 143/56 Ahjij v. Traffic Supervisor, IsrSC 11 370, 372 (1957)).
I am of the opinion that practical conclusions can be drawn from the above on two levels.
- The first concerns the positive duties that apply to administrative authorities when it comes to the use of artificial intelligence. Obviously, this is not the place for an exhaustive discussion of this issue – or anything close to one. Let us not forget that we are dealing with an appeal concerning legal expenses. However, it seems to me that some preliminary insights on this matter, with a forward-looking view, can be extracted from this case.
- One insight is that as long as the reliability of artificial intelligence systems is in doubt, since they still 'hallucinate' from time to time, we cannot accept a situation in which an administrative authority abdicates its discretion and bases its decision exclusively, or almost exclusively, and without any control, on the output of such systems (the requirement for human involvement in certain decision-making processes, with an emphasis on decisions of administrative authorities, is often referred to in the literature as a requirement for a "human in the loop" – a requirement that can apply both at the decision-making stage and at the earlier stages of algorithm design and system training; on the latter context, see: Niva Elkin-Koren and Maayan Perel, "Artificial Intelligence: Disruptive Technology in Israeli Law," Mishpat, Society and Culture 8, 11, 22 (2025)). This is especially true where a substantive decision is concerned, one that has a real impact on the citizen (for an analysis of various aspects of this issue, with its many complexities and the many challenges it raises, see, for example: Rebecca Crootof, Margot E. Kaminski & W. Nicholson Price II, Humans in the Loop, 76 Vand. L. Rev. 429 (2023); Ryan Calo & Danielle Keats Citron, The Automated Administrative State: A Crisis of Legitimacy, 70 Emory L.J. 797 (2021); Amit Haim, The Administrative State and Artificial Intelligence: Toward an Internal Law of Administrative Algorithms, 14 UC Irvine L. Rev. 103 (2024); Ministry of Innovation, Science and Technology and Ministry of Justice, Principles of Policy, Regulation and Ethics in the Field of Artificial Intelligence 53-59 (2023) (hereinafter: the Policy and Regulation Document in the Field of Artificial Intelligence)).
- Thus, even if, in formulating and drafting a decision, administrative officials are assisted by an artificial intelligence system – and, as noted, there is nothing wrong with this in and of itself – they must monitor, examine and verify. This is true both with regard to the exercise of discretion on the merits and with regard to the reasoning underlying the decision (I would emphasize that the above does not express a position on the question whether, in contexts where the reliability of artificial intelligence systems becomes very high, and even exceeds that of human professionals, administrative authorities may make decisions without any human involvement at all, especially in matters that significantly affect the lives of citizens; that question must await its day).
- Another aspect that can be gleaned from the above discussion and the circumstances of the case relates to a disclosure requirement regarding the use of artificial intelligence. Such a requirement has not yet been established in our legal system, but there may well be logic in establishing such a duty, at least in certain contexts where the impact on the citizen may be great. A central reason for this is that such disclosure may help, if only somewhat, to reduce the power disparities I discussed above: a citizen in whose case a decision has been made using artificial intelligence will at least be aware of this, which will somewhat improve his ability to plan his steps accordingly, examine the correctness of the decision, and perhaps even challenge it through the appropriate channels (see and compare: the Policy and Regulation Document in the Field of Artificial Intelligence, pp. 63-66, 85-86; for a comprehensive analysis of the issue of disclosure and transparency in the context of the use of artificial intelligence by authorities, see: Dalit Kan-Dror Feldman, Or Sadan, Racheli Edri-Hulta, and Uri Szold, "Transparency in the Age of Artificial Intelligence: Israeli Law," Mishpat, Society and Culture 8 (2025)). In any case, it is clear that I am not laying down hard-and-fast rules, and these words are said by way of reflection, with a forward-looking gaze.
- The second level on which practical conclusions can be drawn from the above discussion relates to the scope of judicial review and the issue of remedy. As is well known, the scope of judicial review of an administrative authority's action, as well as the remedy awarded for a defect that occurred in it, are closely related to the nature and character of the defect. I am of the opinion that the uncontrolled use of artificial intelligence by the authority in the framework of its direct relationship with the citizen opens the door to stricter and tighter judicial review, and may justify, as a rule, a significant remedy. The first reason for this lies in the extreme severity of such conduct, as detailed above. The second lies in the fact that, as explained, the uncontrolled use of artificial intelligence increases the suspicion that the decision-making process was flawed, and that the decision suffers from lack of reasoning, arbitrariness, and other defects. A third reason, which derives directly from the analysis of the power disparities between the parties, is found in considerations of shaping behavior. As I have explained, it is reasonable to assume that in most cases an ordinary citizen will not be aware that a decision in his case was made based on an artificial intelligence system and includes various misleading statements and 'hallucinations'; in such cases, the decision will not come under judicial review and will remain in place. This means that there is a concern that the authority will not take sufficient care to refrain from such conduct, in view of the low probability that its decision will be subject to an external control mechanism (this situation can be thought of as a case of 'under-deterrence').
Therefore, in order to ensure that administrative authorities adopt proper modes of conduct in this context, and to prevent in the first place the uncontrolled use of artificial intelligence in their relations with the citizen, they must know that their decisions will be subject to meticulous and stringent judicial review, which may also be accompanied by significant remedies.
- From the general to the specific: the municipality made uncontrolled – not to say reckless – use of artificial intelligence, both in its direct relationship with the appellant's father, in response to his request, and in the framework of the legal proceedings. It did so on a large scale, without any effective control, and on an issue of fundamental importance to the appellant and his father – the right of the minor, a special education student, to transportation to school. As a result, the documents on its behalf contained many substantial flaws – both in its responses to the citizen and in the court documents it submitted. In these circumstances, and taking into account all the considerations that apply in such a case, as detailed above, I am of the opinion that we must convey a clear and unequivocal message, and impose on the municipality expenses on the high side.
Final Notes
- Before concluding, three comments. First, alongside the severity of the municipality's conduct, and the concerns that arise in light of the uncontrolled use of artificial intelligence by administrative authorities, we must be careful not to 'throw the baby out with the bathwater' (see and compare: the Anonymous case, para. 26; "I sent you to mend, and not to mar" (Bavli, Bava Batra 169b) – that is, to correct (improve), not to distort (spoil)). The use of artificial intelligence by administrative authorities is likely to bring many benefits to our world, and to contribute in a variety of ways to improving and streamlining the operation of those authorities, ultimately improving the service provided to the citizen. The message, then, is not that the authority should refrain from using artificial intelligence systems; quite the opposite. All I mean to say is that this use must be made with good sense and discretion, out of awareness of the technology's current limitations, and while keeping to the compass whereby such use is made for the benefit of the citizen and while safeguarding his rights.
- Second, in many contexts we witness the phenomenon whereby, as I have said in the past, "the law lags behind the innovations of the world, and legislation does not keep pace with the progress of science and its applications" (Administrative Appeal 3782/12 Commander of the Tel Aviv-Jaffa District of the Israel Police v. Israel Internet Association, para. 23 [Nevo] (March 24, 2013)). This is especially relevant when it comes to artificial intelligence – a technology that is developing at a record pace, has a significant impact on our world, and may bring about far-reaching changes. In view of the power and breadth of its effects, this technology already raises considerable challenges on the legal plane, and is expected to continue to present us with new, increasingly complex challenges. The relevant authorities – both the legislative and the executive branches – would do well to address these challenges, especially the more urgent among them (such as the question of the duties that apply to administrative authorities when making use of artificial intelligence), and to design, from a broad perspective, the appropriate tools for dealing with them.
- Third, in the response submitted by the municipality, the municipality's attorney argued that "even if there was an unfortunate clerical error in one quote or another in the pleadings," this happened "in light of the [...] interns" (in the sense of 'the error of a student'; on this see, for example, Rabbi Menashe Klein, Responsa Mishneh Halachot 12, para. 34). It would have been better had this argument not been raised at all. Even if it is true on the factual level – and, for the avoidance of doubt, I do not determine or imply that this is indeed the case – it neither adds nor detracts. It is clear that the responsibility for what is written in a document submitted to the court lies with the lawyer who signed it.
In more general terms, the attempt to shift responsibility for errors in court documents submitted by a lawyer onto others – whether interns subordinate to the supervising lawyer or artificial intelligence systems – is a reprehensible attempt that must be rejected outright. As a professional, the lawyer is personally responsible for the quality of the product he produces, and he does not fulfill his duty by relying on someone else – human or machine.
- In summary: I would therefore suggest to my colleagues that we accept the appeal and order the municipality to pay expenses in the sum of NIS 30,000 – relating both to the proceeding before us and to the proceeding that took place in the District Court, and especially to the administrative proceeding.
After some hesitation, I have decided, this time, to refrain from imposing personal expenses on the municipality's attorney under Regulation 151(c) of the Regulations; at the very least, I will note the very existence of this possibility.