I’m sorry Dave, I’m afraid I can’t do that

Adi Marcus, Adv.
July 28, 2024

In the classic film “2001: A Space Odyssey” (1968), HAL 9000, the artificial intelligence system, asserts: “No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error”, just before it goes on to systematically kill the astronauts. In “The Terminator” (1984), the artificial intelligence Skynet wages a war of attrition against the remnants of humanity in an attempt to eliminate them. What if we had a way to ensure that Skynet would never be born, or that HAL would never be placed on a spaceship? The new European Union regulation on artificial intelligence will not necessarily save humanity (and, we admit, if it did it would make for a particularly boring science fiction movie), but it aims, at least at its core, to make it possible to identify “dangerous” AI and to limit its ability to develop and move freely in the market without supervision.

The regulation, the final text of which was approved in May 2024 and which entered into force on August 1, 2024, aims to set ethical and transparency limits on the use of AI and to apply those limits uniformly well beyond Europe's borders, much as the EU's privacy regulation (GDPR) came to affect the entire world. Accordingly, the regulation applies not only to every business operating in the field of AI within the EU, but also to any business anywhere in the world that markets or operates AI-based products in the EU.

The regulation takes a risk-based approach that classifies AI systems into four levels of risk: unacceptable, high, limited and minimal. Under this classification, systems that allow the operator to use AI for unethical purposes are deemed to pose an unacceptable risk: for example, systems built to exploit people's vulnerabilities arising from age or physical limitations, systems that use biometric information to categorize people by religion, race or gender, or systems that scrape images from the internet or from security cameras to create databases used for facial recognition. These practices, which constitute a closed list, are prohibited outright, and the company behind such a system may be subject to fines of up to EUR 35 million or 7% of the company's worldwide annual turnover, whichever is higher. Classification as high-risk does not prevent use, but requires registration in an EU database and the putting in place of safeguards, risk-management systems and human oversight, all designed to monitor the system and ensure that it does not get out of control or realize the risk inherent in it. For limited-risk systems, the restrictions and requirements are considerably less strict, consisting mainly of transparency and disclosure obligations. Minimal-risk systems, viewed as more general AI systems capable of serving a large number of purposes, are largely left unaddressed by the regulation and remain subject to other local or European legislation.

The issue, of course, is that AI systems are by definition learning and evolving systems. There is a high probability that the closed list of unacceptable-risk practices will prove significantly incomplete within a few years, as technological development enables, and brings to the surface, fundamental ethical issues that the law does not anticipate. In addition, the definitions are essentially amorphous, in a manner that may make it difficult for companies to correctly classify the systems they operate. Moreover, the classification of a system under development cannot always be predicted at the outset. A company may find itself developing an AI system and investing considerable funds and resources, only to discover at the end of the road that the system is now deemed to pose an unacceptable risk (if only because it can perform some of the actions listed in the regulation, even if these are secondary to the system's real goals) and therefore cannot operate, or that it must invest further funds and resources in adding safeguards and inspection procedures due to its classification as a high-risk system.

The law is not intended to apply in full immediately; it sets a staggered schedule of five to six years for its application. For example, the transparency requirements take effect in the first year, while the operative requirements of human oversight for high-risk systems were given a grace period of two years or more. Nevertheless, this is a rigid schedule built on complicated and sensitive definitions. Every company in the field should therefore promptly carry out a careful legal and technical risk-assessment review and plan ahead, in order to ensure compliance with the requirements when the time comes, and it is vital that this be done in cooperation with lawyers who understand both the field of technology and the new regulation.