
The Chronicles of Amber and the new EU AI Act

Written by Ella Rosenberg
July 8, 2023

In the second cycle of the Chronicles of Amber series (beginning with the 1985 novel "Trumps of Doom") by Roger Zelazny, one of the great fantasy and science fiction writers, the protagonist develops "Ghostwheel", an artificial intelligence system capable of self-learning, which at some point also develops self-consciousness and an impressive ability to protect itself from the hero's attempts to shut it down. While the Ghostwheel dialogues are entertaining and well-scripted, the film "The Terminator" starring Arnold Schwarzenegger, released a year earlier, deals with an apocalyptic future in which artificial intelligence takes over the world in a considerably less cheerful manner.

Nearly four decades later, OpenAI launched ChatGPT, a conversational text system that, when prompted, returns detailed responses drawn from vast amounts of online data, along with the ability to "produce" new content. The system raises many ethical and legal questions regarding the nature of the data it provides, plagiarism, and the risk of generating unfounded information by drawing conclusions from partial data.

The EU Parliament addressed these issues in June 2023 with legislation regulating the use of artificial intelligence, which applies to both governmental entities and commercial companies. The legislation distinguishes three levels of risk: unacceptable, high and limited. Unacceptable risk, which renders a system prohibited outright, covers systems that may manipulate human behavior, such as games that encourage risky behavior in children, or social scoring based on behavior, socioeconomic status or personality traits. High risk covers biometric systems, law enforcement, and critical infrastructure; such systems require regulatory approval. All other systems are classified as limited risk and need only comply with transparency standards that allow users to make an informed decision.
To prevent artificial intelligence from being used to spread false data or to rewrite real information into inaccurate information, the new legislation defines ChatGPT and its equivalents as a separate category of generative artificial intelligence. Such systems must, among other requirements, disclose that their output is created by an artificial intelligence system and be configured to prevent the creation of false content. The more complex part of the legislation is not necessarily its application to companies developing generative artificial intelligence, but rather to companies that use such products (made by others) to market and operate their own systems. Under the new legislation, this too requires regulatory licensing in the destination country within the European Union.

Thus, for example, financial institutions that use artificial intelligence to classify customers into risk groups, scraping companies that process collected information using artificial intelligence, and even biotech companies that use artificial intelligence to predict disease or warn of medical deterioration, all now need to comply with European standards if they operate in Europe or allow access to their systems from Europe. It is therefore important for any such company to obtain appropriate legal advice from a law firm specializing in European legislation in general and in the field of technology in particular.