
Liability for damage caused using artificial intelligence technologies

Roman А. Maydanyk

National Academy of Legal Sciences of Ukraine, Kharkiv, Ukraine

Department of Civil Law, Taras Shevchenko National University of Kyiv, Kyiv, Ukraine

Nataliia І. Maydanyk

Department of Civil and Labour Law, Vadym Hetman National Economic University of Kyiv, Kyiv, Ukraine

Maryna M. Velykanova

Department of Research on Communication Problems between the State and Civil Society, Kyiv Regional Center of the National Academy of Legal Sciences of Ukraine, Kyiv, Ukraine


Abstract. Artificial intelligence technologies, which have recently been developing rapidly, create, along with indisputable advantages, many dangers whose materialisation causes harm. Compensation for such damage raises questions regarding the subjects, the act that caused the damage, the causality, etc. The situation is further complicated by the imperfection of the statutory regulation of relations on the use of artificial intelligence technologies and the insufficiency or ambiguity of judicial practice on compensation for damage caused using digital technologies. Therefore, the purpose of this publication is to outline approaches to applying legal liability for damage caused using artificial intelligence technologies. Based on a systematic analysis using dialectical, synergetic, comparative, logical-dogmatic, and other methods, the study analyses the state of legal regulation of liability for damage caused using artificial intelligence technologies and discusses approaches to the application of legal liability for damage caused using these technologies. In particular, it was concluded that, despite several resolutions adopted by the European Parliament, relations involving the use of artificial intelligence technologies and the application of legal liability for damage caused by artificial intelligence have not yet received final statutory regulation. The regulatory framework is still under development, and rules of conduct in the field of digital technologies are still being created. States, including Ukraine, are faced with the task of bringing legislation in the field of the use of artificial intelligence technologies in line with international regulations to protect human and civil rights and freedoms and ensure proper guarantees for the use of such technologies. One of the priority areas of harmonisation of legislation is to address the issue of legal liability regimes for damage caused using artificial intelligence technologies. Such regimes today are strict liability and liability based on the principle of guilt. However, the ability of a particular regime to perform the functions of deterring and compensating for damage caused using artificial intelligence technologies remains a matter of scientific discussion

Keywords: obligations, tort, electronic identity, civil law, IT technology, compensation for damages

INTRODUCTION

At present, the problem of liability for damage caused using artificial intelligence technologies is actively discussed among scientists. Polar opposite opinions are expressed, and various arguments are offered in favour of one approach or another. In general, the issue of liability for damage caused by artificial intelligence is placed in the context of the essence of artificial intelligence and the determination of its place in the structure of legal relations. Thus, there are three main approaches to determining the legal status of artificial intelligence: 1) its perception exclusively as an object of civil relations, which should be subject to the legal regime of things; 2) its perception exclusively as a subject of civil relations, a carrier of subjective rights and obligations, capable of acting independently and of realising and evaluating the significance of its actions and the actions of other persons; 3) differentiated determination of the place of robots in the structure of civil relations, where they can be both subjects and objects of civil relations [1]. At the same time, it is suggested that non-autonomous or partially autonomous robots should be considered as tools used by subjects of legal relations - the manufacturer, owner, software developer, user, state authority, military chief, etc. Accordingly, legal liability for causing losses or other negative consequences should be assigned proportionally to the developers of robots, their owners, and users. However, the issue of the legal liability of autonomous robots remains unresolved: they effectively cannot be held accountable in their own right for the actions or inaction by which they damage third parties [2, p. 160]. There is also a position that the person responsible for the damage caused by artificial intelligence (hereinafter referred to as "AI") should be identified based on who caused the action or inaction of the AI that ultimately caused the damage, and on the level of autonomy of the AI [3, p. 194-195].

Therefore, the issue of liability for damage caused using AI technologies cannot be resolved without establishing the essence of AI because, as O.V. Kokhanovska notes: "the explanation of the phenomenon of virtuality is important from the standpoint of finding the correct legal approaches to solving issues of guilt and legal liability, protection of rights when it comes to damage caused by a person - a living being and an automaton created by it, a robot or artificial intelligence" [4, p. 147]. However, science has not yet developed a unified approach to understanding the essence of AI. It can only be stated that AI is perceived in at least the following meanings: 1) "weak artificial intelligence" - AI focused on solving one or more of the tasks that a person performs or can perform; 2) "strong artificial intelligence" - AI focused on solving all tasks that a person performs or can perform; 3) "artificial superintelligence" - AI that is much smarter than the best human intelligence in almost every field, including scientific creativity, general wisdom, and social skills, and which can have consciousness and subjective experiences [5]. The role that AI plays is to blur borders, democratise experience, automate work, and distribute resources [6].

In April 2018, within the framework of the EU strategy for AI development, a high-level expert group designed seven key requirements for AI: 1) human agency and oversight: AI systems should enable a fair society, supporting human agency and fundamental rights, and should not reduce or restrict the human right to make decisions; 2) reliability and security: algorithms should be stable, reliable, and robust enough to cope with errors or inconsistencies during all phases of the life cycle of AI systems; 3) confidentiality and data management: citizens should have full control over their data, and data relating to them should not be used to damage or discriminate against them; 4) transparency: AI systems should be traceable; 5) diversity, non-discrimination, and fairness: AI systems should consider the full range of human abilities, skills, and requirements and ensure accessibility; 6) social and environmental well-being: AI systems should be used to enhance positive social change and increase environmental responsibility; 7) accountability: mechanisms should be put in place to ensure responsibility for AI systems and their results (Artificial intelligence: Commission takes forward its work on ethics guidelines, April 2019, https://ec.europa.eu/commission/presscorner/detail/en/IP_19_1893).

Given the ambiguity of the interpretation of the essence of AI and, as a result, the lack of a unified concept of liability for damage caused using AI technologies, there is a need to discuss the issue of legal liability for damage caused by AI. Therefore, the purpose of this publication is to outline approaches to applying legal liability for damage caused using artificial intelligence technologies.

1. MATERIALS AND METHODS

The understanding and application of law should be based on a socially determined approach so that law performs the social functions assigned to it. The improvement of the law should ideally be guided by a common sense of justice and based on a combination of different interests, both private and those of society as a whole. The system of modern Ukrainian law to a considerable extent still uses the methodology of Soviet law, which was objectively not designed to serve a law-based state and liberal law. Consequently, the departure from positivism, dogmatism, and ideologisation creates the need, on the one hand, to approach critically the previously used methods of cognition and transformation of reality and, on the other hand, to develop innovative approaches to the application of the principles of building and organising theoretical and practical activities.

The principles, techniques, means, and methods of research are determined by the essence of the phenomena and processes under study. The study of artificial intelligence and of relations on the use of digital technologies should take into account the comparative novelty of such an object of research activity, and therefore use both long-known and widely used techniques and new methodological tools that are not yet familiar to legal science. Therefore, when studying legal liability for damage caused using artificial intelligence technologies, it was advisable to use such general methods as the dialectical, Aristotelian, synergetic, and comparative ones, as well as the logical-dogmatic method of interpreting law as a special method of scientific cognition, including the method of hermeneutics.

The dialectical method of research, which allows analysing various social phenomena in their development, was used to study the contradictions between approaches to the application of a particular regime of legal liability for damage caused using artificial intelligence technologies and to establish the causality between the understanding of the essence of artificial intelligence and legal liability regimes. Synergetics, which considers development as the self-development of complex systems, proves that each such system has not a single line of development but many such lines. Therefore, the use of the synergetic method allowed the study to consider not just a single line of development of liability relations, but to conduct research factoring in their multidimensional nature.

Using the Aristotelian method, judgements regarding the application of legal liability for damage caused using artificial intelligence technologies were justified by provisions based on proven theories. The use of the comparative method allowed analysing and identifying the contradictions of the proposed approaches to compensation for damage caused using artificial intelligence technologies. The comparative method also allowed studying the state of legal regulation of liability for damage caused using artificial intelligence technologies and concluding that such statutory regulation is insufficient and that national legislation needs to be updated in line with current global trends in this area.

The logical-dogmatic method of scientific research helped identify the obvious attributes (aspects, characteristics) of legal phenomena without delving into internal essential connections. It was aimed at cognising the dogma of law, which solves the problems of systematisation, interpretation, and application of law, as well as the development of law. However, the change of worldview and legal understanding and, as a result, of the methodological approaches to the study of legal phenomena, as well as the increasing fluidity of the dogma of law, require complementing the dogmatic method with the method of hermeneutics, which involves engaging the researcher's intellect, feelings, and intuition with the subject of cognition. The main idea of hermeneutics is that the essence of any socio-legal phenomenon can be understood only in the context of the historicity of its existence. Given the above, the use of the logical-dogmatic method along with the method of hermeneutics allowed considering the regimes of legal liability for damage caused using artificial intelligence technologies through the lens of their perception by interstate institutions and researchers. In particular, using these techniques, an attempt was made to interpret the regimes of legal liability through the analysis given in the European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics, the European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence, and the scientific literature on the conditions of legal liability.

The main stages of the study of legal liability for damage caused using artificial intelligence technologies were as follows: 1) formulation of the hypothesis that compensation for damage caused using artificial intelligence technologies should be based on the conventional principles of legal liability - strict liability and liability on the principle of guilt - considering the specific field of activity, namely digital technologies; 2) analysis of the state of legal regulation of relations on the use of artificial intelligence technologies and of liability for damage caused using such technologies; 3) study of doctrinal approaches to the application of legal liability regimes for damage caused using artificial intelligence technologies and of how effectively they perform the functions of deterrence and compensation of such damage; 4) formulation of the conclusion that liability for damage caused using artificial intelligence technologies depends on the level of risk of artificial intelligence systems.

2. RESULTS AND DISCUSSION

2.1 Legal regulation of liability for damage caused using artificial intelligence technologies

On 20 October 2020, the European Parliament approved the resolution with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)) (available at https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html). The introduction to this resolution states that, to efficiently exploit the advantages and prevent potential misuses of AI systems and to avoid regulatory fragmentation in the Union, uniform, principle-based, and future-proof legislation across the Union for all AI systems is crucial. Technology development must not undermine the protection of users from damage that can be caused by devices and systems using AI. The question of liability in cases of harm or damage caused by an AI system is one of the key aspects to address within this framework.

Prior to the approval of the said 2020 resolution, the regulation of liability for damage caused by AI was partially implemented by the European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) (available at https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html). In fact, in the 2017 resolution the European Parliament, given that the more autonomous robots are, the less they can be considered simple tools in the hands of other entities (such as the manufacturer, operator, owner, user, etc.), questioned whether conventional liability rules suffice to ensure clarity regarding the legal liability of the various entities for the actions and inaction of robots where the cause cannot be traced to a particular human actor, and where it remains unresolved whether the actions or inaction of robots that caused damage could have been avoided. It was suggested that the autonomy of robots raises questions concerning their nature in the light of existing legal categories, or whether a new category should be created with its own specific features and consequences. Therewith, it was noted that under the current legal framework robots cannot by themselves be held liable for actions or inaction that cause damage to third parties; at the same time, the existing liability rules cover cases where the cause of the robot's action or inaction can be traced back to a particular human agent, such as the manufacturer, operator, owner, or user, and where this agent could have foreseen and avoided the robot's harmful behaviour. Furthermore, manufacturers, operators, owners, or users may be held strictly liable for the robot's actions or inaction. If a robot or AI can make autonomous decisions, conventional rules will not be sufficient to impose legal liability for damage caused by the robot, since they do not allow identifying the party responsible for providing compensation and requiring that party to compensate for the damage it has caused. In addition, in this resolution the European Parliament stated the need to design new, effective, and modern rules corresponding to the technological developments and innovations that have recently emerged and are used on the market, both in the field of contractual and non-contractual liability, since the conventional rules of contractual liability are not applicable, and Directive 85/374/EEC, which covers non-contractual liability, can only cover damage caused by manufacturing defects of a robot, provided that the injured person can prove the actual damage, the product defect, and the causality between the damage and the defect; hence the rules of strict or no-fault liability may be insufficient. In paragraph 59 of the 2017 resolution, the European Parliament called on the Commission, when conducting an impact assessment of its future legislative tool, to study and analyse the creation of a specific legal status for robots in the long term, so that at least the most complex autonomous robots could be established as having the status of electronic persons responsible for compensating any damage they may cause, with the possible application of electronic personality to cases where robots make independent decisions or otherwise interact with third parties independently.

In 2020, Policy Department C, at the request of the European Parliament's Committee on Legal Affairs, conducted a study that resulted in recommendations regarding AI civil liability. The study notes that, to date, the only possible fundamental and universal reasoning about artificial intelligence systems is that there are no philosophical, technological, or legal grounds to consider them anything other than artefacts generated by human intelligence, and hence products. From an ontological standpoint, all advanced technologies are not subjects but only objects, and there is no reason to grant them rights and hold them legally responsible. Even under the existing liability standards, it is always theoretically possible to identify a person who can be found responsible for losses caused by using the device. From a functional standpoint, however, one can define certain conditions under which it is advisable to assign a fictitious form of legal personality to a certain class of applications, as is currently the case with corporations. Nevertheless, if the concept of electronic personality is understood as a way to recognise the possibility for a machine to acquire rights or be burdened with responsibilities in the light of its internal features - intelligence, the ability to learn and modify itself, autonomy, unpredictability of its results - which cause it to differ from other objects, such a proposal should be rejected and objected to [7, p. 9, 38].

In fact, this approach was developed in the abovementioned 2020 resolution, which noted no need for a complete review of well-functioning liability regimes, but rather for specific and coordinated adjustments to liability regimes to avoid a situation where persons who suffer damage or whose property is damaged find themselves without compensation. For over 30 years, the Product Liability Directive (Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex%3A31985L0374) has proved to be an effective means of obtaining compensation for damage caused by a defective product, but it needs to be revised to adapt it to the digital world and to the challenges posed by new digital technologies, thus ensuring a high level of effective consumer protection as well as legal certainty for consumers and businesses, while avoiding high costs and risks. The Product Liability Directive should be updated in parallel with Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety (https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex%3A32001L0095).

The current civil legislation of Ukraine contains no special provisions that would regulate the issues of liability for damage caused using AI technologies. However, given the rapid development of digital technologies and the urgent need to protect human and civil rights and freedoms and democratic values, as well as to provide proper guarantees during the use of such technologies, relations on the use of AI are gradually gaining legal regulation. Thus, on December 2, 2020, the Cabinet of Ministers of Ukraine approved the Concept of Development of Artificial Intelligence in Ukraine (Order of the Cabinet of Ministers of Ukraine No. 1556-р, https://zakon.rada.gov.ua/laws/show/1556-2020-%D1%80#Text). Among the principles of development and use of artificial intelligence technologies, adherence to which fully complies with the Organisation for Economic Co-operation and Development principles on artificial intelligence, the Concept defines the principle of assigning responsibility for the proper functioning of artificial intelligence systems to the organisations and persons who develop, implement, or use them in accordance with these principles. Bringing legislation in the field of artificial intelligence technologies in line with international regulations is one of the priority areas for the implementation of the Concept.

As part of the recodification of the civil legislation of Ukraine, the Working Group, as one of the areas of updating the statutory array of the Civil Code of Ukraine (https://zakon.rada.gov.ua/laws/show/435-15) (hereinafter referred to as "the CCU"), proposes to develop provisions on the "digital rights" of a person as a type of personal non-property rights that create the possibility of realising interests in the field of digitalisation; to regulate more clearly the features of the implementation and protection of personal non-property rights vested in persons with special legal statuses (legal modes), in particular a digital (electronic) person; and to supplement the current CCU, among other things, with provisions on

1) compensation for damage caused by malicious software;

2) compensation for damage caused by robotics and artificial intelligence [8, p. 18, 51].

Thus, the statutory regulation of relations on the use of artificial intelligence technologies in general, and the application of legal liability for damage caused by artificial intelligence in particular, is still at the stage of development. Several resolutions adopted by the European Parliament established the principles of legal liability and laid down conceptual provisions for compensation for damage caused by artificial intelligence. At the same time, it is the right of any country to supplement its current national legislation in the field of application of artificial intelligence technologies. Such an update of legislation should consider current global trends, including those regarding liability for damage caused by AI, and at the same time cover future technological developments, including developments based on free and open-source software, since, as noted in the 2020 resolution, any future-oriented civil liability legal framework should inspire confidence in the safety, reliability, and consistency of products and services, including digital technologies, so as to strike a balance between effective and fair protection of potential victims of damage and sufficient freedom for businesses, especially small and medium-sized ones, to develop new technologies, goods, or services. This would help build trust and create stability for investment because, ultimately, the goal of any liability system should be to provide legal certainty for all parties, whether the manufacturer, operator, victim, or any other third party. Therefore, the authors of this study tried to outline approaches to the application of legal liability for damage caused using artificial intelligence.

2.2 Legal liability for damage caused using artificial intelligence technologies

Liability plays an important dual role in everyday life: on the one hand, it guarantees that a person who has suffered damage, or faces the danger of it, has the right to claim and receive compensation from the party proved to be liable for that damage or loss; on the other hand, it creates economic incentives for individuals and legal entities to prevent damage or losses or to bear the risk of having to pay compensation.

At present, there is a rather ambiguous situation with compensation for damage caused using AI technologies. Scientific publications offer various approaches to imposing legal liability for such damage - from applying the rules of compensation for damage caused by a source of increased danger to assigning responsibility to the AI itself as an electronic person. This diversity of opinions is explained by the comparative novelty of relations on the use of AI and, as a result, by the lack of statutory regulation of these relations and the lack of well-established judicial practice in dispute resolution. Consequently, legal liability for damage caused using AI technologies is on the agenda of the entire legal community. As Virginia Dignum pointed out, tools are needed to integrate moral, social, and legal values with technological development in AI. Responsibility is fundamental to the understanding and research of AI, in particular in the context of the range of subjects involved: for example, who is to blame if a self-driving car harms a pedestrian - the hardware designer; the software developer whose code allows the car to decide on the path; the authority that lets the car on the road; the owner who can personalise the decision-making system in the car according to his or her preferences; the car itself, because its behaviour is based on its self-training; or perhaps all these subjects together [9]. According to Mohammad Bashayreh, the basis of the new liability regime should be the distribution of risk as proportionate responsibility, whereby participants in relations on the use of AI technologies agree to bear the risk of unpredictable AI behaviour [10]. After all, the behaviour of AI systems depends, among other things, on the following factors: 1) data from third-party developers (if any) used to train the system, some of which may be shaped by algorithmic choices made by researchers and developers; 2) how users can provide data to the system during its use; 3) how algorithms (some of which may be probabilistic in nature and therefore difficult to evaluate and fully control) are designed to adapt to or possibly ignore some input data; 4) how people make decisions based on instructions from AI, in particular behavioural effects, as people become overconfident or change their attitude towards risk when using AI for decision-making [11].

Along with the questions of whether AI or an autonomous system can be held legally liable, and whether the person who bears such a legal obligation (the programmer of the algorithm or its inventor/owner) should be responsible for negligent or criminal actions of AI, the question of whether AI can file a claim is also raised [12].

According to Jean-Sebastien Boghetti, different liability regimes may apply depending on the circumstances and legal systems. The author makes a general distinction between sector-specific and non-sector-specific regimes. Thus, when AI is used in an area of activity covered by a specific regime, a sector-specific liability regime is applied - for example, a special (strict) liability regime in case of damage caused by a road accident. Non-sector-specific regimes are quite different in this respect. Each country has its own rules, but there are two types of regimes that can be found in most (Western) legal systems and whose application can at least be contemplated when AI causes damage: liability for goods and liability for guilt [13, p. 95]. This refers to strict liability (damage is compensated regardless of guilt) and liability based on the principle of guilt. These liability regimes are analysed below.

According to the rules of strict liability, a party who causes damage to another compensates for it regardless of fault. In general, there are doubts in science regarding the capability of strict liability to create effective incentives for subjects to avoid causing damage. Tort law has two main statutory goals: compensation of the victims of a tort and deterrence of future tortious behaviour [14, p. 443]. Giuseppe Dari-Mathiacci and Francesco Parisi noted that strict liability, like the absence of liability, cannot provide an effective outcome in terms of stimulating risk reduction. This is explained by the fact that, in a system of strict liability, the causer of damage must bear both the costs of preventive measures and the expected amount of damage, and therefore minimises the sum of these costs. This ensures an effective level of precautionary measures, but only on the part of the damage causer. There are no incentives for the victim to prevent damage, since they always receive compensation. In the absence of liability, the situation is reversed. Fault-based liability rules determine the level of due care and verify whether the relevant party has met that level or not. Accordingly, both parties to the legal relations are motivated to take all measures necessary to prevent the damage or, if it is impossible to avoid it, to reduce the amount of damage caused [15]. According to Emiliano Marchisio, the idea that civil liability should have a deterrent function implies that the obligation to compensate for damages is imposed on the person whom legal systems define as the addressee of such deterrence. This paradigm has remained virtually unchanged over time and has developed two main strategies for distributing liability for damages: liability for guilt and strict liability. The concept of guilt has in some cases been conceptually replaced by the concept of strict liability simply to increase deterrence, even in cases where guilt could not be positively established in court, in order to encourage producers and other professionals to increase investment in security. This approach is supported by scientists, and even comprehensive studies at the supranational level have considered and continue to consider the deterrent function, together with the compensation function, as the central function of civil liability [16].

Imposing liability for damage caused using AI technologies under the rules of compensation for damage caused by a source of increased danger, albeit logical, has its drawbacks. In a previous publication [3], covering the legal issues and risks of using AI technologies, it was already noted that compensation for damage caused by a source of increased danger concerns damage occurring in the course of using a certain vehicle, mechanism, or piece of equipment which, although it can get out of human control, cannot make autonomous decisions. A distinctive feature of AI is its ability to make decisions unassisted. Therefore, the issue is not only the absence of submission to a person's control, but also the unpredictability of AI's actions and of the damage it causes. Accordingly, since such harm is unpredictable, its infliction is not covered by the concept of activities that create an increased danger to the environment, as interpreted in the Principles of European Tort Law [3, p. 194-195]. The norms of strict liability oblige an individual or legal entity that uses AI, or on whose behalf AI acts, to bear responsibility for the damage inflicted, regardless of whether such behaviour was planned or envisaged [17, p. 385].

In the 2020 resolution, the European Parliament noted that, proceeding from the legal challenges that AI systems pose to existing civil liability regimes, it appears reasonable to establish a general strict liability regime for autonomous high-risk AI systems. Responsibility should be assigned to the operator, regardless of where the operation takes place and whether it is performed physically or virtually. This approach is based on risk assessment, which can cover several levels of risk; it should rest on clear criteria and a suitable definition of high risk and provide legal certainty. An AI system poses a significant risk when its autonomous operation involves a considerable potential for causing damage to one or more individuals in a manner that is random and exceeds reasonable expectations. When determining whether an AI system is high-risk, it is also necessary to consider the sector where significant risks can be expected and the nature of the activities undertaken. The significance of the risk depends on the interaction between the severity of the possible damage, the probability of causing damage or losses, and the way the AI system is used. All high-risk AI systems should be exhaustively listed in the Annex to the proposed Regulation (Annex to the resolution: Detailed Recommendations for Drawing Up a European Parliament and Council Regulation on Liability for the Operation of Artificial Intelligence-Systems, https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html), which should be reviewed at least once every six months. If an activity, device, or process controlled by an AI system that causes damage or creates danger is not listed in the Annex to the proposed Regulation, compensation for the damage should be made in accordance with the rules of liability for guilt. The injured person can at least benefit from a presumption of guilt on the part of the operator, who should be able to exculpate themselves by proving that they have performed their duty of due care. If several operators cause damage, they must be jointly and severally liable, but each operator will have the right to recover part of the compensation from the other operators in proportion to their liability, provided that the injured person has received full compensation.

Thus, the 2020 resolution established two regimes of liability for damage caused by AI: 1) strict liability for damage caused by high-risk AI systems; 2) liability on the principle of guilt where the AI system that caused the damage is not classified as high-risk. Procedural issues of prosecution, as well as the amounts of penalties and the limitation periods for such claims, fall within the competence of Member States pursuant to the 2020 resolution.

According to H. Zech, when the risk cannot yet be determined based on the state of technological knowledge, strict liability can serve as a useful tool for controlling technological risk in the face of uncertainty. This liability regime can also be used as a risk-sharing tool, especially in combination with mandatory liability insurance (third-party insurance). However, like any liability rule, it applies only when an individual causality can be proved [18]. Yet, as Emiliano Marchisio notes, in the field of AI the interrelation between cause and effect regarding the causality of damage can be non-linear. Therefore, applying the conventional civil liability paradigm to AI may not considerably improve security and may instead generate negative external effects. This is explained by the fact that compensation for damage to consumers and other end users of AI devices requires, under the conventional paradigm, that the obligation to pay compensation be imposed on manufacturers and programmers. However, manufacturers and programmers cannot forecast the unpredictable "behaviour" of AI algorithms, which is affected by countless variables supplied by databases, Big Data collection, and end users themselves, all of which are completely beyond anyone's reach and control. The scientist believes that strict liability should not be applied if an algorithm programmed in accordance with standards sometimes makes mistakes and leads to negative consequences despite the absence of shortcomings in development or implementation. In these cases, manufacturers and programmers of AI algorithms and devices should be exempt from civil liability for damages. In other words, in all cases where there is no evidence of negligence, carelessness, or ineptitude, and the robot (both in its physical components and in its artificial intelligence aspects) was produced and programmed in accordance with scientifically proven standards, the programmers and manufacturers of AI algorithms and devices should not be held responsible for damages [16]. In this regard, Gyandeep Chaudhary added that the key question regarding AI is whether AI systems merely offer a solution in a particular scenario, like most expert systems, or whether the AI itself makes decisions and acts accordingly, as an autonomous car does. The first case involves at least one external agent, thereby complicating the proof of causality, while in the latter case, due to the lack of participation of an external agent, such proof is relatively easy [19, p. 157]. However, a specific feature of AI is that AI systems are specially programmed to interact and change based on the wishes of consumers. Therefore, at the time of purchase, the AI programme has only the potential to develop into a dangerous or harmful product in response to the consumer's own use [20, p. 1213].

As for liability for damage caused by a defective product, as established by the Product Liability Directive, pursuant to Article 1 of this Directive the manufacturer shall be liable for damage caused by a defect in its product. "Manufacturer" in the Directive means the manufacturer of the finished product, the manufacturer of any raw material or of a component, and any person who, by putting their name, trademark, or other distinctive feature on the product, presents themselves as its manufacturer. A "manufacturer" is also any person who imports goods into the Community for sale, hire, leasing, or any form of distribution in the course of their business, and who is therefore responsible as a manufacturer. Thus, the "manufacturer" of AI in the sense of the Product Liability Directive will be the manufacturer of the finished product - the software - or a design engineer, if the defect stems from the design of the product. In this case, according to Susana Navas, the designer could be personally responsible for the damage caused, as the "manufacturer of a component" of the robot [21, p. 81].

In accordance with the position of the European Parliament set out in the 2020 resolution, liability for damage caused using AI technologies should generally be borne by the operator of the AI system. This is explained by the fact that the operator controls the risk associated with the AI system, similarly to the owner of a car. Due to the complexity and interconnectedness of the AI system, the operator will in many cases be the first visible contact point for the victim. The term "operator" covers both the frontend operator and the backend operator, provided the latter is not covered by the Product Liability Directive. The frontend operator is an individual or legal entity that exercises a degree of control over the risk associated with the operation and functioning of the AI system and benefits from its operation. The backend operator is an individual or legal entity who, on a continuous basis, defines the features of the technology, provides data and essential backend support services, and therefore also exercises a degree of control over the risk associated with the operation and functioning of the AI system. Exercising control means any action of the operator that affects the operation of the AI system, and therefore the extent to which it exposes third parties to its potential risks. Such actions can affect the operation of the AI system from start to finish by determining the input, output, or results, or can change particular functions or processes within the AI system. If there is more than one operator, all operators shall be jointly and severally liable, with a right of proportionate recourse against each other. The proportions of liability should be determined by the respective degrees of control that the operators had over the risk associated with the operation and functioning of the AI system.

However, proceeding from the basic principle of the Product Liability Directive that the manufacturer is liable for losses caused by a defect in goods that it has put into circulation, the manufacturer's responsibility is essentially strict liability, since imposing on the manufacturer the obligation to compensate for damage caused by a defect in the goods does not require proof of its guilt. Therefore, by and large, this still comes down to two regimes of liability for damage caused using AI technologies: strict liability and liability on the principle of guilt.

Consequently, radically innovative approaches to compensation for damage caused using AI technologies have not yet been observed. At the statutory level, the idea of an “electronic person” as a participant in legal relations and a subject of legal liability was not supported. Legal liability for damage caused by artificial intelligence is based on conventional principles: strict liability and the principle of guilt.

CONCLUSIONS

The rapid development of digital technologies creates new opportunities, but it also creates new challenges. Protecting human and civil rights and freedoms and democratic values, and ensuring proper guarantees during the use of such technologies, is becoming a priority task of the state. The implementation of this task requires well-thought-out law-making activities and the coordination of national legislation with international legislation. Currently, the issue of legal liability for damage caused using artificial intelligence technologies is debatable. The idea of endowing artificial intelligence with legal personality and, as a result, recognising it as a liable party, although quite actively discussed in the scientific literature, has not been consolidated in regulatory documents. The resolutions adopted by the European Parliament consolidate the conventional principles of legal liability and compensation for damage caused using AI technologies: 1) strict liability; 2) liability based on the principle of guilt. The differentiation of these regimes is based on the risk assessment of AI systems, and the operator of such systems is determined as the liable party. Therefore, compensation for damage caused using high-risk AI technologies is the operator's obligation, regardless of where the operation takes place, whether it occurs physically or virtually, and whether the operator is guilty of causing the damage. If the damage is caused by devices or processes controlled by AI systems that are not classified as high-risk, compensation for such damage should be made under the rules of liability for guilt. In this case, the operator's proof that it has taken all reasonable measures to avoid the damage will release it from liability for damage caused using AI technologies. For subsequent research, it would be interesting to study the issue of insurance of liability for damage caused using artificial intelligence technologies, from the standpoint of preventing and/or compensating for such damage.

REFERENCES

[1] Stefanchuk, M.O. (2020). Civil legal personality of individuals and features of its implementation. Kyiv: Artek.

