
The responsibility assessment in the scenario of driving AI-based vehicles

D. A. Bulgakova, V. S. Stupnik

Abstract

Vehicles with artificial intelligence (AI) can process data independently, but they need human assistance to make decisions and to control their behaviour on the road. As these technologies develop, it becomes difficult for humans to predict the outcomes of AI systems and their inferences, which increases the potential risks associated with their operation while driving. The study shows that the introduction of artificial intelligence has revolutionised the automotive industry, particularly with respect to cars that do not require a driver. Although such an introduction is convenient for users, there are risks of road incidents with negative consequences. In view of this, the study aims to clarify the question of separating responsibility by assessing whether the AI system has full autonomy or not, since in the event of an accident a dispute may arise over who should be held liable.

The article therefore proposes a direction for the legislative regulation of liability in the event of a road accident involving autonomous driving. The authors argue that liability should be proportional to fault, assessed against the behaviour of the user and the developers of the autonomous vehicle respectively, taking into account the risk-control measures they have adopted. The authors stress both the need for careful design planning and end-stage management during the testing of vehicles equipped with AI systems, and the need for their planned integration into road traffic. In addition, the article highlights legislators' uncertainty on the issue under study and draws the attention of all road users to the importance of complying not only with laws and rules but also with the ethical use of autonomous vehicles.

Key words: state regulation, legislation, autonomous cars, intelligent process automation (IPA), algorithms, permissible risk, cause-and-effect relationship, ethics.

Abstract

Bulgakova Daria (Булгакова Дар'я Анатоліївна) - Advocate, Member of the Ukrainian National Bar Association, Ph.D. in International Law; Visiting Scholar and Researcher, Law Department (Munken), Uppsala University, Uppsala, Sweden;

Stupnik Victoriia (Ступнік Вікторія Сергіївна) - Pedagogue-Methodologist of the Highest Category, Supervisor of Scientific Manuscripts on History and Law; Lecturer, Gymnasium No. 91, Kryvyi Rih, Ukraine.

THE RESPONSIBILITY ASSESSMENT IN THE SCENARIO OF DRIVING AI-BASED VEHICLES

Artificial intelligence (AI) machines can process data independently, but they need human assistance to make decisions and control their behaviour. As AI technology progresses, it becomes difficult for humans to predict the outcome of its calculations and inferences, which increases the potential risks associated with its operation. The introduction of AI has revolutionised the automotive industry, especially with the development of self-driving cars. While they offer convenient transportation, accidents may occur; it is therefore necessary to distinguish whether the AI system has autonomy or not, because in the event of an accident it may be controversial who should be held accountable.

Consequently, this article aims to explore the direction of future legislative policy in the event of an accident involving AI, with a focus on self-driving cars in the automotive industry. The authors propose that liability be apportioned in proportion to an assessment of the behaviour of the user and the developers respectively, combined with risk-control measures at the development stage, underscoring careful planning and management in the real-world testing of unmanned vehicles as a precondition for integrating AI into daily life. Furthermore, the article highlights the uncertainty about appropriate laws to regulate self-driving vehicles and argues for complying not only with laws and regulations but also with ethical standards in development and use.

Key words: state regulation, legislation, self-driving cars, intelligent process automation (IPA), algorithms, permissible risk, cause-and-effect relationship, ethics.

Problem statement

As the legal system related to artificial intelligence continues to evolve, it will provide a framework to regulate human behaviour in its development and use, preventing disputes and harmful outcomes. The situation resembles pesticides, which protect crops and increase yields while also having the potential to harm human health and the environment. In response, lawmakers created legal frameworks, such as pesticide management and food safety and sanitation laws, to maximise the benefits of pesticides while mitigating potential harms. Similarly, legislators must continue to refine their understanding of artificial intelligence and create legal norms that maximise its benefits while minimising potential risks.

The development of unmanned vehicles represents a significant breakthrough in human transportation.

Unlike traditional human-controlled vehicles, unmanned vehicles rely on AI systems to control their movement and operation. Currently, the legal systems in many countries remain limited, as traffic regulations still take human driving as their starting point, and the establishment of a comprehensive legal system for artificial intelligence remains a goal for legislators. The analysis by Salmanova, O. Yu., and A. T. Komziuk [13] shows that the legal regulation and practice of administrative penalties for violations of the rules on stopping and parking vehicles need further improvement, primarily in terms of ensuring the rights of those prosecuted. Unmanned vehicles refer to various means of transportation that operate under remote control or automatic operation, including autonomous technology such as self-driving cars. The purpose of the relevant regulations is to promote the development of unmanned vehicle technology and create a safe environment. To fully realise the potential of unmanned vehicles, real-world testing is necessary. On 17 May 2018, the Commission adopted an EU strategy on automated and connected mobility. As part of the strategy, the Commission announced its intention to work with Member States in 2018 on guidelines to ensure a harmonised approach to the exemption procedure for the EU approval of automated vehicles, and issued the Guidelines on the Exemption Procedure for the EU Approval of Automated Vehicles, Version 4.1, which were subsequently supported by the Technical Committee on Motor Vehicles of 12 February 2019. According to the guidelines' Annex I (Information to be Provided by the Vehicle Manufacturer), for the safety assessment and testing, the design and validation process is to be validated by the technical service and confirmed by the approval authority: (i) assessment of the functional and operational safety of the automated system design; (ii) tests of the functionality; (iii) tests in case of system failure, covering the measurement equipment used, the tests conducted by the technical service/type-approval authority, and a description of in-use tests.

This requires careful planning and management, including safety protocols, site management, and experimental handling and reporting for all types of unmanned vehicles, among which self-driving cars hold the most promise. While the development of unmanned vehicles is promising, more work is needed to make them a viable option for the daily transportation of all of humanity, and the vision of a world where people can effortlessly travel in self-driving cars remains unrealised.

Material and Methods

The selected theme is still at an experimental stage and, therefore, should be assessed against the needs of traffic management, taking into account unfavourable conditions such as crowds or traffic tides. Before self-driving car technology matures to the point where it can leave the experimental field, run in the streets and alleys, and do so safe and sound, traffic regulations such as the highway law and road traffic management regulations must be amended accordingly. During the test period of a vehicle, some regulations may be excluded from application; otherwise, there is a risk of punishment under business regulations, or it must be made clear that the relevant regulations do not apply to innovation experiments. Given the inherent unpredictability of technological innovation, certain provisions must be taken into consideration. In principle, during the testing phase, applicants who engage in innovative experiments within the parameters approved by the competent authority may do so without violating applicable laws, regulations, orders, or administrative rules, unless expressly excluded by the approval decision. In such cases, the competent authority is responsible for notifying the applicant of the exclusion. The rules that may be excluded from application include road traffic management punishment regulations, the highway law, civil aviation and ship law, telecommunications, and other related laws and regulations, as well as laws related to the research, development, and application of unmanned vehicle technology.

In the view of Bohm, Felicia, and Klara Hager [1], it is not possible to introduce completely self-driving cars under today's laws and traffic rules. The European Union is responsible for establishing common rules, and the United Nations Economic Commission for Europe (UNECE) establishes the technical requirements for vehicles. At the national level, the Swedish Transport Agency, the Transport Department, and local authorities develop the design of future infrastructure. These actors study the social benefits of autonomous driving in terms of performance, robustness, urban development, environment and health, usability, and safety. Stakeholders in Sweden highlight that road safety will increase when human errors are reduced through the introduction of autonomous vehicles. Which requirements, regulations, and other policy instruments need to be changed is an essential question for making implementation possible, and the Transport Agency needs to continue to increase its knowledge of and participation in the matter. There are laws saying that someone must be responsible for the safety of the vehicle. The Road Traffic Convention (also known as the Vienna Convention on Road Traffic) demands that every vehicle have a driver and that the driver at all times be able to take control of the vehicle. The regulations also state that vehicles must not be driven by a person who, because of illness, the influence of alcohol or other drugs, exhaustion, or other reasons, cannot drive a vehicle safely. If an accident occurs, whether through negligence or intention, it is the driver who is responsible and brought to justice. Autonomous cars still require human observation, and the difficulty of getting full attention back from a distracted driver is an issue: when the driver has the opportunity to relax and not be aware of the situation on the road, road safety may decrease.

However, it should be noted that not all rules may be excluded. Provisions related to money laundering prevention, terrorism prevention, and related laws cannot be set aside, and civil and criminal responsibilities that may arise from experimentation cannot be ruled out. Nevertheless, such provisions may be too broad and leave room for further review.

The issue with the previous statement is that it fails to consider the different types of unmanned vehicles, such as remote-controlled, autonomous, and manually operated ones. These types of vehicles differ in essence, one being human-driven and the other unmanned. Remote-controlled unmanned vehicles are still driven by humans, but the human driver is not physically present in the vehicle. If a remote-controlled, non-self-driving unmanned vehicle is involved in a fatal accident during an experiment, the fault lies with the human driver at the remote end who violated their duty of care. If a causal relationship is established between their behaviour and the outcome of death or injury, the human driver may be held accountable. Fully unmanned vehicles, by contrast, are operated by autonomous systems without human drivers, making them self-driving transport. If a fatal accident occurs during an experiment with an autonomous unmanned vehicle, it is impossible to hold a human driver responsible, as there is no human driver present. However, it is important to note that during these experiments, a driver's seat is typically still present in the vehicle, occupied by staff for safety purposes. If the AI operating the vehicle makes an error during the experimental period, the researcher present can take immediate control to avoid an accident. It is essential to understand that this person is not a driver but rather a researcher.

Furthermore, the developer must comply with all legal requirements and obtain approval from the competent authority for unmanned vehicle experiments, including compliance with legal norms. The authors suggest that the competent authority also assess whether the incorporation of regular software updates in vehicles can give rise to problems. One question that arises in this context is whether such updates automatically classify the entire autonomous vehicle as a new product, regardless of the nature of the updated software. The European Commission stresses in its Notice 2016/C 272/01, "The 'Blue Guide' on the implementation of EU products rules 2016", that a product which has been subject to important changes or overhauls aiming to modify its original performance, purpose, or type may be considered a new product, whereas products that have been repaired or exchanged without changing the original performance, purpose, or type cannot. The Commission also stipulates that software updates or repairs can be assimilated to maintenance operations if they do not modify a product already placed on the market in such a way that compliance with the applicable requirements may be affected. For a software update not to result in a new product being put into circulation, the updated vehicle must not undergo any modifications that would require a full conformity assessment to evaluate the product's risk profile and ensure the safety of persons and property. Whether an autonomous vehicle is considered new after a software update therefore depends on whether the update altered the vehicle's "traffic behaviour" to a significant extent. According to De Bruyne, Jan, and Jarich Werbrouck [5], if, on the one hand, the self-driving car is indeed considered a new product after a software update, a new ten-year expiry term starts from the moment the autonomous vehicle is put into circulation again, namely after the software is installed. This new expiry term also applies to parts of the vehicle that were already put into circulation before the software update but that are "re-put" into circulation as part of the new vehicle as a whole. Imagine the following situation: a consumer buys an autonomous vehicle in 2018. The brakes are part of the vehicle and are thus already put into circulation by that moment. In 2027, the software is updated, and a new ten-year expiry term arises for the updated autonomous vehicle. In 2036, damage is caused by a defect in the brakes, which have not been changed since 2018. The producer must then compensate damage caused by defective brakes eighteen years after they were initially put into circulation, without their having been changed or modified since, because the vehicle is considered a new product put into circulation after the software update. This undermines the effectiveness of the ten-year expiry term, as the producer can still be held liable for a defect in his product that has existed for more than ten years. On the other hand, one can start from the assumption that the updated autonomous vehicle is not considered a new product put into circulation. Yet even in that hypothesis, problems can arise: ten years after the original product was put into circulation - the self-driving vehicle in 2018 - a liability vacuum risks occurring. As opposed to many products such as bottles, cell phones, or laptops, autonomous vehicles will probably be used for longer than ten years. Deciding otherwise would mean a throwback compared to today's average car age, which is approximately twelve to fifteen years.
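To make the two readings concrete, the following minimal Python sketch encodes the expiry-term logic from the example above; the function and parameter names are illustrative assumptions, not a statutory algorithm.

```python
# Hedged sketch of the two readings of the ten-year expiry term discussed
# by De Bruyne and Werbrouck; names and structure are assumptions made
# for illustration only.

TERM_YEARS = 10  # expiry term after a product is put into circulation

def liability_expired(circulation_year: int, damage_year: int,
                      update_years: list[int], update_renews_term: bool) -> bool:
    """Return True if the producer's liability has expired by damage_year."""
    start = circulation_year
    if update_renews_term:
        # Reading 1: each software update "re-puts" the whole vehicle,
        # including unchanged parts such as the brakes, into circulation.
        for year in update_years:
            if year <= damage_year:
                start = max(start, year)
    return damage_year - start > TERM_YEARS

# Worked example from the text: vehicle bought in 2018, software updated
# in 2027, brake defect causes damage in 2036.
print(liability_expired(2018, 2036, [2027], update_renews_term=True))   # False: producer still liable
print(liability_expired(2018, 2036, [2027], update_renews_term=False))  # True: liability vacuum
```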

Hence, the developer cannot be held responsible for accidents during the experiment. It is argued that autonomous intelligence is a rational choice of human beings and a trend in human civilisation. This does not amount to impunity: developers must still abide by relevant laws and regulations and bear legal liability if they fail to do so. It is essential to maintain this bottom line of independent innovation. On the other side, the implementation of autonomous systems can be highly unpredictable, making developers worried about being blamed or even facing legal consequences for accidents or injuries. To alleviate these concerns, it is necessary to focus on the preparation of relevant laws and regulations and on improving the assessment capabilities of competent authorities. In terms of legislative measures, it is important to distinguish between unmanned vehicle experiments with remote control and those with automatic operation. Specifically, during the experimental period, the developer should not bear responsibility unless there is a violation of the approved experimental plan; developers who comply with the law should not be held accountable for accidents or injuries that occur during experiments. However, in the case of violations of the approved experimental plan, administrative responsibility should be imposed within a reasonable limit. The prosperity of AI depends on encouraging talent and funding through relevant laws and regulations, leading to the creation of job opportunities and output value. Technology products, such as self-driving cars, can greatly benefit society when they have a high penetration rate and a mature environment. This lays the foundation for the integration of autonomous intelligence into daily life and for moral considerations.

Moral responsibility may exist even where there is no legal responsibility. The establishment of legal responsibility can affect our moral evaluation of something: if an action is both immoral and illegal, the legal responsibility reinforces our judgment that the action is immoral. However, legal responsibility does not necessarily depend on moral responsibility. Ordinarily, for an action to be legally culpable, it is also morally reprehensible; if an action is morally permissible, stronger reasons must be given for establishing legal responsibility. Firstly, civil disobedience is a good example. At first glance, it may seem no different from other illegal acts, such as blocking traffic. However, what makes an action civil disobedience is that it has some moral legitimacy or reasons supporting it, which raises questions about whether severe legal punishment is appropriate. Therefore, the legal treatment of civil disobedience must be more rigorous and careful. Of course, this is a complex and debatable issue, with many opposing viewpoints at both the abstract and concrete levels. Secondly, the problem is not only that a machine, having no consciousness, cannot feel responsibility-to, cannot recognise a morally relevant relation, and cannot recognise others as others, but also that humans will perceive the car and its actions as "machine" actions; that is, they will not at all recognise that car and its machine driver as "other" [3]. This means that, in the case of contemporary cars, drivers already feel less responsibility-to, and in the case of self-driving cars, the condition for relational responsibility is entirely lacking (ibid.). Unless the car is perceived as other, human drivers who encounter the machine car will be unable to relate to it in a morally relevant way, and social-relational autonomy cannot get off the ground (ibid.).

Can factors like morality affect the decisions of self-driving cars? From a moral standpoint, the factors that influence decision-making can be seen in examples like the trolley problem. (The Mercedes-Benz Group holds responsibility for advanced assistance systems. In 2016, its manager, Christoph von Hugo, became the first to state a position on unmanned driving: in the event of an accident, the safety of the driver and passengers takes priority, followed by pedestrians; see Morris, David Z., "Mercedes-Benz's Self-driving Cars Would Choose Passenger Lives Over Bystanders", FORTUNE, 15 October 2016.) In 2020, Tripat Gill tested and verified key phenomena of moral decisions and judgments. As in study 1, participants were more willing to choose harm to a pedestrian (hypothesis 1a) and considered this action more appropriate (hypothesis 1b) with autonomous vehicles (AV) than when they themselves were the agent in control. Moreover, as proposed in hypothesis 1c, this effect was mediated by the lower responsibility for the consequences that participants perceived when the car, rather than they themselves, was in control. Note that while the majority of participants still chose to swerve (and avoided harm to the pedestrian), the odds of choosing to stay (harm to the pedestrian) were about four times higher in the AV condition than in the self-as-agent condition [7].

Different situations can alter the moral priority of decision-making. Legally speaking, if we assume a most basic situation, a self-driving car may choose to hit or to swerve and kill, which, in itself, may be morally neutral behaviour. In this scenario, when faced with two options that both result in harm, the decision-maker must determine which action takes priority; in other words, an algorithmic understanding of the consequences of the chosen action must be developed. For instance, in the case of a tram that would run over five people if it went straight, the driver may choose to hit and kill one person on the right - a student trespassing on the track - even though the student is innocent, to prevent harming more people. This decision, while morally questionable, may be deemed necessary to preserve the greater good or interests of the group. However, in cases where no additional conditions are set at the outset, legal penalties must be supported by a stronger reason: punishing a decision-maker for making a moral mistake, such as choosing option B over A, is not sufficient justification. In practice, the car manufacturer's system may add multiple filters that complicate the judgment of the moral attributes of a decision. As such, it is essential to pay close attention to how these factors affect decision-making in the realm of driving.

Practically speaking, the self-driving car's sensor design is becoming increasingly sensitive, with, for instance, three types of sensors: camera, lidar, and radar. According to the California PATH Research Report, sensing consists of gathering information about the external environment and the internal system to build a model, called a world model, that represents and describes the vehicle, its surroundings, and the relationships between them. Depending on the system function, this model may include the external environment, such as road conditions, weather, and traffic; vehicle-performance characteristics, such as velocity, heading, and tire pressure; and even the behaviour of vehicle occupants, such as the driver's eye movements, seat-belt use, and passenger weight distribution. In a fully autonomous vehicle, all of this information may need to be incorporated simultaneously into a complete world model. There are challenges associated with sensing, particularly for systems that perform complex or multiple functions. First, individual sensors are limited in what they can detect or measure and depend on favourable environmental conditions; consequently, systems may need many different sensors to gather all the required information and to provide redundancy and increased reliability. These sensors generate a tremendous amount of data per second. Second, the vehicle must be able to process the data fast enough to avoid a backlog of information. Third, some of the data will be good data (e.g., colour information from cameras in the day) and some not (e.g., colour information from cameras at night), and the system must be able to recognise the difference. Fourth, data from different sensors or gathered at different times may conflict; algorithms must reconcile contradictions in the data and, in the end, create a complete model that is accurate enough to enable the vehicle to drive safely and efficiently [9].
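The following minimal Python sketch illustrates the last two challenges - recognising degraded data and reconciling conflicting readings - under assumed sensor names, confidence weights, and a simple weighted-average fusion rule; it is not the PATH report's method.

```python
# Illustrative sketch of reconciling conflicting sensor data into one
# "world model" estimate. The sensor names, confidence weights, and the
# fusion rule are assumptions for illustration, not a production algorithm.
from dataclasses import dataclass

@dataclass
class Measurement:
    sensor: str        # "camera", "lidar", or "radar"
    distance_m: float  # estimated distance to the obstacle
    confidence: float  # 0..1, degraded under unfavourable conditions

def degrade_for_conditions(m: Measurement, is_night: bool) -> Measurement:
    # Example of telling "good" from "bad" data: camera information is
    # less trustworthy at night, so its confidence is reduced.
    if m.sensor == "camera" and is_night:
        return Measurement(m.sensor, m.distance_m, m.confidence * 0.3)
    return m

def fuse_distance(measurements: list[Measurement], is_night: bool) -> float:
    # Reconcile contradictory readings with a confidence-weighted average.
    ms = [degrade_for_conditions(m, is_night) for m in measurements]
    total = sum(m.confidence for m in ms)
    return sum(m.distance_m * m.confidence for m in ms) / total

readings = [Measurement("camera", 42.0, 0.9),
            Measurement("lidar", 38.5, 0.95),
            Measurement("radar", 39.0, 0.8)]
print(round(fuse_distance(readings, is_night=True), 1))  # lidar/radar dominate at night
```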

Lidar can provide a specific distance and image, but what if the pedestrian, or the pedestrian's wearables, could communicate with the machine? The self-driving machine could then draw on its sensors to find salient characteristics of the pedestrian, such as build or gender. While judging images may not be a problem now, an initial misjudgement can lead to accidents; in other words, a false positive can cause a misjudgement that results in an accident. As more characteristics are considered, we must ask whether they change the moral weight of decisions A and B. This is where collision ethics comes in. We cannot ignore characteristics such as gender, age, or wearables and how they affect the moral permissibility of turning or going straight. We do not know how any given collision-ethics system was designed, including its default rules and priority rules. As wearable devices become more prevalent, we must consider their impact on decision-making. Overall, we need to think carefully about the implications of collision ethics and the impact of different characteristics on decision-making in self-driving cars.

While legal practitioners may focus more on legal responsibility, the law assumes that behaviours deemed criminal are also morally reprehensible. However, in cases such as the trolley problem, where the behaviour may not be morally reproachable but still calls for legal punishment, other reasons must be given to justify the punishment. Therefore, when considering punishment for actions in the trolley problem, careful thought must be given to the reasons for punishing the tram or its manufacturer. In one scenario, if the manufacturer of the tram is to be held responsible, it is possible that it included ethical rules or features in the collision process that are morally unacceptable. For instance, we may not allow annual income to be a consideration in collisions, as it prioritises one life over another based on income; if the manufacturer included income as a factor, that would be morally reprehensible, as it creates a priority index that is not justified. However, if the decision-making process uses features like whether a person is pregnant, as determined by a computer system, it may be morally acceptable. Some features are not clearly justified, and opinions differ on whether they are morally acceptable. But what if the place of residence is used? For example, wearable devices and zip codes can indicate whether a person is from a high- or low-income area, and the tram's collision decision could be made accordingly. We may feel that this is morally wrong and should not be done. In this case, if there is a legal penalty, we need to examine the elements that the manufacturer included in the design that do not meet our moral standards. It is not just about punishing the manufacturer because an accident happened; it is about the ethical considerations that went into the design. In other words, the designer of the system may be making decisions about how to prioritise human life, and this may involve ranking and ordering. The issue, however, is not necessarily whether human life can be measured or valued, but whether the factors used to prioritise human life are morally justifiable. Designers are likely considering what factors to include in the system, but the justification for these factors is a major concern. Currently, it is unclear what factors are being used, but it is important to consider that designers may be aiming to create machines that can make decisions for us. Therefore, the focus should be on ensuring that the factors used in the system are morally justifiable.
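To illustrate the design question in code form, the sketch below filters detected characteristics through an explicit whitelist before they can enter a collision decision; the feature names and the contents of both lists are purely hypothetical assumptions, since - as argued above - deciding what belongs on each list is exactly the moral problem.

```python
# Hypothetical sketch only: which characteristics may enter a
# collision-ethics decision at all. The whitelist/blacklist contents are
# assumptions for illustration; justifying them is the moral question
# the article raises, not something this code resolves.
PERMITTED_FEATURES = {"number_of_people", "is_pregnant"}      # arguably justifiable inputs
FORBIDDEN_FEATURES = {"annual_income", "zip_code", "gender"}  # would create unjustified priority indices

def filter_decision_input(raw_features: dict) -> dict:
    """Keep only whitelisted characteristics; forbidden and unknown ones are dropped."""
    return {k: v for k, v in raw_features.items() if k in PERMITTED_FEATURES}

detected = {"number_of_people": 1, "annual_income": 120_000, "zip_code": "10001"}
print(filter_decision_input(detected))  # {'number_of_people': 1}
```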

Results and Discussion

The autonomy of the car is most obvious in the face of road emergencies. In other words, when a self-driving car is running and encounters an unexpected situation on the road, whether it should brake urgently, turn right, or turn left relies on the AI performing calculations according to the current situation (data). Thus, when the self-driving car faces an unexpected road situation, whether to brake or turn is no longer predictable by human beings but is independently determined by the AI system. The embodiment of intelligence in vehicles is still automatic, or partially autonomous; yet, in any scenario, users should still pay attention in view of possible death, injury, or other levels of wrongdoing (see [16]). Commonly, the behaviour of the user is the object of administrative evaluation. For example, when a vehicle is equipped with a driver-assistance system, such as an automatic cruise control system or a partially autonomous adaptive (active) cruise control system, drivers who do not pay attention to the situation in front of the car and cause an accident should be held responsible. Gavriel Salvendy and June Wei found a coordination relationship between the longitudinal acceleration and the lateral motion of the vehicle, based on the steering behaviour of experienced drivers and the vehicle's movement state. Based on this coordination relationship, a human-computer driving control system reduces the difficulty of driving in curves and assists the driver in controlling the longitudinal acceleration according to the driver's steering operation. By comparing acceleration changes and steering angles with and without the cooperative control system, the feasibility and effectiveness of the control system for reducing the difficulty of driving in curves were confirmed [14].
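A toy Python sketch of that cooperative-control idea follows: the longitudinal acceleration requested by the driver is moderated as the steering angle grows. The gain and the linear scaling rule are illustrative assumptions, not parameters from the cited study.

```python
# Illustrative sketch assuming a simple linear coupling between steering
# angle and permitted longitudinal acceleration; the cited study's actual
# coordination model is more elaborate.
def assisted_acceleration(requested_accel_mps2: float,
                          steering_angle_deg: float,
                          gain: float = 0.02) -> float:
    """Scale the driver's requested acceleration down as steering increases."""
    scale = max(0.0, 1.0 - gain * abs(steering_angle_deg))
    return requested_accel_mps2 * scale

print(assisted_acceleration(2.0, 0.0))   # 2.0 m/s^2 on a straight road
print(assisted_acceleration(2.0, 25.0))  # 1.0 m/s^2 in a tighter curve
```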

On the other hand, when a car is equipped with an autonomous driving system and is controlled by that configuration, the AI is the driver and the humans in the car are passengers who, in principle, do not need to pay attention to the situation in front of the car. If a car accident occurs, how to assign accountability may then become a problem.

In principle, the responsibility of the driver depends on his behaviour and on whether he is at fault intentionally or negligently; but if it is the AI system that caused the tragedy, it is unclear how to rule on the situation, given the system's considerable degree of autonomy (see Okuhama, Masaki, Replacement Driver Service Agent Retrieval System and Replacement Driver Service Agent Retrieval Program, 2013). Therefore, when the operation of an autonomous system causes death, injury, or another related act, who is responsible for the unlawful infringement remains controversial. In criminal law, the question of when AI could bear criminal responsibility and punishment is a relatively new one. Looking ahead, when a fully autonomous system emerges, this practical situation may arise; but under current technology, the framework of criminal law theory still seems appropriate: the object of legal evaluation is placed on human behaviour bearing the essence of wrongfulness. In the scenario of a self-driving car incident, what may be evaluated is the behaviour of the user or the developer.

The authors of this article suggest using the term "user" instead of "driver" when referring to individuals in relation to autonomous self-driving cars. The reason is that in a fully automated self-driving car, there may not be a "driver" in the traditional sense. Instead, the person using the self-driving car should be viewed as the user of an object that they legally own and use for transportation purposes. If someone is using an autonomous taxi or another transportation service, their status as a user does not change and is viewed in the context of a passenger. The car's sensors and other technologies will monitor the driving environment and make all necessary driving decisions without human input. However, responsibility for any accidents or incidents involving the car will be evaluated based on the behaviour of the parties involved and their entitlements in relation to the self-driving car. In cases where human interaction is required, such as programming a destination or adjusting the climate control or entertainment system, responsibility will be based on the extent of the rights the user has been given over the self-driving car. In emergencies where the automated system needs to be overridden, the user will also be responsible for any actions taken. Regardless, the authors suggest that the term "user" should be used to reflect the changing status of individuals in relation to autonomous self-driving cars. This approach acknowledges that these vehicles represent a new paradigm in transportation and that traditional roles like "driver" may no longer be appropriate.

By contrast, Volvo presents information about the car's functions, apps, settings, and user profiles, but this does not change the fact that the driver is still the user of the vehicle. The status field displays the active user profile, network and connection information, and the clock. The user profiles allow the driver to personalise the car's settings and functions according to their preferences. The centre display views are designed to give the driver easy access to information and functions that can enhance the driving experience, but they still do not replace the driver's role as the user of the vehicle (Volvo, Centre display views, updated 18.10.2022).

Hence, the behaviour related to the operation of an autonomous system comprises the user's behaviour and the behaviour of the developers. The user's behaviour is relatively simple: as far as the autonomous AI system is concerned, it consists mainly of starting the system. Once the system starts, based on its autonomy - unless the user makes a special demand, exercises a right, or stops the system - the entire operation is the independent judgment of the AI. In contrast, the developers' behaviour or intent is much more complicated, because the development of self-driving vehicles is not a single industry but involves an industrial chain operated by many enterprises, including appropriate experiments on operation.

Behavioural evaluation of the user

Under this research, when the operation of an autonomous system involves administrative lawlessness, since the system is "activated" by the user, the user's activation behaviour may be subject to administrative evaluation. In contrast, under criminal law there is, in principle, no crime: when the user employs the AI correctly, any result satisfying the constituent elements is subjectively neither intentional nor foreseeable. Insofar as there is no duty of care, there is no negligence. It can be argued that there is no direct cause-and-effect relationship between the user's actions and the harmful outcome; in other words, the absence of a particular user action does not necessarily mean that the outcome would not have occurred.

In the view of the authors, although the activation behaviour of the user may appear to cause an infringement, in reality the operation of the autonomous system is determined by its algorithm. Even if a result satisfying the constituent elements is produced, it is ultimately attributable to the AI, and the user is not responsible for the start-up behaviour. This concept is akin to interrupted causality. Consider a scenario where users in a self-driving car start the vehicle and then engage in leisure activities like reading or watching movies. In such a case, under this study, the AI system is responsible for driving, and users are not held accountable if an accident occurs, owing to their trust in the safety conformity of the self-driving car (the product). Therefore, from the perspective of a significant causal relationship between user behaviour and autonomous artificial intelligence, it is difficult to establish a corresponding relationship between the constituent elements and their results. Nonetheless, after activating the autonomous AI, users expect safe and smooth operation; failure to meet this expectation would make it difficult for AI products to gain market traction - especially without a "value-adding result", because that is ultimately the job the user is hiring the product to do: companies don't hire Google Analytics to collect and show their website data, but rather to get actionable insights on what they can do better [11]. Therefore, based on the cause-and-effect relationship and the objective attribution test, it can be argued that since the AI has been legally placed on the market, the user's activation behaviour does not create any risks that are not permitted by law, and thus the user should not be held accountable.

On the other side of users' fault, users can defend their actions based on the principle of reliance and the principle of trust. M. Konig and L. Neumayr [8] describe the results of a study on people's knowledge of and attitudes toward self-driving cars. The study found that most participants had heard of self-driving cars, but their level of knowledge was limited. Overall, people tended to have positive attitudes toward self-driving cars, although women, rural residents, and older participants were less positive. People who used their cars more often were also less positive toward self-driving cars. Young people were more open to the idea of driverless cars than older people. While people were open to the idea of riding in a self-driving car, they were less interested in buying one, and car-sharing schemes were more appealing. Control over the vehicle was a concern for many, with the option to take over in emergencies being highly desired. In legal theory, the punishment for negligence is restricted when the perpetrator has fulfilled their duty of care. This is especially relevant in modern societies where labour is highly specialised. If individuals have taken all reasonable precautions, according to the study, they should not be held responsible for any harm caused by their behaviour. When it comes to using smart systems, users have a duty of care as long as the system is legal and compliant. However, it can be challenging for ordinary users to detect any abnormalities in AI driving, and they must rely on legal, factory-made products, follow regulations, and depend on brand-extension success factors. Eggers, Felix, and Fabian Eggers [6] find that the importance of brand-extension success factors differs between parent-brand categories. When renting a self-driving car from a technology brand, consumers rely more on experience and capability fit. This could be affected by the process of renting a self-driving car, which will likely involve software with payment capabilities that consumers may have experienced from these companies, leading them to consider the companies capable. This notion is supported by the finding that experience with an automobile brand matters more when buying a car; such experience might originate from showrooms or test drives at a dealership, which play a more important role in the purchase context.

Conversely, if users intentionally misuse autonomous systems or violate their duty of care when using them, they can be held legally responsible for their behaviour. Intentional abnormal behaviour could include modifying or altering AI programs or machines without proper authorisation, or illegal intrusion into, interference with, or destruction of the software and hardware that maintain the system's operation. Furthermore, if the AI system issues a warning and the user disregards it, the user may be liable if a tragedy occurs. Hence, depending on the circumstances of the case, users may be held liable for intentional or negligent actions. However, it is important to note that despite the significant progress in AI development, fully autonomous systems are not yet used in everyday life; and although self-driving cars are among the most popular technologies in development, no one can say for sure when they will be fully and officially launched. Notably, the principle that violations and punishments must be stipulated by law still applies to users, even if their behaviour may be debatable; any accountability for their actions must be established on the basis of complete administrative law norms.
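A condensed Python sketch of the user-evaluation rules from this subsection follows; the predicate names are hypothetical, and the article states only the principles (correct use of a legally marketed system excludes liability; ignored warnings or unauthorised modification ground it), not this exact logic.

```python
# Hedged sketch of the user-liability evaluation discussed above; the
# predicates and the decision order are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserConduct:
    used_as_approved: bool               # normal use of a legally marketed system
    ignored_system_warning: bool         # disregarded an explicit AI warning
    modified_without_authorisation: bool # altered programs or hardware

def user_liable(conduct: UserConduct) -> bool:
    if conduct.modified_without_authorisation or conduct.ignored_system_warning:
        return True  # intentional or negligent breach of the duty of care
    # Correct use creates no legally impermissible risk; causation is
    # attributed to the autonomous system, not to the activation behaviour.
    return not conduct.used_as_approved

print(user_liable(UserConduct(True, False, False)))  # False: reading while the AI drives
print(user_liable(UserConduct(True, True, False)))   # True: a warning was disregarded
```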

The evaluation of the developer's behaviour

When accidents occur during the operation of an autonomous driving system, the behaviour of the developer should be carefully examined in addition to that of the user. This is because the operation of the AI is determined not by the user but by the developer's pre-set programming. The developer's behaviour therefore plays a crucial role in the configuration of the system.

Software and its capacity to analyse large amounts of data are the key to the operation of the AI [15]. Therefore, when the system is implicated in an assessment of lawlessness, the behaviour of the developer becomes the focus of discussion. Logically, when the developer intentionally manufactures technology for a purpose forbidden by law - much like intentional cyberattacks in criminology - the developer's behaviour will be punished as a crime. In the era of self-driving cars, vehicles are interconnected with other devices, such as other vehicles (car-to-car) and transportation facilities, through the Internet of Things. While such interconnections bring conveniences, such as traffic information and online services, they also increase the possibility of cybercriminals attacking self-driving systems and remotely altering the mode of driving. This raises concerns about the sufficiency of current cybercrime laws and the potential for self-driving cars to be used as tools by terrorist organisations. Moreover, the immunity from liability currently enjoyed by network providers may need to be reconsidered in order to hold them accountable for strengthening road infrastructure and protecting vehicles on the road. It is essential for all stakeholders, including lawmakers, vehicle manufacturers, and network providers, to work collaboratively toward developing appropriate laws and regulations that address the challenges of cybercrime and the responsibilities of internet providers.

However, when the developer has no criminal intent but the system operates in a way that causes death, injury, or other violations of criminal law, the evaluation of the developer's behaviour becomes complex. For instance, in the case of a self-driving car accident causing death or injury, it becomes necessary to examine the behaviour of the developer and determine the elements of a crime. Typically, the concept of permissible risk [2] is used to rule out wrongdoing by the developer without establishing a crime. This concept is widely supported by the academic community and frequently used in judicial practice, particularly in traffic accidents. However, for risky behaviour with a clear tendency to infringe, criminal law - in the authors' view - needs moderation, yet such behaviour should still be carefully considered. Distinctively, not all risky behaviour requires criminal law to intervene. The concept of tolerable risk [12] is used to distinguish between the risk and the scope allowed by the law. If the benefits to human society outweigh the harm, then behaviour that creates various risks will be judged based on the severity of the case: the lightest cases may not matter, moderate cases are subject to administrative penalties, and severe cases are subject to criminal penalties.

Driving vehicles has become an indispensable part of modern life, yet the potential risks must still be delimited for the law to apply. Legislators must weigh the severity of risks and regulate them through administrative or criminal law accordingly, using tolerable risk as a collective concept that evaluates the benefits and dangers of certain behaviour in the context of overall human interests and social development: behaviour that enhances human life and promotes social development, despite its considerable risk, should be allowed to exist in human society. This is particularly true for AI, whose promising inventions bring great benefits but also potential threats for which developers must bear responsibility. On the other side, the research holds that if developers were strictly liable for every risk, innovation could stop and human civilisation be hampered. Therefore, as long as the developer does not engage in any behaviour prohibited by law, adverse consequences should not be attributed to the developer.

Evaluation of deterrent behaviour

The previous discussion stated that neither the user nor the developer should be held criminally responsible in the event of an accident involving an autonomous system. While this view has positive effects, it may still be open to debate. As the capabilities of autonomous technology become more sophisticated, the scope of human authorisation may widen. This raises the question of whether the concept of permissible risk can be applied to the behaviour of developers. For instance, as technology continues to advance, humans may one day entrust the responsibility for public safety and security to machines equipped with autonomous systems. Such machines may need to exhibit a degree of aggression to fulfil their duties, but that behaviour could pose a significant risk to human life and freedom. While their use may be considered acceptable under normal circumstances, it is unclear whether developers should be held responsible for autonomous designs that exhibit aggressive behaviour.

The development of offensive AI for maintaining public safety is a complex and controversial issue. To carry out their duties effectively, security machines require a certain degree of offensive ability, similar to human police officers who are trained in martial arts and physical skills to respond to various situations. However, invoking the concept of permissible risk to excuse the potentially harmful behaviour of developers in this context may, in the authors' view, not be justifiable under the law. Based on the research, the concept of permissible risk is about the balancing of interests, which considers the overall impact of risky behaviour on society and its potential benefits. While this may seem a cautious approach at first glance, its actual implications are open to debate; one must compare the interests involved in order to determine whether a risk is permissible, as it is impossible to say which interest outweighs the other without doing so. At the same time, the concept of permissible risk lacks a clear definition and is relative to the interests of different subjects. Consequently, in the case of offensive intelligence, the balance of interests should be evaluated to ensure that public safety is maintained without infringing on individual rights and freedoms. Otherwise, the acceptance of established societal experiences or realities cannot be deemed accurate without clear standards; risks caused by satisfying the interests of one party cannot be ignored.

While some argue that tolerable risk is an inevitable part of living together, this does not clarify the concept's content. Additionally, social equivalence is a vague benchmark, making it difficult to use as a reliable operational criterion. The administrative law system expects rational behaviour from individuals, not the behaviour expected of various social roles; moreover, society comprises multiple and diverse roles, making it challenging to define a single behavioural expectation. Therefore, the application of the concept of tolerable risk remains uncertain, and the classification of a risk as permissible may depend on the accumulation of experience in individual fields of life or social practice, as well as on well-established customs. As stated, the permissibility of risky behaviour is often judged by whether it aligns with social norms and whether it benefits society. However, AI development differs significantly from past technological advances, and its impact on society may not be as easily foreseeable. The lack of established standards and habits for measuring benefits and risks in this emerging field makes it difficult to review the tolerability of risks. Furthermore, the concept of tolerable risk is closely related to objective attribution theory [10], under which the perpetrator's behaviour creates an inadmissible risk. Therefore, criminal law questions for autonomous systems, particularly those concerning deterrent behaviour, may not be fully resolved through the constitutive element of appropriateness; instead, it may be preferable to leave these questions to the level of illegality rather than hastily resolving them at the level of constitutive elements.

Autonomous systems represent a novel phenomenon in society. Lawmakers have taken notice of the changing landscape in transportation, as evidenced by the U.S. Department of Transportation's guidelines released in September 2017. These guidelines encourage states to begin considering the allocation of liability among the various parties involved in the use of automated driving systems, including owners, operators, passengers, and manufacturers. The guidelines also suggest that insurance policies be reviewed to determine who should purchase coverage and how liability should be assigned. Traditionally, humans have been held responsible for accidents involving cars. In the United States, some states require car owners to purchase auto insurance or bear personal liability for damages. In the UK, however, all faults are borne by the "operator" of the vehicle, and insurance companies may seek reimbursement from manufacturers if the accident was caused by a defect in the car. This raises the question of who exactly qualifies as an "operator": if a consumer orders a self-driving car through an app, are they the "operator"? The public believes that risks associated with driving will persist and that responsibility will increasingly be placed on manufacturers or developers and software programmers. Given the uncertainty enveloping the field, it is prudent to exercise caution and avoid any behaviour that is socially or legally harmful. While behaviours that do not contravene normative legal values may not be illegal per se, they should serve as reminders to their perpetrators to exercise caution and avoid violating legal interests. Autonomous systems operate under many conditions that are beyond human comprehension. In the event of an emergency, the system's response is based on a calculation carefully designed by the developer and has nothing to do with the user; human beings lack the ability and the time to intervene in such situations. Hence, AI risk-control measures must be put in place during the development stage - consider, for example, the case of a self-driving car that encounters an unexpected situation and changes direction, resulting in an accident. Unlike a human driver who might change direction to avoid an accident, the self-driving car's decision is based solely on the algorithm's output. Similarly, when a security machine injures a thief while suppressing a burglary attempt, this is not legitimate self-defence, because the machine's actions stem from an automated element rather than from the resident's will to defend, and cannot be considered a defence by the community or the developer. Accordingly, if an agency creates an embedded machine-learning system by supplying the possible rule options and the objective function, the implementation of an algorithm that maximises that objective function and immediately promulgates the resulting rule should be sustained against nondelegation objections, because it functionally serves just as a measurement tool [4]. From the standpoint of the nondelegation doctrine, the use of machine learning is not conceptually any different from the constitutional use of other machines or instruments (ibid.).


Подобные документы

  • The methodology multiple models and switching for real–time estimation of center of gravity (CG) position and rollover prevention in automotive vehicles. Algorithm to determine the vehicle parameters. The efficacy estimation switched controller scheme.

    статья [238,6 K], добавлен 28.05.2012

  • History, basic stages and directions of development of the first aircraft, its operating principle and internal structure. The study of this subject the Wright brothers, assessment of their contribution to the development of aircraft, its evolution.

    презентация [1,8 M], добавлен 05.03.2015

  • Inspected damages: visual inspection of the aircrafts which are present in the hangar: damages of a fuselage, of an engine, of a wing, of a tail unit, of a landing gear. Accident emergency landings (on ground and on water); emergency water landings.

    отчет по практике [7,2 M], добавлен 25.05.2012

  • Description and operating principles of Air-Conditioning System of Tu-154. Principal scheme of ACS. Theoretical base of algorithm developing process. Functions of the system failures. Description of obtained algorithm of malfunctions discovering.

    курсовая работа [27,7 K], добавлен 01.06.2009

  • International airports serving Moscow. A special program of creating night bus and trolleybus routes. The formation of extensive tram system to transport people. The development of the subway to transport passengers to different sides of the capital.

    презентация [4,7 M], добавлен 08.08.2015

  • The first rapid-transit system. History Metropolitan Railway. Network topologies, construction stages of London's Metropolitan Railway. Safety and security. Infrastructure 5-Line of Metro de Santiago (Chile), The Soviet Union's stations, Stockholm metro.

    презентация [1,2 M], добавлен 13.05.2014

  • Применение системы нейтрального газа (onboard inert gas generation system) на воздушное судно Boeing 767. Система питания двигателей. Доработка топливной системы путем установки системы нейтрального газа. Встроенные средства диагностики контроллера.

    дипломная работа [5,5 M], добавлен 22.04.2015

  • Thus democracy and modernism are closely intertwined, each providing a driving force. Darwinism, Freudianism, Leninism and Marxism combined to throw doubt on traditional Western mores, culture and standards of behavior. Rights Without Responsibility.

    статья [20,3 K], добавлен 25.11.2011

  • Profession in the USA. Regulation of the legal profession. Lawyers: parasites of the back of the American taxpayer. The legal profession for women: a problem of gender equality. The legal system of the USA. The principles of the USA System of justice.

    курсовая работа [35,9 K], добавлен 31.08.2008

  • Overview of civil law system. History of appearance and development of the Roman-German legal family. General characteristics of civil law legal system. Sourses of the right. Distinctive features of the system. Soubgroups in the civil law system.

    курсовая работа [36,7 K], добавлен 10.08.2011

Работы в архивах красиво оформлены согласно требованиям ВУЗов и содержат рисунки, диаграммы, формулы и т.д.
PPT, PPTX и PDF-файлы представлены только в архивах.
Рекомендуем скачать работу.