At least since 2010, debate has been intensifying over the possible preemptive ban or regulation of so-called autonomous weapon systems (AWS). These are robots capable, once activated, of independently finding and attacking a predefined type of target without further human control. Partially autonomous (primarily defensive) systems are already in operation and others are under development, in the United States and elsewhere. The time when these questions become pressing is drawing near.
Indeed, the US Air Force already trains more operators of unmanned aerial vehicles (drones) than traditional pilots, and it expects to rely on autonomous robots within the coming decades. After all, there will be so many drones that it will not be possible for each one to be piloted individually by a single human, who in any case would be unable to process quickly the mass of information required for flight and combat. Drones will be able to take off and land on their own, cooperate with one another, coordinate their movements, and search for targets. In this post I briefly introduce texts that address the setting of standards for assessing AWS, as well as the relevant legal questions.
A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.
Today, humans are still very much “in the loop”: humans decide when to launch a drone, where it should fly, etc. But as drones develop greater autonomy, humans will increasingly be out of the loop. Tomorrow’s drones are expected to leap from “automation” to true “autonomy.” Authors argue that language useful to the policymaking process has already been developed in the same places as drones themselves — research and engineering laboratories around the country and abroad. Authors introduce this vocabulary in the paper to explain how tomorrow’s drones will differ from today’s, outline the issues apt to follow, and suggest possible approaches to regulation.
The author examines the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system in autonomous robot systems, taking into account the Laws of War and Rules of Engagement. (Reviewed on Just Security.)
In response to the key legal issues, such as the performance of adequate collateral damage assessments and compliance with the appropriate identification standards for individual targets, the authors argue that the input of lawyers during engineering tests and evaluations can help ensure that the weapons comply with international law. For example, the capabilities of an automated or autonomous weapon may be programmed only to include acts that are squarely within the confines of international law, regardless of the ultimate capabilities of the weapon.
Alston calls for the convening of an expert panel to examine developing robotic technologies. Among the issues and concerns to be addressed by the panel are the identification of uniform definitions to describe characteristics of the developing weapons, possible benefits conferred by AWS in prevention of both civilian and military casualties, potential risks of an inadequate scheme for international and criminal accountability, requirements for testing the reliability and performance of AWS, and use of force threshold concerns.
Heyns repeats the call of his predecessor, Philip Alston, for the convening of a high-level panel of experts to address the legal and moral challenges presented by Lethal Autonomous Weapons (LAWs). The most pressing issues for this panel to address include: (i) the possibility that LAWs cannot be designed to comply with the law of armed conflict; (ii) the threat they pose to the right to life under treaty and customary international law; (iii) the legal accountability vacuum for LAWs’ actions; and (iv) the ethical position that “robots should not have the power of life and death over human beings.” Heyns also calls for national moratoria on the testing, production, assembly, transfer, acquisition, deployment, and use of LAWs.
Fully autonomous weapons cannot meet the standards of the law of armed conflict and should therefore be banned. In particular, the report argues that such robots would not be capable of making complex and subjective decisions such as determining when an enemy fighter has gone from being a legitimate target to being hors de combat, making an attack on his or her life illegal. The report argues that the use of robots would violate the rules of proportionality, distinction, and military necessity (requiring that lethal force be used only “to the extent necessary for winning the war”). Furthermore, the report suggests that fully autonomous weapons may violate the Martens Clause, which prohibits the use of weapons that are contrary to the “dictates of public conscience.”
Sharkey advocates for a total ban on the development of autonomous robotic weapons. Sharkey argues that public acceptance of the idea of autonomous weapons is due to a cultural myth of anthropomorphism, perpetuated in part by science fiction.
The article argues for a theoretical foundation for a ban on autonomous weapon systems grounded in human rights and international humanitarian law. In particular, an implicit requirement for human judgment can be found in international humanitarian law governing armed conflict, notably the requirements of proportionality, distinction, and military necessity, as well as in human rights law and the rights to life and due process. Because he believes machines are incapable of exercising human judgment, Asaro calls for an international treaty to ban robotic weapons that are capable of initiating lethal force.
This paper considers the ethics of the decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system, the commanding officer who ordered its use, the machine itself. I argue that in fact none of these are ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would therefore be unethical to deploy such systems in warfare.
More than 270 engineers, computing and artificial intelligence experts, roboticists, and professionals from related disciplines are calling for a ban on the development and deployment of weapon systems that make the decision to apply violent force autonomously, without any human control.
Law professors Waxman and Anderson argue against a ban on AWS and contend that the international codes and norms governing autonomous weapons should be developed gradually and incrementally, throughout the design and development process. Waxman and Anderson argue that autonomous weapons are not only inevitable but that their potential for greater precision could also make them less harmful to civilians, so that a ban would be “ethically questionable.” Furthermore, given the military and national security advantages of such weapons, Waxman and Anderson doubt that an international ban would be feasible or effective.
Professors of international law argue that autonomous weapons should not be categorically and preemptively banned. The authors argue that there are significant potential national security advantages to be gained by developing AWS. Only weapons that per se cannot be used in a humane manner or those that cause unnecessary human suffering (such as biological weapons) should be banned outright. The authors argue that there is nothing inherent to AWS that would necessarily violate the law of armed conflict and that AWS could potentially be used in conformity with the laws of war.
Regardless of the texts on AWS cited above, today’s debate mostly concerns the current “dumb” drones, i.e. unmanned aircraft used, for example, by the US military in Yemen.
If used in strict compliance with the principles of international humanitarian law, remotely piloted aircraft are capable of reducing the risk of civilian casualties in armed conflict by significantly improving the situational awareness of military commanders.
The report is a qualitative assessment based on detailed field research into nine of the 45 reported strikes that occurred in Pakistan's North Waziristan tribal agency between January 2012 and August 2013 and a survey of publicly available information on all reported drone strikes in Pakistan over the same period.
The 97-page report examines six US targeted killings in Yemen, one from 2009 and the rest from 2012-2013. Two of the attacks killed civilians indiscriminately in clear violation of the laws of war; the others may have targeted people who were not legitimate military objectives or caused disproportionate civilian deaths.