1 December 2013

Lukáš Hoder: Autonomous Weapon Systems and the Law (reading list)

At least since 2010, the debate over a possible preemptive ban or regulation of so-called autonomous weapon systems (AWS) has been intensifying. These are robots capable, once activated, of independently finding and attacking a predefined type of target without further human control. Partially autonomous (mostly defensive) systems are already in operation, and more are under development in the US and elsewhere. The time when these questions become pressing is drawing near.

Indeed, the US Air Force already trains more operators of unmanned aerial vehicles (drones) than traditional pilots, and it expects to rely on autonomous robots over the coming decades. There will simply be too many drones for each to be controlled by its own operator, and no single human could quickly process the flood of information needed for flight and combat anyway. Drones will be able to take off and land on their own, cooperate with one another, coordinate their movements, and search for targets. In this post I briefly introduce texts that deal with setting standards for assessing AWS and with the relevant legal questions.


Autonomous?


The US Department of Defense defines AWS as follows:

A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.

Humans will thus be able to task the robot and change its programming, but it will also be capable of operating on its own. Defensive missile systems working on a similar principle already exist, such as the American PATRIOT or the recently famed IRON DOME, which protected Israel from rockets fired from Gaza. Development is moving forward quickly: the US Air Force is testing the X-47B's ability to take off from and land on an aircraft carrier automatically, fully (!) autonomous helicopters have already flown in Afghanistan, and the British army is developing the autonomous Taranis aircraft. This is one reason lawyers' debates over what to do about all this have intensified.




The loop

Before turning to the literature itself, let me briefly introduce the concept of the "OODA Loop", developed after the Korean War by John Boyd, a former military pilot and theorist. It is essentially a theory of fighter combat that was later also applied in the commercial sphere, and contemporary US military strategists use it to describe machine autonomy (e.g. in the document United States Air Force Unmanned Aircraft Systems Flight Plan 2009-2047).

Why did American pilots in their F-18s prevail over Soviet MiG-5s in Korea? According to Boyd, because they got through the OODA process faster: Observe, Orient, Decide, Act. This theory of action can also be used to assess the degree of a machine's autonomy: how dependent is a given robot on a human within each of the four activities above? Does it merely have "automation" (it is not independent; it cannot make decisions), or is it already "autonomous"?
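Boyd's cycle can be read as a simple checklist over four stages. As a purely illustrative sketch (the stage names are Boyd's; the coarse three-level scoring scheme below is a hypothetical example, not any official classification), one can label a system by which OODA stages it performs without a human:

```python
# Illustrative sketch: scoring machine autonomy by which OODA
# stages (Boyd's terms) the machine performs without a human.
# The three labels below are a hypothetical scheme for this post.

OODA_STAGES = ["observe", "orient", "decide", "act"]

def autonomy_level(machine_stages):
    """Return a coarse label based on which OODA stages the
    machine handles without a human in the loop."""
    delegated = [s for s in OODA_STAGES if s in machine_stages]
    if "decide" in delegated and "act" in delegated:
        # The machine selects and engages targets on its own --
        # roughly the DoD's definition of an AWS quoted below.
        return "autonomous"
    if delegated:
        # The machine assists (e.g. sensing), but a human decides.
        return "automated"
    return "manual"

# A sensor-only drone: a human still orients, decides, and acts.
print(autonomy_level({"observe"}))        # automated
# A system that completes the whole loop itself.
print(autonomy_level(set(OODA_STAGES)))   # autonomous
```

The legal debate surveyed below turns precisely on whether the "decide" stage may ever be delegated to the machine.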

Should the human remain active within the "loop" (in the loop)? Or merely supervise (on the loop)? Who would then answer for the actions of autonomous robots? Can AWS, by definition, satisfy the requirements of humanitarian law at all? How should AWS be regulated?

Reading List

This reading list draws on a post on the Just Security blog and other blogs, on the debate surrounding recently published UN reports, and so on. The descriptions are edited versions of the abstracts of the articles and books.

1) The Loop

William Marra, Sonia McNeil, Understanding 'The Loop': Regulating the Next Generation of War Machines, Harvard Journal of Law and Public Policy (2013)
Today, humans are still very much "in the loop": humans decide when to launch a drone, where it should fly, etc. But as drones develop greater autonomy, humans will increasingly be out of the loop. Tomorrow's drones are expected to leap from "automation" to true "autonomy." The authors argue that language useful to the policymaking process has already been developed in the same places as drones themselves — research and engineering laboratories around the country and abroad. The authors introduce this vocabulary to explain how tomorrow's drones will differ from today's, outline the issues apt to follow, and suggest possible approaches to regulation.
Ronald C. Arkin, Governing Lethal Behavior in Autonomous Robots, CRC Press (2009)
The author examines the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system in autonomous robot systems, taking into account the Laws of War and Rules of Engagement. (Reviewed at Just Security.)
Alan Backstrom & Ian Henderson, New Capabilities in Warfare, International Review of the Red Cross (2012)
In response to the key legal issues, such as the performance of adequate collateral damage assessments and compliance with the appropriate identification standards for individual targets, the authors argue that the input of lawyers during engineering tests and evaluations can help ensure that the weapons comply with international law. For example, the capabilities of an automated or autonomous weapon may be programmed only to include acts that are squarely within the confines of international law, regardless of the ultimate capabilities of the weapon.

2) UN Reports

Philip Alston, Interim Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN General Assembly (2010)
Alston calls for the convening of an expert panel to examine developing robotic technologies. Among the issues and concerns to be addressed by the panel are the identification of uniform definitions to describe characteristics of the developing weapons, possible benefits conferred by AWS in prevention of both civilian and military casualties, potential risks of an inadequate scheme for international and criminal accountability, requirements for testing the reliability and performance of AWS, and use of force threshold concerns.
Christof Heyns, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Human Rights Council (2013)
Heyns repeats the call of his predecessor, Philip Alston, for the convening of a high-level panel of experts to address the legal and moral challenges presented by Lethal Autonomous Weapons (LAW). The most pressing issues for this panel to address include: (i) the possibility that LAWs cannot be designed to comply with the law of armed conflict; (ii) the threat they pose to the right to life under treaty and customary international law; (iii) the legal accountability vacuum for LAWs' actions; and (iv) the ethical position that "robots should not have the power of life and death over human beings." Heyns also calls for national moratoria on the testing, production, assembly, transfer, acquisition, deployment, and use of LAWs.

3) Arguments for a Preemptive Ban on AWS

Human Rights Watch and the Harvard Law School Human Rights Clinic, Losing Humanity: The Case Against Killer Robots (2012)
Fully autonomous weapons cannot meet the standards of the law of armed conflict and should therefore be banned. In particular, the report argues that such robots would not be capable of making complex and subjective decisions such as determining when an enemy fighter has gone from being a legitimate target to being hors de combat, making an attack on his or her life illegal. The report argues that the use of robots would violate the rules of proportionality, distinction, and military necessity (requiring that lethal force be used only “to the extent necessary for winning the war”). Furthermore, the report suggests that fully autonomous weapons may violate the Martens Clause, which prohibits the use of weapons that are contrary to the “dictates of public conscience.”
Noel Sharkey, The Evitability of Autonomous Robot Warfare, International Review of the Red Cross (2013)
Sharkey advocates for a total ban on the development of autonomous robotic weapons. Sharkey argues that public acceptance of the idea of autonomous weapons is due to a cultural myth of anthropomorphism, perpetuated in part by science fiction.
Peter Asaro, On Banning Autonomous Weapon Systems, International Review of the Red Cross (2012)
The article argues in favor of a theoretical foundation for a ban on autonomous weapon systems based on human rights and international humanitarian law. In particular, an implicit requirement for human judgment can be found in international humanitarian law governing armed conflict, notably the requirements of proportionality, distinction, and military necessity, as well as in human rights law and the rights to life and due process. Because he believes machines are incapable of exercising human judgment, Asaro calls for an international treaty to ban robotic weapons that are capable of initiating lethal force.
Robert Sparrow, Killer Robots, Journal of Applied Philosophy (2007)
This paper considers the ethics of the decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system, the commanding officer who ordered its use, the machine itself. I argue that in fact none of these are ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would be unethical to deploy such systems in warfare.
International Committee for Robot Arms Control, Computing experts from 37 countries call for ban on killer robots (2013)
More than 270 engineers, computing and artificial intelligence experts, roboticists, and professionals from related disciplines are calling for a ban on the development and deployment of weapon systems that make the decision to apply violent force autonomously, without any human control.

4) Arguments Against a Ban on AWS

Kenneth Anderson & Matthew C. Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can, Hoover Institution (2013)
Law professors Waxman and Anderson argue against a ban on AWS and contend that the international codes and norms governing autonomous weapons should be developed gradually and incrementally, throughout the design and development process. Waxman and Anderson argue that not only are autonomous weapons inevitable, but their potential for greater precision could make them less harmful to civilians, and, therefore, a ban would be "ethically questionable." Furthermore, given the military and national security advantages of such weapons, Waxman and Anderson doubt that an international ban would be feasible or effective.
Michael N. Schmitt & Jeffrey S. Thurnher, “Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict, Harvard National Security Journal (2013)
Professors of international law argue that autonomous weapons should not be categorically and preemptively banned. The authors argue that there are significant potential national security advantages to be gained by developing AWS. Only weapons that per se cannot be used in a humane manner or those that cause unnecessary human suffering (such as biological weapons) should be banned outright. The authors argue that there is nothing inherent to AWS that would necessarily violate the law of armed conflict and that AWS could potentially be used in conformity with the laws of war.

5) Drones

Quite apart from the texts on AWS above, today's debate mostly concerns the current "dumb" drones, i.e. the remotely piloted aircraft used, for example, by the US military in Yemen.

Ben Emmerson, Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, UN General Assembly (2013)
If used in strict compliance with the principles of international humanitarian law, remotely piloted aircraft are capable of reducing the risk of civilian casualties in armed conflict by significantly improving the situational awareness of military commanders.
Amnesty International, "Will I be Next?" US Drone Strikes in Pakistan (2013)
The report is a qualitative assessment based on detailed field research into nine of the 45 reported strikes that occurred in Pakistan's North Waziristan tribal agency between January 2012 and August 2013 and a survey of publicly available information on all reported drone strikes in Pakistan over the same period.
Human Rights Watch, "Between a Drone and Al-Qaida": The Civilian Cost of US Targeted Killings in Yemen (2013)
The 97-page report examines six US targeted killings in Yemen, one from 2009 and the rest from 2012-2013. Two of the attacks killed civilians indiscriminately in clear violation of the laws of war; the others may have targeted people who were not legitimate military objectives or caused disproportionate civilian deaths.
Petra Nováčková, Vyšly nové zprávy OSN a nevládních organizací o útocích bezpilotních letounů [New UN and NGO reports on drone strikes published], Bulletin Centra pro lidská práva a demokratizaci (2013)


3 comments:

  1. Martin Bílý, 3/12/13 22:52

    A weapon system that chooses which target it strikes seems much better to me than a weapon system that does not choose which target it strikes.

  2. Anonymous, 4/12/13 10:52

    Hello, just for factual accuracy — no F-18 or MiG-5 aircraft flew in the Korean conflict. Presumably the F-86 and MiG-15 were meant.

  3. Thanks, you are probably right. I was drawing on the article "Understanding 'The Loop': Regulating the Next Generation of War Machines", which mentions the F-18 and MiG-5.
