Lethal autonomous weapon

[5] Scholars such as Peter Asaro and Mark Gubrud hold that any weapon system capable of releasing lethal force without the operation, decision, or confirmation of a human supervisor can be deemed autonomous.

Anti-personnel mines are banned in many countries under the 1997 Ottawa Treaty, though the United States, Russia, and much of Asia and the Middle East are not parties to it.

Such systems can autonomously identify and attack oncoming missiles, rockets, artillery fire, aircraft and surface vessels according to criteria set by the human operator.

According to The Economist, as technology advances, future applications of unmanned undersea vehicles might include mine clearance, mine-laying, anti-submarine sensor networking in contested waters, patrolling with active sonar, resupplying manned submarines, and becoming low-cost missile platforms.

[9] In 2018, the U.S. Nuclear Posture Review alleged that Russia was developing a "new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo" named "Status 6".

[13][14][15][16] Israeli Minister Ayoob Kara stated in 2017 that Israel was developing military robots, including some as small as flies.

[17] In October 2018, Zeng Yi, a senior executive at the Chinese defense firm Norinco, gave a speech in which he said that "In future battlegrounds, there will be no people fighting", and that the use of lethal autonomous weapons in warfare is "inevitable".

[18] In 2019, US Defense Secretary Mark Esper criticized China for selling drones capable of taking life with no human oversight.

[27] Current US policy states: "Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

US Deputy Secretary of Defense Robert Work said in 2016 that the Defense Department would "not delegate lethal authority to a machine to make a decision", but might need to reconsider this position since "authoritarian regimes" may do so.

[35] Stuart Russell, a professor of computer science at the University of California, Berkeley, has said that his concern with LAWs is that he regards them as unethical and inhumane.

[45] Potential IHL violations by LAWs arise, by definition, only in conflict settings that require distinguishing between combatants and civilians.

[49] Some websites protest the development of LAWs by presenting the undesirable ramifications that could follow if research into applying artificial intelligence to the designation of weapons continues.

In November 2018, Archbishop Ivan Jurkovic, the permanent observer of the Holy See to the United Nations, stated that “In order to prevent an arms race and the increase of inequalities and instability, it is an imperative duty to act promptly: now is the time to prevent LAWs from becoming the reality of tomorrow’s warfare.” The Church worries that these weapons systems could irreversibly alter the nature of warfare, create detachment from human agency, and call into question the humanity of societies.

[57] Military AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal ('Track II') diplomacy by communities of experts, together with a legal and political verification process.

[58][59][60][61] In 2021, the United States Department of Defense requested a dialogue with the Chinese People's Liberation Army on AI-enabled autonomous weapons but was refused.

Serbian Land Rover Defender towing trailer with "Miloš" tracked combat robot
An STM Kargu drone
Rally on the steps of San Francisco City Hall, protesting against a vote to authorize police use of deadly force robots.