Lethal Autonomous Weapons Systems (LAWS): Security, Moral and Humanitarian Implications
Abstract
Lethal Autonomous Weapons Systems (LAWS) have been called “the third revolution in warfare, after gunpowder and nuclear arms”. An autonomous weapon is a system that has the “ability to select and engage targets without human intervention” and is meant to make decisions by itself, without human agency. Human Rights Watch (HRW) has dubbed this phenomenon “human-out-of-the-loop”. It is not to be confused with either drones or automated defense systems. Drones are remotely piloted by a person, who retains the ultimate decision to select specific targets and strike them, whereas automated defense systems are human-supervised and operate within structured, predefined roles built into their programming. As the British Ministry of Defence notes in its reports, “We are not talking about cruise missiles or remotely piloted drones, but about, for example, flying robots that search for human beings in a city and eliminate those who appear to meet certain criteria.” The International Committee of the Red Cross (ICRC), at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts, defined it as follows: “An autonomous weapon system is one that has autonomy in its critical functions, meaning a weapon that can select (i.e. search for, detect, identify, track) and attack (i.e. intercept, use force against, neutralize, damage or destroy) targets without human intervention”.