Artificial Intelligence-led Lethal Autonomous Weapon Systems and Terrorism: Risk Assessment and Solutions for Pakistan
- Artificial Intelligence,
- Lethal Autonomous Weapon Systems,
- Drones,
- Terrorism,
- National Security
Abstract
Artificial Intelligence (AI) is driving a paradigm shift in the contemporary world and has emerged as a transformative force with significant military applications. As the integration of AI into military systems expands, concerns about the risks posed by autonomous weapons have intensified, raising apprehensions about national security in the face of potential terrorist threats. This article examines the evolution of AI in the military context, focusing on Lethal Autonomous Weapon Systems (LAWS). LAWS, also known as robotic weapons or "killer robots," are autonomous weapons capable of independently navigating to and engaging selected targets. The lack of coherence within the international strategic community in defining autonomous systems has led to legislative complications in controlling and regulating these weapons. Moreover, the potential for non-state actors to exploit AI-driven weapon technologies, such as autonomous drones and unmanned ground vehicles, poses profound challenges for Pakistan's national security. In response to these challenges, this article explores possible solutions, addressing the ethical, legal, and strategic dimensions of managing AI-led LAWS to ensure responsible use and to prevent unauthorized access to these weapons by non-state actors.