There has been much speculation about the power and dangers of artificial intelligence (AI), but the discussion has focused primarily on what AI will do to our jobs in the very near future. Now, tech leaders, governments and journalists are debating how artificial intelligence is making lethal autonomous weapons systems possible, and what could happen if this technology falls into the hands of a rogue state or terrorist organization. Debates over the moral and legal implications of autonomous weapons have begun, and there are no easy answers.
Autonomous weapons already developed
The United Nations recently discussed the use of autonomous weapons and the possibility of instituting an international ban on "killer robots." This debate comes on the heels of a warning from more than 100 leaders in the artificial intelligence community, including Tesla's Elon Musk and Alphabet's Mustafa Suleyman, that these weapons could lead to a "third revolution in warfare."
Although artificial intelligence has enabled improvements and efficiencies in many sectors of our economy, from entertainment to transportation to healthcare, weaponized machines capable of functioning without human intervention raise a host of difficult questions.
A number of weapons systems with varying levels of human involvement are already being actively tested today.