"Lethal autonomous weapons threaten to become the third revolution in warfare," warned the letter. "Once developed, they will permit armed conflict to be fought at a scale greater than ever, and faster than humans can comprehend."
It's the second open letter to the UN. The first was signed by more than 1000 leading scientists and tech leaders, including Stephen Hawking, Elon Musk and Steve Wozniak.
This is one of the great ethical debates of our time, and the industry that builds the robots is clearly against using them this way. When the builders themselves object, that is a troubling sign.
These are the companies working on AI for our homes and cars. They're at the cutting edge of this technology and they do not want to see a gun in a robot's hand.
Tesla CEO Musk, arguably the face of tech, is a fierce critic of unchecked AI development. He firmly believes that an AI arms race between America, China and Russia could spark World War III.
Terrorists could also subvert AI weaponry by buying it on the black market or by hacking it. Even so, Musk argues that future conflicts could be handled by the machines themselves, which would eliminate human casualties.
"When one party's drones are destroyed by the drones of another, then they have no choice but to surrender," he said.
It sounds like a rather fanciful and sanitized version of war.
America might have the right solution for AI
America, however, has a different view on AI.
The US Military has a clear directive that autonomous weapons cannot take lives without the express authorization of a human operator.
Essentially, the US Military shares the tech industry's reservations and is determined to keep control of any AI systems.
General Paul J. Selva, the Vice Chairman of the Joint Chiefs of Staff, claimed that the USA was a decade away from being able to produce a completely independent Terminator-style robot. He swiftly added, though, that the US has no intention of building it.
"We must keep the ethical rules of war in place, lest we unleash on humanity a set of robots that we don't know how to control," said Selva. "I don't think it's reasonable for us to put robots in charge of whether or not we take a human life."
Even this clear rule gives the US a great deal of leeway, though, as a human operator could conceivably authorize a large amount of "deadly force" with each authorized strike.
What is AI capable of?
AI can power military jets and ships, make split-second decisions on a pilot's behalf and control an army of armed drones. The potential is almost limitless, and in the end AI could command vast waves of robot soldiers at once.
It could make the most efficient choice every time, directing an army of drones, armed vehicles and futuristic robotic soldiers that would make The Terminator look like old technology.
AI has the power to predict, strategize and control a war, from anywhere in the world.
It is mind-blowing tech, but we just do not know if we can control it.
Funding issues mean the UN simply hasn't made any meaningful progress, though, and legislators face being left behind by innovators that have already stolen a march.
The UN's response to this second open letter is important. America has already started to invest substantial sums, and Russia and China are both in on an arms race that could blow up in our faces at any time.
A ban needs to come soon, or it will simply be too late to turn back the tide of AI weapons that could change the face of modern warfare.
We need to have this debate
Military hardware powered by AI could save lives in the end. As with the prospect of nuclear war, the mere presence of AI weaponry could be enough to dissuade countries from going to war.
But we don't fully understand AI right now and the arms race between nations is pushing the limits of this technology.
That means it has the potential to go wrong. So, we need to know we can switch off the machines if they do turn on us and we need to know that we are the masters of this technology.
Kalashnikov's big launch will be the first of many AI-powered weapons prototypes, and if we want to put the brakes on this new kind of modern warfare then we must do it now.
The fate of the human race could rest on these decisions and it's a big concern that the industry isn't waiting for any UN directive.
So, it's time for the UN to step up, to engage with the tech world and to see why the people that build AI don't think it should be trusted with a loaded gun.
We're about to open Pandora's Box, but there's still time to find out how to close it.