I write columns on news related to bots, especially in the categories of artificial intelligence, bot startups and bot funding. I am also interested in recent developments in the fields of data science, machine learning and natural language processing.
A revolution in warfare, in which killer robots, or autonomous weapons systems, are common on the battlefield, is about to begin.
Both scientists and industry are worried.
The world's top artificial intelligence (AI) and robotics companies have used a conference in Melbourne to collectively urge the United Nations to ban killer robots, or lethal autonomous weapons.
An open letter by 116 founders of robotics and artificial intelligence companies from 26 countries was launched at the world's biggest artificial intelligence conference, the International Joint Conference on Artificial Intelligence (IJCAI), as the UN delays a meeting until later this year to discuss the robot arms race.
Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, released the letter at the opening of the conference, the world's pre-eminent gathering of experts in artificial intelligence and robotics.
The letter is the first time that AI and robotics companies have taken a joint stand on the issue. Previously, only a single company, Canada's Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.
In December 2016, 123 member nations of the UN's Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal talks on autonomous weapons. Of these, 19 have already called for a ban.
"Lethal autonomous weapons threaten to become the third revolution in warfare," the letter says.
"Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
"These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora's box is opened, it will be hard to close."
Signatories of the 2017 letter include:
Elon Musk, founder of Tesla, SpaceX and OpenAI (US)
Mustafa Suleyman, founder and Head of Applied AI at Google's DeepMind (UK)
Esben Østergaard, founder & CTO of Universal Robotics (Denmark)
Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)
Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons.
The 2015 letter was signed by thousands of researchers working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky.
"Nearly every technology can be used for good and bad, and artificial intelligence is no different," says Walsh.
"It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.
"We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons," he added.
Ryan Gariepy, founder of Clearpath Robotics, says the number of prominent companies and individuals who have signed this letter reinforces the warning that this is not a hypothetical scenario but a very real and pressing concern.
"We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability," he says.
"The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale."