How soon will the 'killer robots' come to life? Experts say we shouldn't worry.
Artificial intelligence, or AI, seems to be everywhere. Home computers and smartphones now possess speech recognition. Social media platforms are able to tag and label faces and geographic locations. Companies are moving fast to develop vehicles with self-driving capabilities. This kind of machine intelligence has spread into most areas of daily life, and new discoveries are constantly announced as part of a global trend to automate as much of our lives as possible.
With technology rapidly evolving, governments are taking a closer look at AI as a way to further their strategic interests. One such area, national defense, is sparking debate.
In the past two years, the U.S. government has sponsored six studies on the future role of artificial intelligence in governance and national security, a Harvard report shows. Meanwhile, China and Russia have just announced they are investing in a startup that aims to perfect facial recognition in order to identify potential crimes and threats before they actually occur, according to Bloomberg. The European Council on Foreign Relations says 90 countries now have military drones and 11 have armed drones. All but three EU states have military drones.
"There will be a substantial impact on the military because the capabilities of all kinds of systems will change and there will be an introduction of automation on a lot of different functions," said Edward Felten, director of the Center for Information Technology Policy at Princeton University, who was among a group of experts who spoke this week at a security forum at the Center for Strategic and International Studies in Washington, D.C.
The ways AI could reshape countries' military capabilities have given rise to some futuristic scenarios.
"There is a discussion about an existential risk, this idea that we will be creating super intelligent machines that will enslave us," said Felten.
Although technological advancement has always caused concern, experts say current talks should not be about lethal machines, but about the thought process put into them.
"This is distracting us from conversations we'd need to have about ethics and bias and justice that are legitimate right now," said Steve Mills, director of machine intelligence at Booz Allen Hamilton, a Virginia-based management consulting company focusing on cybersecurity.
Before debating whether "killer robots" will soon become a reality, experts say more problematic are AI's short-term implications in warfare, such as its ability to distort reality. For instance, artificial intelligence can manipulate video and audio content used to make decisions. Experts say such distortions could affect and even damage countries' military alliances.
"We have had this fundamental truth for all history that if you can see it or you can hear it, it is fact. AI can take that away from us," Mills said.
Countries' security systems may benefit from AI with improved defense against cyberattacks, experts say. Yet machine learning could also be a powerful weapon.
"Machine learning can help less sophisticated cyberactors do more sophisticated cyberattacks," said Katherine Charlet, director of Technology and International Affairs Program at the Carnegie Endowment for International Peace, a foreign policy think tank with various centers around the world.
The implications are global. Although investments in military technology are expensive and largely driven by richer countries such as the U.S. and China, which have the biggest defense budgets, experts say other countries will not forgo the technology either.
"Everybody sees a real imperative to have this technology," Mills said.
In order to tackle the negative implications of AI in national defense, experts believe it's becoming more important to understand the boundaries and the responsibilities humans have when dealing with machines.
"We're very far away from the moment when we could say a machine is a legal person that could be held responsible directly," Felten said. "You will still need to have a person or an organization that is responsible for what's happening, and the challenge is to make sure that that person or organization is actually able to control and influence what might go wrong."
Achieving that, experts say, will require extensive training for the human decision-makers themselves.
"We need to focus on early childhood education, higher education, alternatives to traditional education, on workers whose jobs might be impacted by any form of automation and how they should be prepared for the next set of jobs that will become available," said Aaron Cooper, vice president of global policy at BSA | The Software Alliance, an advocate for the global software industry before governments and in the international marketplace. "If we start in 10 years from now, we're starting way too late."
In the end, AI could cause as many problems as benefits, and experts say governments should take a proactive approach rather than reacting to issues only after they occur.
"We should be excited and we should worry," Charlet said. "And it's the same with any technology."
Source: US News