Elon Musk Claims We Only Have a 10 Percent Chance of Making AI Safe

By Rajendra | Nov 23, 2017

Outlook Not So Good

Elon Musk has put a lot of thought into the harsh realities and wild possibilities of artificial intelligence (AI). These considerations have left him convinced that we need to merge with machines if we're to survive, and he's even created a startup dedicated to developing the brain-computer interface (BCI) technology needed to make that happen. But despite the fact that his very own lab, OpenAI, has created an AI capable of teaching itself, Musk recently said that efforts to make AI safe only have "a five to 10 percent chance of success."

Musk shared these less-than-stellar odds with the staff at Neuralink, the aforementioned BCI startup, according to a recent Rolling Stone article. Despite Musk's heavy involvement in the advancement of AI, he has openly acknowledged that the technology brings with it not only the potential for serious problems, but the promise of them.

The challenge of making AI safe is twofold.

First, a major goal of AI research - and one that OpenAI is already pursuing - is building AI that's not only smarter than humans but also capable of learning independently, without any human programming or interference. Where that ability could lead is unknown.

Second, machines have no morals, remorse, or emotions. Future AI might be capable of distinguishing between "good" and "bad" actions, but distinctly human feelings remain just that - human.

In the Rolling Stone article, Musk further elaborated on the dangers and problems that currently exist with AI, one of which is the potential for just a few companies to essentially control the AI sector. He cited Google's DeepMind as a prime example.

"Between Facebook, Google, and Amazon - and arguably Apple, but they seem to care about privacy - they have more information about you than you can remember," said Musk. "There's a lot of risk in concentration of power. So if AGI [artificial general intelligence] represents an extreme level of power, should that be controlled by a few people at Google with no oversight?"

Worth the Risk?

Experts are divided on Musk's assertion that we probably can't make AI safe. Facebook founder Mark Zuckerberg has said he's optimistic about humanity's future with AI, calling Musk's warnings "pretty irresponsible." Stephen Hawking, meanwhile, has publicly stated his belief that AI systems pose enough of a risk to humanity that they may replace us altogether.

Sergey Nikolenko, a Russian computer scientist who specializes in machine learning and network algorithms, recently shared his thoughts on the matter with Futurism. "I feel that we are still lacking the necessary basic understanding and methodology to achieve serious results on strong AI, the AI alignment problem, and other related problems," said Nikolenko.

As for today's AI, he thinks we have nothing to worry about. "I can bet any money that modern neural networks will not suddenly wake up and decide to overthrow their human overlord," said Nikolenko.

Musk himself might agree with that, but his sentiments are likely more focused on how future AI may build on what we have today.

Already, we have AI systems capable of creating AI systems, ones that can communicate in their own languages, and ones that are naturally curious. While the singularity and a robot uprising are strictly science fiction tropes today, such AI progress makes them seem like genuine possibilities for the world of tomorrow.

But these fears aren't necessarily enough reason to stop moving forward. We also have AIs that can diagnose cancer, identify suicidal behavior, and help stop sex trafficking.

The technology has the potential to save and improve lives globally, so while we must consider ways to make AI safe through future regulation, Musk's words of warning are, ultimately, just one man's opinion.

He even said as much himself to Rolling Stone: "I don't have all the answers. Let me be really clear about that. I'm trying to figure out the set of actions I can take that are more likely to result in a good future. If you have suggestions in that regard, please tell me what they are."

Source: Futurism