Bill Gates Says We Shouldn't Panic About Artificial Intelligence

By Nand Kishor | Sep 27, 2017 | 10176 Views

In a recent interview, Microsoft co-founder and billionaire philanthropist Bill Gates told The Wall Street Journal he disagrees with Elon Musk's assertions that artificial intelligence is a significant threat to humanity.


EVERYONE HAS AN OPINION
Artificial intelligence (AI) is one of today's hottest topics. In fact, it's so hot that many of the tech industry's heavyweights - Apple, Google, Amazon, Microsoft, etc. - have been investing huge sums of money to improve their machine-learning technologies.

A debate rages alongside all this AI development, and in one corner is SpaceX CEO and OpenAI co-chairman Elon Musk, who has issued repeated warnings about AI as a potential threat to humankind's existence.

Speaking to a group of U.S. governors a couple of months back, Musk again warned about the dangers of unregulated AI. This was criticized by those on the other side of the debate as "fear-mongering," and Facebook founder and CEO Mark Zuckerberg explicitly called Musk out for it.

Now, Microsoft co-founder and billionaire philanthropist Bill Gates is sharing his opinion on Musk's assertions.

In a rare joint interview with Microsoft's current CEO Satya Nadella, Gates told WSJ. Magazine that the subject of AI is "a case where Elon and I disagree." According to Gates, "The so-called control problem that Elon is worried about isn't something that people should feel is imminent. We shouldn't panic about it."

FEAR OF AI?
While the perks of AI are rather obvious - optimized processes, autonomous vehicles, and generally smarter machines - Musk is simply pointing out the other side of the coin. With some nations intent on developing autonomous weapons systems, irresponsible AI development has an undeniable potential for destruction. Musk's strong language may make him sound like he's overreacting, but is he?

As he's always been sure to point out, Musk isn't against AI. All he's advocating is informed policy-making to ensure that these potential dangers don't get in the way of the benefits AI can deliver.

In that, Musk isn't alone. Not all experts think his warnings are far-fetched, and several have joined him in signing an open letter to the United Nations calling for clear policies to govern AI. Even before that, other groups of AI experts had called for the same.



Judging by what Nadella told WSJ. Magazine, much of this conflict may actually be mostly imagined. "The core AI principle that guides us at this stage is: How do we bet on humans and enhance their capability? There are still a lot of design decisions that get made, even in a self-learning system, that humans can be accountable for," he said.

"There's a lot I think we can do to shape our own future instead of thinking, ??This is just going to happen to us'," Nadella added. "Control is a choice. We should try to keep that control."

In the end, it's not so much AI itself that we should watch out for. It's how human beings use it. The enemy here is not technology. It's recklessness.

Source: Futurism