Nand Kishor, Contributor

Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he ideated and relaunched a Real Estate Business Intelligence Tool, building one of the leading business intelligence tools for property price analysis in 2012. He also writes, researches, and shares knowledge about Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data, the Python language, and related topics.

The Limits of Artificial Intelligence and Deep Learning

By Nand Kishor | Feb 5, 2018

Sundar Pichai, the chief executive of Google, has said that AI "is more profound than... electricity or fire." Andrew Ng, who founded Google Brain and now invests in AI startups, wrote that "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."

Their enthusiasm is pardonable. There have been remarkable advances in AI, after decades of frustration. Today we can tell a voice-activated personal assistant like Alexa to "Play the band Television," or count on Facebook to tag our photographs; Google Translate is often almost as accurate as a human translator. Over the last half decade, billions of dollars in research funding and venture capital have flowed towards AI; it is the hottest course in computer science programs at MIT and Stanford. In Silicon Valley, newly minted AI specialists command half a million dollars in salary and stock.

But there are many things that people can do quickly that smart machines cannot. Natural language is beyond deep learning; new situations baffle artificial intelligences, like cows brought up short at a cattle grid. None of these shortcomings is likely to be solved soon. Once you've seen it, you can't un-see it: deep learning, now the dominant technique in artificial intelligence, will not lead to an AI that abstractly reasons and generalizes about the world. By itself, it is unlikely to automate ordinary human activities.

To see why modern AI is good at a few things but bad at everything else, it helps to understand how deep learning works. Deep learning is math: a statistical method where computers learn to classify patterns using neural networks. Such networks possess inputs and outputs, a little like the neurons in our own brains; they are said to be "deep" when they possess multiple hidden layers that contain many nodes, with a blooming multitude of connections. Deep learning employs an algorithm called backpropagation, or backprop, that adjusts the mathematical weights between nodes, so that an input leads to the right output. In speech recognition, the phonemes c-a-t should spell the word "cat;" in image recognition, a photograph of a cat must not be labeled "a dog;" in translation, qui canem et faelem ut deos colunt should spit out "who worship dogs and cats as gods." Deep learning is "supervised" when neural nets are trained to recognize phonemes, photographs, or the relation of Latin to English using millions or billions of prior, laboriously labeled examples.
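The forward pass, the error signal, and the weight adjustments of backpropagation can be sketched in a few lines. Here is a minimal, illustrative example in plain NumPy: a one-hidden-layer network trained on the toy XOR task (the task, network size, and learning rate are my choices for illustration, not anything from the article).

```python
# Minimal sketch of supervised learning with backpropagation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: the XOR function, a classic task that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights between the layers' nodes; "deep" nets simply stack more hidden layers.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    # Forward pass: inputs flow through the hidden layer to the output.
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

loss_before = float(np.mean((forward(X)[1] - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # Backward pass (backprop): propagate the error gradient through the layers
    # and nudge each weight so the input leads closer to the right output.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

loss_after = float(np.mean((forward(X)[1] - y) ** 2))
print(f"loss before training: {loss_before:.3f}, after: {loss_after:.3f}")
```

The loop does nothing but adjust mathematical weights until the training examples are classified correctly, which is exactly why such a system knows only the patterns it has been shown.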

Deep learning's advances are the product of pattern recognition: neural networks memorize classes of things and more-or-less reliably know when they encounter them again. But almost all the interesting problems in cognition aren't classification problems at all. "People naively believe that if you take deep learning and scale it 100 times more layers, and add 1000 times more data, a neural net will be able to do anything a human being can do," says François Chollet, a researcher at Google. "But that's just not true."

Gary Marcus, a professor of cognitive psychology at NYU and briefly director of Uber's AI lab, recently published a remarkable trilogy of essays, offering a critical appraisal of deep learning. Marcus believes that deep learning is not "a universal solvent, but one tool among many." And without new approaches, Marcus worries that AI is rushing toward a wall, beyond which lie all the problems that pattern recognition cannot solve. His views are quietly shared with varying degrees of intensity by most leaders in the field, with the exceptions of Yann LeCun, the director of AI research at Facebook, who curtly dismissed the argument as "all wrong," and Geoffrey Hinton, a professor emeritus at the University of Toronto and the grandfather of backpropagation, who sees "no evidence" of a looming obstacle.

According to skeptics like Marcus, deep learning is greedy, brittle, opaque, and shallow. The systems are greedy because they demand huge sets of training data. They are brittle because when a neural net is given a "transfer test" (confronted with scenarios that differ from the examples used in training) it cannot contextualize the situation and frequently breaks. They are opaque because, unlike traditional programs with their formal, debuggable code, the parameters of neural networks can only be interpreted in terms of their weights within a mathematical geography. Consequently, they are black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases. Finally, they are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology.
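The brittleness under a "transfer test" is easy to reproduce with any learned pattern classifier. The sketch below uses a nearest-centroid classifier rather than a neural net (my simplification, purely to keep the demonstration short); the failure mode is the same: a model that memorizes where training examples lie falls apart when the same concept shows up somewhere new.

```python
# Illustrative "transfer test": a learned classifier meets shifted data.
import numpy as np

rng = np.random.default_rng(1)

# Training data: two tight clusters, class 0 near (0, 0), class 1 near (4, 4).
X0 = rng.normal([0, 0], 0.5, (50, 2))
X1 = rng.normal([4, 4], 0.5, (50, 2))
centroids = np.stack([X0.mean(axis=0), X1.mean(axis=0)])
labels = np.array([0] * 50 + [1] * 50)

def predict(X):
    # Classify each point by its nearest learned class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def accuracy(X, y):
    return float((predict(X) == y).mean())

# In-distribution test: fresh samples from the same clusters. Works fine.
T = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([4, 4], 0.5, (50, 2))])
acc_in = accuracy(T, labels)

# "Transfer test": the same concept, but every point shifted by (10, 0).
# The model cannot contextualize the shift and breaks.
acc_shift = accuracy(T + np.array([10.0, 0.0]), labels)

print(f"in-distribution accuracy: {acc_in:.2f}")
print(f"shifted-data accuracy:    {acc_shift:.2f}")
```

On the shifted data the classifier drops to roughly chance, even though a human would recognize the two clusters instantly.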

These limitations mean that a lot of automation will prove more elusive than AI hyperbolists imagine. "A self-driving car can drive millions of miles, but it will eventually encounter something new for which it has no experience," explains Pedro Domingos, the author of The Master Algorithm and a professor of computer science at the University of Washington. "Or consider robot control: A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch." In January, Facebook abandoned M, a text-based virtual assistant that used humans to supplement and train a deep learning system, but never offered useful suggestions or employed language naturally.

What's wrong? "It must be that we have a better learning algorithm in our heads than anything we've come up with for machines," Domingos says. We need to invent better methods of machine learning, skeptics aver. The remedy for artificial intelligence, according to Marcus, is syncretism: combining deep learning with unsupervised learning techniques that don't depend so much on labeled training data, as well as the old-fashioned description of the world with logical rules that dominated AI before the rise of deep learning. Marcus claims that our best model for intelligence is ourselves, and humans think in many different ways. His young children could learn general rules about language without many examples, but they were also born with innate capacities. "We are born knowing there are causal relationships in the world, that wholes can be made of parts, and that the world consists of places and objects that persist in space and time," he says. "No machine ever learned any of that stuff using backprop."
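A toy version of the syncretism Marcus describes: pair a statistical component that only knows the pairs it was trained on with a hand-written logical rule (transitivity) that lets the system infer facts it has never seen. Everything here (the relation, the data, the function names) is my own illustration of the idea, not Marcus's code.

```python
# Hybrid sketch: learned facts + an innate logical rule (illustrative only).

# "Learned" component: pairs (x, y) meaning "x is bigger than y",
# standing in for patterns memorized from training data.
facts = {("elephant", "dog"), ("dog", "mouse")}

def bigger_statistical(x, y):
    # Pure pattern matching: only answers for pairs seen in training.
    return (x, y) in facts

def bigger_hybrid(x, y):
    # Symbolic component: an innate rule that "bigger than" is transitive,
    # applied on top of the learned facts (assumes the facts are acyclic).
    if bigger_statistical(x, y):
        return True
    return any(bigger_statistical(x, z) and bigger_hybrid(z, y)
               for z in {a for a, _ in facts})

print(bigger_statistical("elephant", "mouse"))  # False: pair never seen in training
print(bigger_hybrid("elephant", "mouse"))       # True: inferred via the rule
```

The statistical half alone fails on the unseen pair; one line of old-fashioned logic fills the gap, which is the shape of the hybrid systems Marcus is arguing for.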

Other researchers have different ideas. "We've used the same basic paradigms [for machine learning] since the 1950s," says Pedro Domingos, "and at the end of the day, we're going to need some new ideas." Chollet looks for inspiration in program synthesis, programs that automatically create other programs. Hinton's current research explores an idea he calls "capsules," which preserves backpropagation, the algorithm for deep learning, but addresses some of its limitations.

"There are a lot of core questions in AI that are completely unsolved," says Chollet, "and even largely unasked." We must answer these questions because there are tasks that a lot of humans don't want to do, such as cleaning toilets and classifying pornography, or which intelligent machines would do better, such as discovering drugs to treat diseases. More: there are things that we can't do at all, most of which we cannot yet imagine.

Source: Wired