Rajendra

I write columns on news related to bots, especially in the categories of artificial intelligence, bot startups, and bot funding. I am also interested in recent developments in the fields of data science, machine learning, and natural language processing.



How U of T's 'godfather' of deep learning is reimagining AI

By Rajendra | Nov 3, 2017 | 15927 Views

Geoffrey Hinton may be the "godfather" of deep learning, a suddenly hot field of artificial intelligence, or AI - but that doesn't mean he's resting on his algorithms.

Hinton, a University Professor Emeritus at the University of Toronto, recently released two new papers that promise to improve the way machines understand the world through images or video - a technology with applications ranging from self-driving cars to making medical diagnoses.

"This is a much more robust way to detect objects than what we have at present," Hinton, who is also a fellow at Google's AI research arm, said today at a tech conference in Toronto. 

"If you've been in the field for a long time like I have, you know that the neural nets that we use now - there's nothing special about them. We just sort of made them up."

Hinton's latest approach, detailed in a recent story in Wired magazine, relies on something he calls "capsule networks." Here's how it works: At present, deep learning algorithms must be trained on millions of images before they can reliably distinguish a picture of, say, a cat from something else. In part, that's because the software isn't very good at applying what it's already learned to brand new situations - for example, recognizing a cat that's being viewed from a slightly different angle. Capsule networks, by contrast, can help track the relationship between various parts of an object - in the case of a cat, one example might be the relative distance between its nose and mouth.
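The core mechanism behind capsule networks is "routing by agreement": lower-level capsules make vector predictions for higher-level capsules, and coupling weights are iteratively sharpened toward the higher-level capsules whose outputs agree with those predictions. Below is a minimal NumPy sketch of that routing loop, under stated assumptions: the array shapes, the three-iteration default, and the `squash` and `route` names are illustrative choices, not the authors' implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Capsule non-linearity: shrinks short vectors toward zero and
    # long vectors toward (but never past) unit length, so a capsule's
    # length can be read as the probability that its entity is present.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route(u_hat, iterations=3):
    # u_hat: each lower capsule's prediction for each higher capsule,
    # shape (num_lower, num_higher, dim_higher).
    num_lower, num_higher, _ = u_hat.shape
    b = np.zeros((num_lower, num_higher))  # routing logits, start uniform
    for _ in range(iterations):
        # Coupling coefficients: softmax over the higher capsules.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of predictions -> candidate higher-capsule inputs.
        s = (c[..., None] * u_hat).sum(axis=0)       # (num_higher, dim)
        v = squash(s)                                # higher-capsule outputs
        # Agreement update: predictions that align with the output
        # get a larger share of the routing next iteration.
        b = b + (u_hat * v[None]).sum(axis=-1)
    return v

rng = np.random.default_rng(0)
v = route(rng.normal(size=(8, 4, 16)))  # 8 lower capsules, 4 higher, 16-dim
print(v.shape)
```

Because the coupling weights depend on part-to-whole agreement rather than on pooled scalar activations, the pose relationships between parts (such as the nose-to-mouth example above) survive into the higher layer, which is what lets the network generalize to new viewpoints.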

Hinton talked about his research, co-authored with Sara Sabour and Nicholas Frosst, at Google's Go North conference, held at Toronto's Evergreen Brick Works. The event brought together researchers and entrepreneurs, including several from U of T. They included: Anna Goldenberg, a scientist in the genetics and genome biology program at SickKids Research Institute and an assistant professor of computer science at U of T; Inmar Givoni, a U of T alumna who is the director of machine learning at startup Kindred.ai; and Jamie Kiros, a PhD candidate in the machine learning program in U of T's department of computer science.

It also featured a fireside chat between Eric Schmidt, the chairman of Google's parent Alphabet, and Prime Minister Justin Trudeau.

Trudeau said it is important for Canada to continue to play a key role in shaping the development of AI technologies because "if we're helping drive it, we will draw benefits from it and minimize the challenges and disruptions as we bring people along."

With his new research, there's little doubt Hinton is doing his part to move the AI ball forward - even if it draws on ideas he's been contemplating for the past 40 years. 

In one of his recently published papers, Hinton's capsule networks matched the accuracy of the best previous techniques when it comes to recognizing hand-written digits, according to Wired. The second paper cut in half the previous error rate on a test that challenges software to recognize objects like toys from different angles, the magazine said.

"What we showed is early days," Hinton cautioned attendees at Go North. 

"It works quite impressively on small datasets. But until it works on large datasets, you shouldn't believe it."

Even so, other researchers are lauding Hinton's efforts.

"It's too early to tell how far this particular architecture will go," Gary Marcus, a professor of psychology at New York University, told Wired. "But it's great to see Hinton out of the rut that the field has seemed fixated on."

Source: utoronto