Computer scientist Jeff Dean is an icon at Google. The senior fellow co-designed and co-implemented five generations of Google's crawling, indexing and query retrieval systems.
He also co-designed Google's first advertising serving system, the initial version of AdSense, and ad selection algorithms based on the content of pages.
Six years ago he co-founded the Google Brain project, which for years has been at the cutting edge of developments in artificial intelligence.
Fast forward to 2017, and his AI work is centre stage. These days AI is part of everything Google does, with chief executive Sundar Pichai having declared Google an "AI-first company".
Dean was the key attraction at the company's AI conference held in Tokyo last week. He says AI and machine learning, where computers "learn" without being explicitly programmed, are being used to improve seven of Google's products, each with more than a billion users.
Studies in artificial intelligence have been part of computer science for decades, but mostly beyond the reach of technology to implement effectively. The recent ability to collect huge amounts of data with sensors, collate that data by moving it across fast-fibre networks, process it with fast computers and store it in massive databases has changed that.
AI systems become knowledgeable through machine learning. If you want an AI system to understand the difference between a dog and a cat, you feed it examples of each so it can learn the difference by analysing the visual patterns.
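The idea of learning from labelled examples can be sketched in a few lines. The toy "features" and the nearest-centroid rule below are illustrative inventions, not how Google's image systems actually work:

```python
# Toy illustration of learning from labelled examples: a nearest-centroid
# classifier over made-up numeric "image features" (hypothetical data).

def centroid(points):
    """Average each feature across the example vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, centroids):
    """Assign x to the label whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Labelled training examples: [ear_pointiness, snout_length]
cats = [[0.9, 0.2], [0.8, 0.3], [0.95, 0.25]]
dogs = [[0.4, 0.8], [0.3, 0.9], [0.5, 0.7]]

centroids = {"cat": centroid(cats), "dog": centroid(dogs)}
print(classify([0.85, 0.3], centroids))  # a cat-like example -> "cat"
```

Real systems learn far richer patterns from millions of images, but the principle is the same: the model's notion of "cat" versus "dog" comes entirely from the labelled examples it is fed.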
The common computer models used to make these predictions are known as neural networks. Dean says Google began its research into neural networks in 2012. It released its open-source software library for machine intelligence, called TensorFlow, in 2015.
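A full TensorFlow example is beyond a short snippet, but the core mechanism of a neural network — weighted sums whose weights are nudged toward correct answers during training — can be sketched with a single artificial neuron. The data and learning rate here are illustrative, not from the article:

```python
# Minimal sketch of a single artificial neuron trained with the
# perceptron rule: nudge the weights whenever the output is wrong.

def train_neuron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out  # -1, 0, or 1
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn the logical OR function from four labelled examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_neuron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 1, 1, 1]
```

Libraries such as TensorFlow apply the same train-and-adjust loop to networks with millions of such units, using gradients rather than this simple rule.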
AI is now part of many Google systems. You can search for people and objects in all types of settings in Google Photos. The improved capability of Google Translate when translating between 97 language pairs is due to machine learning.
Computer vision and machine learning are used to convert satellite imagery into viable maps with Google Maps. Google is piloting a program in 25 US cities where it makes predictions about parking at a journey's destination in Maps. AI can now read emails, and make predictions about the responses you might make, and offer them as alternatives.
Google's Lens app lets you point your camera at a street sign in, say, Japanese, and read it in English.
Dean says machine learning has been used to automatically create captions for more than 1 billion YouTube videos in 10 languages. All of this recently rolled-out AI capability adds a new layer of sophistication to Google software and hardware.
But he says there are still challenges to overcome, including a shortage of people with machine learning expertise. Google has undertaken a massive internal retraining of its engineers to help address it.
"We've gone from less than a thousand people at Google who were trained in machine learning to more than 18,000 people today," he says.
Still, there are too few people with machine learning skills in the wider community. He says Google will put its internal machine learning course online early next year.
In his interview with The Australian, Dean detailed Google's effort to build artificial intelligence systems that create and test machine learning models themselves, with little human expertise needed. These "autoML" systems herald the age of machines creating their own learning models for new challenges.
"In a normal machine learning workflow, you'd have some dataset that you had prepared for training a model. I'm going to design a neural network architecture. That's what a human machine learning expert would do," he says.
With autoML, a computer would create and test hundreds of machine learning models itself.
"We've been working on this from a research perspective for a year and a half and we are now to the point where we are using it internally for real machine learning tasks that we care about," he told The Australian.
The drawback is the time factor. An autoML system might train and test, say, 20,000 models, with each model itself requiring many hours of training.
"You do this in groups of 10 or 50 so you get feedback about which of those 50 work the best and steer the next 10 or 50 towards ones that seem more promising."
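The batched search Dean describes can be sketched as a loop that evaluates candidates in groups and steers the next group toward the best one seen so far. This is an illustrative toy, not Google's autoML: the `score` function is a stand-in for actually training a model, and the two-number "architecture" is a made-up simplification:

```python
# Illustrative sketch of batched architecture search: score a batch of
# candidates, keep the best, and mutate it to propose the next batch.
import random

random.seed(0)

def score(arch):
    """Hypothetical stand-in for 'train this architecture, measure accuracy'.
    Peaks at 4 layers of width 64."""
    layers, width = arch
    return 1.0 - abs(layers - 4) * 0.1 - abs(width - 64) / 256

def mutate(arch):
    """Propose a nearby candidate architecture."""
    layers, width = arch
    return (max(1, layers + random.choice([-1, 0, 1])),
            max(8, width + random.choice([-16, 0, 16])))

best = (1, 8)  # initial candidate: (num_layers, layer_width)
for generation in range(20):
    batch = [mutate(best) for _ in range(10)]  # evaluate in groups of 10
    candidate = max(batch, key=score)
    if score(candidate) > score(best):
        best = candidate  # steer the next batch toward the promising one

print(best, round(score(best), 3))
```

In the real setting each `score` call is itself many hours of training, which is why the batching and steering matter: wasted evaluations are expensive.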
Google's conference in Tokyo also looked at AI applications in health, where there is a plethora of possibilities. Projects include applying machine learning to diagnosing eye diseases suffered by diabetics, to skin cancers and to lung cancer screening.
Google is also delving into the world of machine learning and genetics. For example, by studying people's genetic sequences and their long-term health outcomes, AI systems could make predictions not only about people's immediate health but also their longer-term susceptibility to certain diseases, well before any traces of them appear.
"One part of our health care research group is actually focused on the use of machine learning and AI for genetics," he says.
Chris Griffith attended Google's artificial intelligence conference in Tokyo courtesy of Google.
Source: The Australian