Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he conceived and relaunched a real estate business intelligence tool, building one of the leading business intelligence tools for property price analysis in 2012. He also writes, researches, and shares knowledge about Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data, the Python language, and more.
Google weaves AI and machine learning into core products at I/O 2017
At the opening keynote for Google I/O 2017, company leaders detailed how Google is integrating machine learning into its architecture, products, and services.
On Wednesday, at the 2017 Google I/O developer conference in Mountain View, CA, Google CEO Sundar Pichai said that the company is rethinking all of its products with a renewed focus on machine learning and artificial intelligence (AI).
One recent example of the company's use of machine learning is in Google Home, the company's smart speaker powered by Google Assistant, which uses deep learning to allow multiple users to share a single Google Home unit. Pichai also announced that the machine learning-driven Smart Reply feature is coming to Gmail on iOS and Android as well.
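To make the Smart Reply workflow concrete, here is a minimal sketch of the suggest-a-reply idea. Google's actual system uses a neural sequence model; this toy instead ranks a fixed set of candidate replies by word overlap with the incoming message, and every message and reply string below is a made-up placeholder for illustration.

```python
import string

# Hypothetical canned replies -- a real system generates or curates these.
CANDIDATE_REPLIES = [
    "Sounds good, see you then!",
    "Can we reschedule the meeting?",
    "Thanks, I'll take a look.",
    "Happy birthday!",
]

def tokenize(text):
    """Lowercase the text, strip punctuation, and return its set of words."""
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

def suggest_replies(message, candidates, k=3):
    """Rank candidate replies by word overlap with the message, best first."""
    msg_words = tokenize(message)
    ranked = sorted(candidates,
                    key=lambda reply: len(msg_words & tokenize(reply)),
                    reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    for reply in suggest_replies("Can you take a look at the report?",
                                 CANDIDATE_REPLIES):
        print(reply)
```

Word overlap is far too crude for production, but it shows the shape of the feature: score a pool of short responses against the incoming message and surface the top few.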
One of the big announcements at I/O was Google Lens, a set of vision-based computing capabilities that seeks to understand what a user is looking at with their smartphone's camera, and help them take action based on that information. For example, a user can take a picture of a flower, and Lens will tell the user what kind of flower it is, Pichai said. Users will also be able to point their phone's camera at a router's network label, and Lens will read the credentials and connect them to the network. Google Lens will initially be rolled out to Google Assistant and Google Photos in the coming weeks.
At last year's I/O, Pichai spoke about how computing was moving from mobile-first to AI-first, and that theme continued in 2017. Pichai said that Google is rethinking its computational architecture to build "AI-first data centers."
Another infrastructure initiative is the Cloud Tensor Processing Unit (TPU), optimized to accelerate machine learning workloads for both training and inference. Cloud TPUs are available on Google Compute Engine now.
Google is also consolidating its AI efforts and teams under Google.ai, a central point for the firm's research, tools, and applied AI. Additionally, Pichai said, Google is working on AutoML, a new approach in which neural networks are used to design other neural networks.
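The "networks that design networks" idea behind AutoML can be sketched in a few lines. Google's AutoML used a reinforcement-learning controller to propose architectures; the toy below substitutes plain random search over a hypothetical space of architecture choices, with a stand-in scoring function in place of actually training each candidate, purely to make the concept concrete.

```python
import random

# Hypothetical architecture search space -- the choices and values here are
# invented for illustration, not Google's actual search space.
SEARCH_SPACE = {
    "layers": [2, 3, 4, 5],
    "units": [32, 64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the candidate and measuring validation accuracy.
    A real system would train a network here; this fake score just mildly
    prefers deeper, wider candidates so the search has something to optimize."""
    return 0.5 + 0.05 * arch["layers"] + 0.0005 * arch["units"]

def search(trials=20, seed=0):
    """Random search: sample architectures, keep the best-scoring one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = search()
    print(arch, round(score, 3))
```

The point is the division of labor: one program proposes model designs, another scores them, and the loop keeps the winner, which is the core loop any AutoML-style system automates.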
Google Assistant, Google's AI-based voice assistant, is now available on the iPhone, and is coming to Android TV as well. Users can also now type requests to Google Assistant on their smartphone, if they don't want to be overheard.
A new Google Assistant SDK will allow companies to build Google Assistant into whatever product they are building. At a glance, this seems like Google's answer to Amazon Lex. Additionally, Actions on Google will now support transactions like payments, identity management, account creation, and more.
Google Home itself got four new updates: Proactive Assistance, hands-free calling, free Spotify integration, and visual responses on smartphones and TVs.
Machine learning will also power much of Google Photos. At I/O 2017, Google leaders announced machine learning-based features like Suggested Sharing, Shared Libraries, and the ability to make physical photo books out of a user's images in Google Photos.
Dave Burke, Google's vice president of engineering for Android, also took the stage at I/O to give an update on Android O. The upcoming Android OS version is based on two core features: Fluid Experiences and Vitals.
Fluid Experiences will bring picture-in-picture, notification dots, enhanced autofill for Android apps, and Smart Text Selection for easier copying and pasting, Burke said. Additionally, Google is launching TensorFlow Lite, a lightweight version of its open source machine learning library for mobile applications, and a new neural network API.
Vitals is focused on keeping Android phones secure and healthy, working to maximize power and performance. One new feature, Google Play Protect, scans apps for malicious code, Burke said. The Play Console Dashboard will analyze apps for battery drain, their tendency to cause crashes, and how they affect the speed of the UI. Additionally, Android Studio Profilers are new tools that help developers understand how their apps affect a phone's performance.
The Kotlin programming language is now officially supported on Android. Google is also launching Android Go, a lightweight configuration of the OS that optimizes the latest Android release to run smoothly on entry-level devices, along with new lightweight apps and a custom version of the Play Store.
Standalone virtual reality (VR) headsets are coming soon to Google Daydream through partnerships between Google and manufacturers like HTC and Lenovo. Google is also launching new VR and AR experiences for education deployments.