Google has announced several updates to Firebase, its mobile development platform. Notably, it is launching new capabilities for ML Kit, the machine learning SDK that offers ready-to-use on-device and cloud-based APIs, with support for custom models. The new capabilities, launched in beta, are an On-device Translation API, an Object Detection & Tracking API, and AutoML Vision Edge.
Google announced the updates at Google I/O, its annual developer event, where the company typically makes several AI-related announcements.
With the On-device Translation API, app developers can access offline models for fast, dynamic translation of text into 58 languages. It uses the same ML models that power Google Translate. The Object Detection and Tracking API lets an app locate and track, in real time, the most prominent object in a live camera feed. IKEA, for instance, used the new API to build a mobile experience where users can take photos of household items to find the product, or similar items, in the retailer's online catalog.
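To give a sense of what the beta looked like to developers, here is a minimal Kotlin sketch of the On-device Translation API as it shipped in the 2019-era Firebase ML Kit SDK. The language pair, function name, and logging are illustrative assumptions, not code from Google's announcement:

```kotlin
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

// Hypothetical helper: translate a string from English to Spanish on-device.
fun translateGreeting() {
    val options = FirebaseTranslatorOptions.Builder()
        .setSourceLanguage(FirebaseTranslateLanguage.EN)
        .setTargetLanguage(FirebaseTranslateLanguage.ES)
        .build()
    val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

    // The offline model is downloaded once; after that, translation
    // runs entirely on the device with no network round trip.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate("Hello, world!")
                .addOnSuccessListener { translated -> println(translated) }
        }
}
```

The download-then-translate shape is the key design point: the app pays a one-time model download per language pair, and every subsequent translation is local and fast.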
Meanwhile, with AutoML Vision Edge, app developers can create custom image classification models. For example, you could build an app that identifies different types of food or different species of animals. Developers upload their training data to the Firebase console and use Google's AutoML technology to build a custom TensorFlow Lite model that runs locally on the end user's device.
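Once such a model is trained, ML Kit can run it on-device through an AutoML image labeler. The sketch below follows the beta-era class names as documented at launch; the asset path, confidence threshold, and function name are illustrative assumptions and should be checked against the current SDK:

```kotlin
import com.google.firebase.ml.common.modeldownload.FirebaseAutoMLLocalModel
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

// Hypothetical helper: label an image with a bundled AutoML Vision Edge model.
fun labelFood(image: FirebaseVisionImage) {
    // Point ML Kit at the AutoML-generated model manifest shipped in app assets.
    val localModel = FirebaseAutoMLLocalModel.Builder()
        .setAssetFilePath("model/manifest.json")  // assumed asset location
        .build()
    val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.6f)  // illustrative threshold
        .build()
    val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(options)

    labeler.processImage(image)
        .addOnSuccessListener { labels ->
            for (label in labels) {
                println("${label.text}: ${label.confidence}")
            }
        }
}
```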
In addition to the ML Kit enhancements, Google announced several other Firebase updates. For instance, it is expanding Firebase Performance Monitoring, in beta, to web apps, and introducing a new audience builder in Google Analytics for Firebase.
Google is also adding support for Collection Group queries in Cloud Firestore, its fully managed NoSQL database, and releasing a new Cloud Functions emulator.
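Collection Group queries let an app query across every subcollection with a given name, regardless of which parent document contains it. A minimal Kotlin sketch with the Firestore Android SDK, where the "reviews" subcollection and "rating" field are illustrative assumptions:

```kotlin
import com.google.firebase.firestore.FirebaseFirestore

// Hypothetical helper: fetch highly rated reviews across all parent documents.
fun topReviews() {
    val db = FirebaseFirestore.getInstance()
    // Queries every subcollection named "reviews" in the database at once —
    // previously this required a separate query per parent document.
    db.collectionGroup("reviews")
        .whereGreaterThanOrEqualTo("rating", 4)
        .get()
        .addOnSuccessListener { snapshot ->
            snapshot.documents.forEach { println(it.id) }
        }
}
```

Note that a collection group query requires a matching collection-group index, which the Firestore console can create for you.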
Firebase developers will also benefit from new, configurable velocity alerts in Firebase Crashlytics, as well as improvements to Firebase Test Lab.