Google launched ML Kit at I/O last year with the mission of making machine learning simple for every developer.
ML Kit has enabled thousands of developers to create new and exciting experiences. More importantly, user engagement with features powered by ML Kit is growing by more than 60% per month.
Google has been working with the following companies and their apps: Air Cognizer, Adidas, Domino's, IKEA, VSCO, Gradeup School, Fishbrain, WPS Office, TurboTax, and many others.
Many new features were introduced at I/O this past May. The Object Detection and Tracking API identifies the prominent object in an image and tracks it in real time. Pair this API with a cloud solution such as Google’s Product Search API to create a real-time visual search experience. Pass an image or video stream to the API and it returns the coordinates of the primary object along with a coarse classification, then provides a handle for tracking the object’s coordinates over time.
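As a rough illustration, the detection flow above might look like the following with the Firebase ML Kit Android SDK of that era (a hedged sketch, not the definitive integration; class and package names may differ in the SDK version you depend on):

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

// STREAM_MODE tracks the prominent object across video frames and
// assigns a tracking ID; enableClassification() adds the coarse
// category mentioned above.
val options = FirebaseVisionObjectDetectorOptions.Builder()
    .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
    .enableClassification()
    .build()

val detector = FirebaseVision.getInstance().getOnDeviceObjectDetector(options)

fun detect(image: FirebaseVisionImage) {
    detector.processImage(image)
        .addOnSuccessListener { objects ->
            for (obj in objects) {
                val box = obj.boundingBox              // coordinates of the object
                val trackingId = obj.trackingId        // stable across frames in STREAM_MODE
                val category = obj.classificationCategory // coarse classification
            }
        }
        .addOnFailureListener { e -> /* handle the error */ }
}
```

In a visual search flow, the cropped bounding box would then be sent to a product search backend for fine-grained matching.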
Several partners, including Adidas, have built experiences powered by this API.
The On-device Translation API allows you to use the same offline models that support Google Translate to provide fast text translations into 58 languages. This API can be used to communicate with those who do not understand a specific language or to translate user-generated content.
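A minimal sketch of this translation flow, assuming the Firebase ML Kit natural-language SDK of that era (verify class names against your SDK version; the EN-to-ES language pair is just an example):

```kotlin
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

// Configure a translator for one source/target language pair.
val options = FirebaseTranslatorOptions.Builder()
    .setSourceLanguage(FirebaseTranslateLanguage.EN)
    .setTargetLanguage(FirebaseTranslateLanguage.ES)
    .build()
val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

// The offline model is downloaded once; after that, translation
// runs entirely on-device, with no network round trip.
translator.downloadModelIfNeeded()
    .addOnSuccessListener {
        translator.translate("Hello, world!")
            .addOnSuccessListener { translated -> /* show translated text */ }
            .addOnFailureListener { e -> /* handle the error */ }
    }
```

Because the model lives on the device, the same translator keeps working offline once the initial download completes.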
Using AutoML Vision Edge, you can now create custom image classification models tailored to your specific needs, like identifying types of food or distinguishing between species of animals. Just upload your training data to the Firebase console and use Google’s AutoML to build a custom TensorFlow Lite model that runs locally on your user’s device. Alternatively, if collecting a training dataset is the hard part, Google’s open source app can serve as a starting point for gathering one on-device.
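Once the model is trained and bundled with the app, loading it could look roughly like this (a hedged sketch against the Firebase ML Kit SDK of that era; the asset path `automl/manifest.json` is a placeholder for your exported model, and package names may differ in your SDK version):

```kotlin
import com.google.firebase.ml.common.modeldownload.FirebaseAutoMLLocalModel
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

// Point at the AutoML-exported TensorFlow Lite model bundled in assets.
val localModel = FirebaseAutoMLLocalModel.Builder()
    .setAssetFilePath("automl/manifest.json") // placeholder path
    .build()

// Only surface labels the custom model is reasonably confident about.
val labelerOptions = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.5f)
    .build()

val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(labelerOptions)

fun classify(image: FirebaseVisionImage) {
    labeler.processImage(image)
        .addOnSuccessListener { labels ->
            for (label in labels) { /* label.text, label.confidence */ }
        }
        .addOnFailureListener { e -> /* handle the error */ }
}
```

Because the TensorFlow Lite model runs locally, classification works offline and no image ever leaves the device.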
You can learn more at https://g.co/mlkit or visit Firebase to get started.
Originally posted on the Google Developers Blog: https://developers.googleblog.com/2019/05/new-ml-kit-features-easily-bring.html