June 1, 2017
This could be the year both AI and machine learning hit their stride. Of late there has been a lot of noise around these technologies, and Google, Microsoft and Amazon are leading the pack with a variety of products that integrate AI and machine learning.
One thread from Google is that they are rethinking how to integrate these two technologies into their grand scheme. That has just shown up in their Google Home platform. Google's smart speaker, powered by Google Assistant, uses deep learning to allow multiple users to share a single Google Home unit.
Another product on the edge is Google Lens. It is genuinely cutting-edge because it tries to comprehend what the user is looking at through their smartphone.
The device is not just a camera. It is a vision-based computing platform, much different from a typical lens. In fact, it really is a mini-computer tied to a vision-sensing platform. And it is smart. Using AI and deep learning, it can actually recognize a subject. For example, if it is focused on a flower, it can recognize the type of flower and let the user know.
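The final recognition step of a system like this can be pictured as comparing the features a trained network extracts from the image against known classes and picking the closest match. Here is a toy sketch of that idea; the class labels, feature vectors, and nearest-centroid rule are all invented for illustration and are nothing like Lens's actual deep networks.

```python
# Toy sketch of the recognition step: compare an image's feature vector
# against per-class reference vectors and pick the nearest one. Real
# systems use deep convolutional networks; these numbers are made up.
import math

# Hypothetical feature vectors a trained network might emit per class.
CLASS_CENTROIDS = {
    "tulip": [0.9, 0.1, 0.2],
    "rose":  [0.2, 0.8, 0.3],
    "daisy": [0.1, 0.2, 0.9],
}

def classify(features):
    """Return the class whose reference vector is nearest to `features`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CLASS_CENTROIDS, key=lambda c: dist(features, CLASS_CENTROIDS[c]))

print(classify([0.85, 0.15, 0.25]))  # a vector close to "tulip"
```

The point of the sketch is only the shape of the pipeline: extract features, compare against learned classes, report the best match to the user.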
On the wireless side, it can recognize wireless devices such as a router. Point it at the router and it can connect to it, although a password handshake is, of course, required first.
Google is moving from a mobile-first model to an AI-first paradigm. One edge-of-the-envelope application is AutoML, a new software platform that brings neural networks (NNs) to bear on NN development. It hints at the sci-fi paradigm in which NNs build other NNs and become self-aware (isn't that the premise of the Skynet system in the Terminator movies?). A bit futuristic, but the end result of AI and NNs coming together may well be the beginning of self-aware machines (although I doubt we are anywhere close to Skynet yet). Google is releasing a new NN API as well.
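The core loop behind "NNs building NNs" is an outer search that proposes candidate architectures and keeps whichever one scores best. Google's AutoML uses a controller network trained with reinforcement learning; the toy sketch below substitutes plain random search, and its `evaluate()` function is a deterministic stand-in for actually training a candidate network and measuring validation accuracy.

```python
# Toy sketch of automated architecture search: an outer loop proposes
# architectures (lists of hidden-layer widths) and keeps the best one.
# evaluate() is a stand-in for real training; its scoring rule is invented.
import random

def evaluate(layers):
    """Pretend validation score: rewards capacity up to a budget of 256
    total units, then penalizes excess parameters."""
    params = sum(layers)
    return min(params, 256) / 256 - 0.001 * max(0, params - 256)

def search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Propose a random architecture: 1-3 hidden layers, 16-256 units each.
        candidate = [rng.choice([16, 32, 64, 128, 256])
                     for _ in range(rng.randint(1, 3))]
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

arch, score = search()
print(arch, round(score, 3))
```

Swapping random search for a learned controller that is rewarded when its proposals score well is, in rough outline, the step from this sketch toward what AutoML actually does.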
And, on the mobile front, Google is developing an AI-powered Gmail reply platform that will run on iOS and Android. The platform uses AI to mine the user's data and suggest replies to incoming email. If, for example, it senses a date or time request, it checks the user's calendar and offers windows when the user isn't already booked. While this isn't a particularly sophisticated machine learning application, it learns from the features the user actually uses: the more a feature gets used, the more "instinctive" the responses become.
One could argue that this is simply a database search with the more frequent data tagged for first recall. But it is more than that, really. The algorithms are fuzzy and decisions are made on more than just statistics.
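To make the distinction concrete, here is a minimal sketch contrasting the two approaches: a plain frequency lookup that always returns the most-used reply, versus a fuzzy scorer that blends usage counts with how well each candidate matches the incoming message. The reply texts, counts, and scoring weights are all invented; this is not Google's algorithm.

```python
# Contrast: database-style recall (most frequent reply wins) vs. a fuzzy
# score that also weighs similarity to the incoming message. Invented data.
from difflib import SequenceMatcher

REPLIES = {  # candidate reply -> times the user has picked it before
    "Sounds good, see you then.": 14,
    "Can we do Thursday instead?": 6,
    "Thanks, I'll take a look.": 9,
}

def by_frequency():
    """Plain database-style recall: most frequently used reply first."""
    return max(REPLIES, key=REPLIES.get)

def fuzzy_suggest(message):
    """Blend text similarity to the message with past usage frequency."""
    total = sum(REPLIES.values())
    def score(reply):
        sim = SequenceMatcher(None, message.lower(), reply.lower()).ratio()
        return 0.8 * sim + 0.2 * REPLIES[reply] / total
    return max(REPLIES, key=score)

print(by_frequency())
print(fuzzy_suggest("Does Thursday work for you?"))
```

The frequency lookup ignores the message entirely; the fuzzy version can surface a less-used reply when it fits the context better, which is the "more than just statistics" behavior the paragraph above describes.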
Google is on the edge here. While some of this may seem unsophisticated, remember that only recently have hardware resources scaled to the point where AI functionality can become part of smaller, lighter and lower-power systems.