- Machine learning (ML) has historically driven value within analytics projects, helping companies build better predictive models through iterative learning.
- But, as showcased at Google I/O last week, an emerging set of experience-driven ML APIs points to enterprise use cases that both build upon and grow beyond the confines of big data.
Dear Google: What have you done with my favorite trade show? Sure, the change in venue from Moscone West to the Shoreline Amphitheatre was huge. But, that speaks to Google’s collegiate beginnings and stance on corporate responsibility. After all, the Shoreline venue was built to resemble the Grateful Dead’s Steal Your Face logo and rests atop a landfill. That makes sense. What I find jarring, however, is the company’s shift away from sexy reference hardware and new Android OS sweets toward smokier, cloudier ideas like artificial intelligence (AI) and machine learning (ML).
Fondly now I think back on the heady days when Google used its annual Android developer conference to unveil one groovy toy after another. It was a simple and highly effective equation: put enticing hardware (and some good software and services, I might add) into the hands of developers, and they’ll build software for that platform. This year, attendees didn’t even get a new Android unveil; binaries for Android N arrived well in advance of the show! And the greatly anticipated announcement of a merged Chrome OS and Android platform never got its big on-stage reveal during the event… but it is happening, rest assured.
Instead, attendees were shown forthcoming backend services that were in many ways late and reactive, not leading moves – by outward appearance, anyway. Google showcased proposed Apple iMessage and FaceTime killers (Google Allo and Duo) and a hopeful Microsoft Cortana killer (Google Assistant). Those in and of themselves were not all that groundbreaking – just more Googleiness. The ‘smart’ deep learning algorithms underlying the user experience of these products (speech recognition, photo tagging, personal assistance, etc.) are important because they point at capabilities that are now available to all developers. I’m speaking of Google’s TensorFlow technology, a library for building deep learning computational graphs, which the company open-sourced under the Apache 2.0 license last November.
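If the phrase “computational graph” sounds abstract, here is a minimal, purely illustrative sketch of the idea in Python. This is hypothetical toy code, not TensorFlow’s actual API: you describe a computation as a graph of nodes first, and evaluate it later.

```python
# Toy sketch of a computational graph -- the abstraction at the heart
# of TensorFlow. Illustrative only; TensorFlow's real API differs.

class Node:
    """A graph node: either a constant leaf or an operation over inputs."""
    def __init__(self, op=None, inputs=(), value=None):
        self.op = op          # callable applied to evaluated inputs
        self.inputs = inputs  # upstream Node objects
        self.value = value    # payload for constant (leaf) nodes

    def eval(self):
        if self.op is None:   # leaf node: just return its value
            return self.value
        args = [n.eval() for n in self.inputs]
        return self.op(*args)

def const(v):
    return Node(value=v)

def add(a, b):
    return Node(op=lambda x, y: x + y, inputs=(a, b))

def mul(a, b):
    return Node(op=lambda x, y: x * y, inputs=(a, b))

# Build a graph for (2 * 3) + 4; nothing runs until eval() is called.
graph = add(mul(const(2), const(3)), const(4))
print(graph.eval())  # 10
```

The separation between building the graph and running it is what lets a framework like TensorFlow optimize the computation and ship it off to specialized hardware.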
Two weeks ago, Google added its natural language processing (NLP) module, SyntaxNet, to TensorFlow, and brought the same functionality to its recently released Cloud Machine Learning framework on Google Cloud Platform. Developers now have learning algorithms covering voice, image and text – the three key modes of communication. And at the show itself, the company unveiled its Tensor Processing Unit (TPU), a custom ASIC built specifically to scale deep learning workloads. Google is clearly leaning hard on its AI prowess. But why?
Well, as it turns out, the libraries within TensorFlow solve the same problems we see within Gmail, Allo, Duo, Assistant, etc. But they can also solve very different problems. As crazy as it sounds, the same training algorithms used to identify cats within Google Photos can be applied to identifying patients at risk for diabetic retinopathy. I imagine they could also be put to use identifying stress fractures in an airplane fuselage, or spotting long-term microclimate trends for a given location.
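The reason this works is that a training algorithm never knows what its labels mean; only the data changes between tasks. A toy sketch (my own illustration, not Google’s code) of a label-agnostic logistic-regression trainer makes the point:

```python
import numpy as np

def train_classifier(features, labels, lr=0.1, epochs=200):
    """Logistic-regression trainer. It knows nothing about what the
    labels represent -- cats, at-risk retinas, or fuselage cracks."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probabilities
        grad = p - labels                 # gradient of log loss w.r.t. z
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, features):
    return (features @ w + b) > 0

# Toy "cat vs. not-cat" data: two separable clusters of feature vectors.
# Swap in retina features and the exact same code trains a screening model.
rng = np.random.default_rng(1)
cats = rng.normal(loc=+1.0, size=(50, 4))
other = rng.normal(loc=-1.0, size=(50, 4))
X = np.vstack([cats, other])
y = np.concatenate([np.ones(50), np.zeros(50)])

w, b = train_classifier(X, y)
accuracy = (predict(w, b, X) == y).mean()
```

Deep networks generalize this idea: the architecture and training loop stay fixed while the dataset, and therefore the learned task, changes.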
The sky’s the limit, quite literally. That is perhaps why Google CEO Sundar Pichai said during this year’s I/O keynote that he thinks we’re moving from a mobile-first world to an AI-first one. Obviously, Google I/O itself has changed in step with that vision. All we need now is a set of enterprise-focused libraries and training sets that can be applied at the application level rather than at the level of ML and AI science.