Google I/O 2018: Did Google AI Just Pass the Turing Test?

B. Shimmin

Summary Bullets:

  • At Google I/O this week, Sundar Pichai walked attendees through a number of impressive implementations of AI, one of which showed how Google Assistant could book a haircut and make a dinner reservation via an unnervingly convincing conversation between human and machine.
  • What happens, then, if that assistant eventually learns how to pass itself off as you?

You know it’s spring when the cherry blossoms appear in force, the birds start singing in unison, and Google CEO Sundar Pichai takes the stage at Google I/O and nonchalantly demonstrates some new bit of technology that simultaneously manages to amaze and terrify. I’m talking about Google Duplex, an interesting blend of natural language understanding (NLU), deep learning (DL), and text-to-speech technology designed to do one thing: use AI to emulate at least one half of an actual human conversation.

During Sundar’s demonstration, Google Duplex, which was running as a part of Google Assistant, made a couple of phone calls, first arranging for a haircut. Using nothing more than a simple, spoken instruction – “Make me a haircut appointment on Tuesday morning anytime between 10 and 12pm” – Google Duplex dialed up a supposedly unsuspecting human at the hair salon and proceeded to negotiate for a time slot. The technology even went so far as to clarify the type of haircut desired, presumably drawing upon the contextual awareness of the user’s past haircut appointments.
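A system like this presumably begins by converting the spoken instruction into a structured intent before it ever dials the phone. Here is a minimal, purely hypothetical sketch of that first step — the class name, fields, and rule-based parsing are all my own invention; a production system would use a trained NLU model rather than regular expressions:

```python
import re
from dataclasses import dataclass

@dataclass
class AppointmentIntent:
    service: str
    day: str
    window_start: int  # hour, 24-hour clock
    window_end: int

def parse_instruction(text: str) -> AppointmentIntent:
    """Toy rule-based parser; a real assistant would use a trained NLU model."""
    day = re.search(
        r"\b(Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)\b", text
    ).group(1)
    hours = re.search(r"between (\d+) and (\d+)", text)
    service = "haircut" if "haircut" in text else "unknown"
    return AppointmentIntent(service, day, int(hours.group(1)), int(hours.group(2)))

intent = parse_instruction(
    "Make me a haircut appointment on Tuesday morning anytime between 10 and 12pm"
)
print(intent)
# AppointmentIntent(service='haircut', day='Tuesday', window_start=10, window_end=12)
```

Everything after this step — dialing, turn-taking, negotiating the slot — is where the hard AI lives; the point of the sketch is only that the assistant's goal is structured data, not the sentence itself.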

The second demonstration concerned the arrangement of a dining reservation, and it was all the more impressive because the call did not go according to plan. To begin, the again supposedly unsuspecting human did not accurately hear Google Duplex's original request, which prompted the software to clarify things. It did so with a great deal of patience (more than I would have shown) and, I might add, with a very nuanced use of audible social cues such as the word "ummmm" to soften its correction of the human's failure to catch the original request.

Equally impressive, in that second demo, the Google Duplex AI learned that the restaurant would not accept a dining reservation for the desired date/time. So, what did it do? It reasoned that the next logical step would be to find out if the restaurant would be too busy at that time for a walk-in visit, ascertaining as much via some more ‘off-script’ dialog.
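That pivot — falling back to a related goal when the original request is refused — can be thought of as a dialog policy. The sketch below is a deliberately crude, hypothetical state machine of my own; Google has described Duplex as using a recurrent neural network over the full conversation history, not hand-written rules like these:

```python
def dialog_policy(state: dict) -> str:
    """Toy fallback policy: if the reservation is refused, pivot to asking
    about walk-in wait times instead of simply hanging up."""
    if state.get("reservation_confirmed"):
        return "confirm_and_end"
    if state.get("reservation_refused"):
        # The 'off-script' move: the primary goal failed, so pursue
        # the next most useful piece of information for the user.
        return "ask_walk_in_wait"
    if state.get("clarification_needed"):
        return "restate_request"
    return "make_request"
```

The interesting part of the demo was precisely that this branch logic appeared learned rather than scripted — the system inferred that wait-time information was the next most useful thing to ask for.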

Obviously, computer scientist Alan Turing foresaw all of this back in 1950 when he proposed the Turing test, a means of evaluating whether or not a machine can pass for a human in conversation. From what I saw on stage yesterday, Google Duplex would pass this test with flying colors. It would not be the first computer program operating in a very specific (that is, constrained) domain to do so. Yet what is clearly a step forward here is that Google Duplex can do so using spoken language as opposed to just text on a screen.

If this sounds ominous, buckle up. Google is likely to take the idea of an AI masquerading as a human even further later this year, when it rolls out an iteration of the Google Assistant voice that replicates the tenor and tone of American R&B singer John Legend. The default female voice used by Google Assistant right now is actually based on a real person (code-named Holly), but she had to record a massive number of words and phrases in order to sound moderately convincing (okay, "moderately" is a gracious depiction of what is unmistakably a computer voice, given the way words and phrases are crammed together).

Using yet more AI (Google WaveNet), Sundar demonstrated at the show how John Legend's voice could fluidly and convincingly tell you about your upcoming appointments. It's not a great leap to assume that it could also call on your behalf to book a haircut. It's also not a huge leap to imagine that your next Google Assistant voice could be none other than your own. Applying deep learning invented by the Google DeepMind group, WaveNet freed Mr. Legend from having to spend hours and hours in front of a microphone in order to have his voice digitized for conversational purposes.
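WaveNet's central idea, as DeepMind has described it, is to generate raw audio autoregressively — one sample at a time, each conditioned on the samples that came before it. The toy loop below illustrates only the shape of that process; the "model" here is a trivial extrapolation stand-in, not a neural network, and every name in it is my own:

```python
def toy_model(history: list[float]) -> float:
    """Stand-in for a trained WaveNet: predicts the next audio sample from
    prior samples. A real WaveNet uses stacks of dilated causal convolutions."""
    if len(history) < 2:
        return 0.1  # arbitrary seed value
    # Crude linear extrapolation of the last two samples, clipped to [-1, 1].
    nxt = 2 * history[-1] - history[-2]
    return max(-1.0, min(1.0, nxt))

def generate(n_samples: int) -> list[float]:
    samples: list[float] = []
    for _ in range(n_samples):
        # Each new sample is conditioned on everything generated so far.
        samples.append(toy_model(samples))
    return samples

audio = generate(16000)  # one "second" of audio at a 16 kHz sample rate
```

Generating tens of thousands of samples per second this way is exactly why the original WaveNet was so expensive to run, and why making it fast enough for Assistant was itself an engineering feat.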

This is where the promise and peril of AI really come into play. By operationalizing AI to such a degree that you could plausibly replace Google Assistant's voice with your own (made possible by Google WaveNet), and by creating AI algorithms actually capable of passing the Turing test audibly (already available with Google Duplex), we, as a society, will find ourselves facing some interesting questions.

Will technologies like Google Duplex and WaveNet turn us all into the thing we perhaps hate most in this world, namely, robo-dialers? Will they replace human interaction? Will they somehow further erode our sense of truth and undermine our trust in one another (hello, is this a computer I'm speaking to)? And more concretely, what are the legal ramifications of a piece of software that can use opaque (forever unknowable) DL algorithms to negotiate legally binding contracts on our behalf?

What happens when the computer sitting in the palm of your hand right now can convincingly masquerade as you?

What do you think?
