How It Works
How does a computer understand you when you talk to it using everyday language?
Our approach was to use billions of lines of dialogue to teach an AI how real human conversations flow.
Once the AI has learned from that data, it can predict how likely one statement is to follow another as a response. In these demos, the AI simply treats what you type as an opening statement and scans a pool of many possible responses to find the ones most likely to follow.
The technique we're using to teach computers language is called machine learning. Google's Machine Learning Glossary defines machine learning as:
"...a program or system that builds (trains) a predictive model from input data."
What does that mean for us?
Input data: The input data is a billion pairs of statements, where the second statement is a response to the first one.
Predicting: We are predicting the response to a question or a statement. After seeing all those pairs of sentences and responses, the AI learns to identify what a good response might look like.
Model: The trained system that is used for making predictions. After training, our model is able to pick the most likely response from a pool of options.
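To make the prediction step concrete, here is a minimal sketch of "pick the most likely response from a pool." The tiny hand-made 3-d vectors stand in for the learned sentence embeddings a real model would produce; the sentences, numbers, and query vector are all illustrative assumptions, not the actual system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hand-made vectors standing in for learned sentence embeddings.
# A real model maps each sentence to a high-dimensional vector instead.
response_pool = {
    "Yes, I love coffee.":        [0.9, 0.1, 0.0],
    "Trains run on tracks.":      [0.1, 0.8, 0.1],
    "The moon orbits the Earth.": [0.0, 0.2, 0.9],
}

query_vector = [0.8, 0.2, 0.1]  # pretend embedding of "Do you like coffee?"

# The "prediction": score every candidate against the input, keep the best.
best = max(response_pool, key=lambda s: cosine(query_vector, response_pool[s]))
print(best)
```

Running this picks "Yes, I love coffee." because its stand-in vector points in nearly the same direction as the query's, which is the whole trick: good responses score high, unrelated ones score low.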
Talk to Books
In Talk to Books, when you type in a question or a statement, the model looks at every sentence in over 100,000 books to find the responses that would most likely come next in a conversation. The response sentence is shown in bold, along with some of the text that appeared next to the sentence for context.
Mastering Talk to Books may take some experimentation. Although it has a search box, its objectives and underlying technology are fundamentally different from those of a traditional search engine. Rather than a finely polished tool that weighs the wide range of standard quality signals, it is a demonstration of research that lets an AI find statements that look like probable responses to your input. You may need to play around with it to get the most out of it.
Try our sample queries to get a feel for how Talk to Books works. Then play around with your own ideas. Use it to explore topics you are interested in. Part of the fun is coming up with queries that help you discover interesting perspectives and books you may want to read.
Talk to Books is more of a creative tool than a way to find specific answers. In this experiment, we don't take into account whether the book is authoritative or on-topic. The model just looks at how well each sentence pairs up with your query. Sometimes it finds responses that miss the mark or are taken completely out of context.
If Talk to Books isn't finding responses you like, you may get better results by using different words or simply more words. It often does better with full sentences rather than just keywords or short phrases.
Semantris
Semantris is a word association game that uses this same technology. Each time you enter a clue, the AI looks at all the words in play and chooses the ones it thinks are most related. Because the AI was trained on conversational text spanning a large variety of topics, it is able to make many types of associations.
In Semantris' Arcade mode, when the AI sorts the list, the most related words move to the bottom. In the example above, you can see that it considers "Moon" a better conversational response to "Sun" than "Teacher".
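A minimal sketch of that sort, with hand-made word vectors standing in for the learned embeddings (the words and numbers here are illustrative assumptions):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Stand-in embeddings: nearby vectors mean "likely conversational neighbors".
word_vectors = {
    "Teacher": [0.1, 0.9, 0.2],
    "Moon":    [0.8, 0.1, 0.5],
    "Bread":   [0.2, 0.3, 0.9],
}
clue_vector = [0.9, 0.1, 0.4]  # pretend embedding of the clue "Sun"

# Arcade-style sort: least related first, most related at the bottom.
words_in_play = sorted(word_vectors, key=lambda w: cosine(clue_vector, word_vectors[w]))
print(words_in_play)  # the last entry is the AI's best match for the clue
```

With these stand-in vectors, "Moon" sorts to the bottom for the clue "Sun" while "Teacher" stays at the top, mirroring the behavior described above.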
Semantris is similar to other word association games where a person gives clues to help their teammate guess the correct words. However, in Semantris, you give your hints to an AI. Because the AI can sometimes have quirky responses, you'll need to experiment with different types of clues to learn how this AI thinks and to earn the highest scores. Try playing with slang, technical terms, pop culture references, synonyms, antonyms, and even full sentences.
Visit For Developers to dive deeper into the technology and use it in your own applications.
We Invite Your Feedback
You may find delightful responses you want to share with us, or have suggestions for improving these demos. You may also see surprising or confusing responses, or even responses that make you uncomfortable. These are raw research demos. They demonstrate the AI's capabilities and weaknesses, including how it can reflect human cognitive biases. If you'd like to learn more about bias in language understanding models, visit our For Developers page.
These are imperfect experiments and we are learning from them. We invite your feedback as it helps us improve. Click the exclamation point icon at the top right of each experience to send us your feedback.