Natural Language Understanding

For machine intelligence to be truly useful, it should excel at tasks that humans are good at, such as natural language understanding. The Google Brain team’s language understanding research focuses on developing learning algorithms capable of understanding language, so that machines can translate text, answer questions, summarize documents, and interact with humans conversationally.

Our research in this area started with neural language models and word vectors. Our word-vector model, word2vec, which learns to map words to vectors, was open-sourced in 2013 and has since gained widespread adoption in the research community and in industry. Our work on language models has also made great strides (see this and this) in improving state-of-the-art prediction accuracy.
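The core idea behind word2vec-style training is that each word predicts the words appearing in a small window around it. Below is a minimal sketch of that first step, extracting (center, context) training pairs from a sentence; the function name `skipgram_pairs` and the window parameter are our illustrative choices, not part of the word2vec release.

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) pairs for tokens within `window` positions."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a word is not its own context
                pairs.append((center, tokens[j]))
    return pairs

sentence = "machines learn to map words to vectors".split()
pairs = skipgram_pairs(sentence, window=1)
# Each pair becomes one training example: the center word's vector is
# adjusted to predict the context word.
```

A full implementation would then train an embedding matrix on these pairs (e.g. with negative sampling), so that words appearing in similar contexts end up with nearby vectors.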

We also conduct fundamental research that has led to a series of advances in using neural networks for end-to-end language (or language-related) tasks such as translation, parsing, speech recognition, image captioning, and conversation modeling. The underlying technology is the seq2seq framework, which now also powers SmartReply (and other products) at Google and is open-sourced in TensorFlow.
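The seq2seq framework pairs an encoder, which compresses a variable-length input sequence into a state, with a decoder, which generates the output autoregressively, one token per step, until an end-of-sequence token. The sketch below shows only that data flow, using a toy deterministic "model" (uppercasing tokens) in place of a trained neural network; all names here are illustrative assumptions, not the framework's API.

```python
EOS = "<eos>"  # end-of-sequence marker

def encode(tokens):
    # A real encoder (e.g. an RNN) would produce a fixed-size vector;
    # this sketch keeps the tokens themselves as the "state".
    return list(tokens)

def decode_step(state, prev_token):
    # Toy stand-in for one decoder step: emit the next token uppercased,
    # then EOS once the state is exhausted.
    if not state:
        return state, EOS
    return state[1:], state[0].upper()

def translate(tokens, max_len=20):
    state = encode(tokens)
    output, prev = [], None
    for _ in range(max_len):  # autoregressive decoding loop
        state, prev = decode_step(state, prev)
        if prev == EOS:
            break
        output.append(prev)
    return output

print(translate(["hello", "world"]))  # ['HELLO', 'WORLD']
```

In a trained model, `decode_step` would instead run the decoder network, conditioning on both the encoder state and the previously emitted token to choose the next output.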

Recent research highlights also include semi-supervised and unsupervised learning, multitask learning, learning to manipulate symbols, and learning with augmented logic and arithmetic.


Representative publications by Google Brain team members

Selected publications

Publications by year


Current Google Brain team members who work in this area