Google Brain Team's Mission

Our mission on the Brain team is to make machines intelligent and improve people’s lives. We pursue this through research in deep learning, a subfield of machine learning focused on building highly flexible models that learn their own features end-to-end and make efficient use of data and computation. This work has clear practical value: we’ve already deployed deep learning models across many Google products, and we are exploring the approach on many other problems, including in areas such as healthcare. Our expertise in systems also allows us to build tools that accelerate ML research and unlock its practical value for everyone.

Researchers on the Brain team have the freedom to set their own research agendas and determine their own level of engagement with existing products, choosing between more basic, methodological research and more applied work as needed to produce the most compelling results. Because many of the advances we develop today may take years to become useful, the team as a whole maintains a portfolio of projects across this spectrum. Our philosophy is that substantive progress on hard applications helps drive and sharpen the research questions we study, and in turn, scientific breakthroughs can spawn entirely new applications that are unimaginable today.

In 2012, our colleagues Alfred Spector, Peter Norvig, and Slav Petrov published a blog post and paper explaining Google’s hybrid approach to research. While some Google teams do take a hybrid approach, the Brain team’s approach is built on research freedom: since our researchers set their own agendas, much of the team focuses specifically on advancing the state of the art in machine learning research. In 2017 alone, our work was recognized at several of the top machine learning conferences: 23 of our papers were accepted at NIPS 2017 (a 42% acceptance rate, versus the conference average of 20%), 19 of our submissions were accepted at ICML 2017 earlier in the year (61%, versus the conference average of 25%), and 20 papers were accepted at ICLR 2017 (56%, versus the conference average of 40%).

Our researchers regularly collaborate with researchers at external institutions: fully a third of our papers in 2017 had one or more cross-institutional authors. Additionally, we host collaborators from academic institutions to enhance our own research and strengthen our connection to the external scientific community.

We firmly believe that openly disseminating research is critical to a healthy exchange of ideas, which in turn helps drive progress and innovation in the field as a whole. As such, we publish our research regularly in top machine learning venues, and we release our tools, like TensorFlow, as open source projects. Beyond simply disseminating our work, we also emphasize training new ML experts through internships and the Google AI Residency program.
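As a toy illustration of what that open-source release puts in anyone’s hands, here is roughly what a complete TensorFlow program looked like with the 1.x API current at the time. The data, constants, and hyperparameters below are invented for the example, not drawn from any Brain project.

```python
# A minimal sketch: fitting a linear model y = w*x + b by gradient descent
# with the TensorFlow 1.x API. All numbers here are made up for illustration.
import numpy as np
import tensorflow as tf

# Toy data: y = 3x + 1 plus a little noise.
xs = np.random.rand(100).astype(np.float32)
ys = 3.0 * xs + 1.0 + np.random.normal(scale=0.05, size=100).astype(np.float32)

x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0)
b = tf.Variable(0.0)

loss = tf.reduce_mean(tf.square(w * x + b - y))        # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op, feed_dict={x: xs, y: ys})
    print(sess.run([w, b]))  # should approach [3.0, 1.0]
```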

We also believe in the importance of clear and understandable explanations of the concepts in modern machine learning. Distill (distill.pub), an online technical journal launched by Brain team members Chris Olah and Shan Carter, provides a forum for exactly this. TensorFlow Playground is an interactive in-browser visualization created by the Google Brain team’s visualization experts to give people insight into how neural networks behave on simple problems, and PAIR’s deeplearn.js is an open-source, WebGL-accelerated JavaScript library for machine learning that runs entirely in your browser, with no installations and no backend.
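Playground itself runs in the browser and deeplearn.js is a JavaScript library, but the kind of tiny network Playground visualizes can be sketched in a few lines of TensorFlow. The toy dataset, layer sizes, and hyperparameters below are illustrative assumptions on our part, not taken from Playground’s implementation.

```python
# An illustrative TensorFlow 1.x sketch of a Playground-style tiny network:
# two input features, one small hidden layer, binary classification of a
# toy "inside the circle vs. outside" dataset.
import numpy as np
import tensorflow as tf

# Toy data: points inside a circle are class 1, points outside are class 0.
pts = np.random.uniform(-1.0, 1.0, size=(500, 2)).astype(np.float32)
labels = (np.sum(pts ** 2, axis=1) < 0.5).astype(np.float32).reshape(-1, 1)

x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.placeholder(tf.float32, shape=[None, 1])

hidden = tf.layers.dense(x, 4, activation=tf.nn.tanh)  # 4 hidden units
logits = tf.layers.dense(hidden, 1)                    # 1 output logit

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(0.05).minimize(loss)
accuracy = tf.reduce_mean(
    tf.cast(tf.equal(tf.cast(logits > 0, tf.float32), y), tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(train_op, feed_dict={x: pts, y: labels})
    print("accuracy:", sess.run(accuracy, feed_dict={x: pts, y: labels}))
```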

By virtue of being part of Google and Alphabet, the Google Brain team has resources and access to projects that would be impossible to find elsewhere. Our broad and fundamental research goals allow us to collaborate closely with many different teams and make unique contributions to products across the company. In 2017, we collaborated with our platforms team to develop Google Cloud TPUs, custom ASICs that accelerate machine learning, and our collaborations with product teams have successfully deployed our technology in numerous products and systems, including Search, Translate, Photos, and DeepMind’s AlphaGo.

We’re currently looking for researchers and software engineers. If you are interested in joining our team, apply here!