Having a machine learning agent interact with its environment requires true unsupervised learning, skill acquisition, active learning, exploration, and reinforcement. These ingredients of human learning are still not well understood, and they remain largely untapped by the supervised approaches that dominate deep learning today.

Our goal is to improve robotics via machine learning, and improve machine learning via robotics. We foster close collaborations between machine learning researchers and roboticists to enable learning at scale on real and simulated robotic systems.

We're exploring how to teach robots transferable skills by learning in parallel across many manipulation arms in our one-of-a-kind lab, purpose-built for machine learning research.

We're teaching robots to predict what happens when they move objects around, so they can learn about the world and make better, safer decisions without supervision. We're sharing our training data publicly to help advance the state of the art in this field. We're also bringing advances in deep learning to the exciting and demanding world of self-driving cars to improve their safety and reliability.

Representative publications by Google Brain team members


  • Robot Arm Grasping and Pushing. This dataset contains recordings of 650k robotic grasp attempts and 59k object-pushing interactions. Each grasp attempt is annotated with the success or failure of the grasp, and each push includes video and joint-angle sequences. The data was collected on real robots using several hundred distinct objects.
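A dataset annotated this way lends itself to simple aggregate analysis, such as measuring the grasp success rate per object. The sketch below is purely illustrative: the record type `GraspAttempt` and its field names are hypothetical stand-ins, not the dataset's actual schema or file format.

```python
# Hypothetical sketch: the record type and field names below are
# illustrative, not the dataset's actual schema.
from dataclasses import dataclass
from typing import List


@dataclass
class GraspAttempt:
    object_id: str
    grasp_success: bool  # annotated success/failure of the grasp


def success_rate(attempts: List[GraspAttempt]) -> float:
    """Fraction of grasp attempts annotated as successful."""
    if not attempts:
        return 0.0
    return sum(a.grasp_success for a in attempts) / len(attempts)


# Example usage with toy records:
attempts = [
    GraspAttempt("mug", True),
    GraspAttempt("mug", False),
    GraspAttempt("block", True),
    GraspAttempt("ball", True),
]
print(success_rate(attempts))  # 0.75
```

In practice such statistics are the starting point for the learning signal itself: the per-attempt success label is what a grasping policy is trained to predict and maximize.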

Current Google Brain team members who work on this area