
George Dahl
George Dahl received his Ph.D. from the University of Toronto under the supervision of Geoff Hinton, where he worked on deep learning approaches to problems in speech recognition, computational chemistry, and natural language text processing, including some of the first successful deep acoustic models. He has been a research scientist at Google on the Brain team since 2015. His research focuses on highly flexible models that learn their own features end-to-end and make efficient use of data and computation for supervised, unsupervised, and reinforcement learning. In particular, he is interested in applications to linguistic and perceptual data, as well as to chemical, biological, and medical data.
Google Scholar profile
Authored Publications
A mobile-optimized artificial intelligence system for gestational age and fetal malpresentation assessment
Ryan Gomes, Bellington Vwalika, Chace Lee, Angelica Willis, Joan T. Price, Christina Chen, Margaret P. Kasaro, James A. Taylor, Elizabeth M. Stringer, Scott Mayer McKinney, Ntazana Sindano, William Goodnight III, Justin Gilmer, Benjamin H. Chi, Charles Lau, Terry Spitz, Kris Liu, Jonny Wong, Rory Pilgrim, Akib Uddin, Lily Hao Yi Peng, Kat Chou, Jeffrey S. A. Stringer, Shravya Ramesh Shetty
Communications Medicine (2022)
A Loss Curvature Perspective On Training Instability in Deep Learning
Justin Gilmer, Behrooz Ghorbani, Ankush Garg, Behnam Neyshabur, David Cardoze
ICLR (2022)
Adaptive Gradient Methods at the Edge of Stability
Behrooz Ghorbani, David Cardoze, Jeremy Cohen, Justin Gilmer, Naman Agarwal, Shankar Krishnan
NeurIPS (2022) (to appear)
Machine learning guided aptamer discovery
Ali Bashir, Geoff Davis, Michelle Therese Dimon, Qin Yang, Scott Ferguson, Zan Armstrong
Nature Communications (2021)
Which Algorithmic Choices Matter at Which Batch Sizes? Insights From a Noisy Quadratic Model
Guodong Zhang, James Martens, Sushant Sachdeva, Chris Shallue, Roger Grosse
NeurIPS (2019)
Measuring the Effects of Data Parallelism on Neural Network Training
Chris Shallue, Jaehoon Lee, Jascha Sohl-Dickstein
Journal of Machine Learning Research (JMLR) (2018)
Peptide-Spectra Matching with Weak Supervision
Sam Schoenholz, Sean Hackett, Laura Deming, Eugene Melamud, Navdeep Jaitly, Fiona McAllister, Jonathon O'Brien, Bryson Bennett, Daphne Koller
arXiv (2018)
Relational inductive biases, deep learning, and graph networks
Peter Battaglia, Jessica Blake Chandler Hamrick, Victor Bapst, Alvaro Sanchez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andy Ballard, Justin Gilmer, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Jayne Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Matt Botvinick, Yujia Li, Razvan Pascanu
arXiv (2018)
Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Tachard Passos, Robert Ormandi, Geoffrey Hinton
ICLR (2018)