Hugo Larochelle

I currently lead the Google Brain group in Montreal. My main area of expertise is deep learning. My previous work includes unsupervised pretraining with autoencoders, denoising autoencoders, visual attention-based classification, and neural autoregressive distribution models. More broadly, I’m interested in applications of deep learning to generative modeling, reinforcement learning, meta-learning, natural language processing, and computer vision.

Previously, I was an Associate Professor at the Université de Sherbrooke (UdeS). I also co-founded Whetlab, which was acquired in 2015 by Twitter, where I then worked as a Research Scientist in the Twitter Cortex group. From 2009 to 2011, I was a member of the machine learning group at the University of Toronto, as a postdoctoral fellow under the supervision of Geoffrey Hinton. I obtained my Ph.D. at the Université de Montréal, under the supervision of Yoshua Bengio.

My academic involvement includes serving as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), as a member of the editorial board of the Journal of Artificial Intelligence Research (JAIR), and as program chair for the International Conference on Learning Representations (ICLR) in 2015, 2016, and 2017. I’ve also been an area chair for many editions of the NIPS and ICML conferences.

Finally, I have a popular online course on deep learning and neural networks, freely accessible on YouTube.

Google Publications

Previous Publications

  •   Domain-Adversarial Training of Neural Networks
      Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, Victor Lempitsky
      Journal of Machine Learning Research, vol. 17 (2016)

  •   MADE: Masked Autoencoder for Distribution Estimation
      Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
      Proceedings of the 32nd International Conference on Machine Learning (2015)

  •   An autoencoder approach to learning bilingual word representations
      Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, Amrita Saha
      Advances in Neural Information Processing Systems 27 (2014)

  •   Guest editors' introduction: Special section on learning deep architectures
      Samy Bengio, Li Deng, Hugo Larochelle, Honglak Lee, Ruslan Salakhutdinov
      IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 35 (2013), pp. 1795-1797

  •   Practical Bayesian optimization of machine learning algorithms
      Jasper Snoek, Hugo Larochelle, Ryan P. Adams
      Advances in Neural Information Processing Systems 25 (2012)

  •   Conditional Restricted Boltzmann Machines for Structured Output Prediction
      Volodymyr Mnih, Hugo Larochelle, Geoffrey E. Hinton
      Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence (2011), pp. 514-522

  •   The Neural Autoregressive Distribution Estimator
      Hugo Larochelle, Iain Murray
      Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (2011)

  •   Learning to combine foveal glimpses with a third-order Boltzmann machine
      Hugo Larochelle, Geoffrey E. Hinton
      Advances in Neural Information Processing Systems 23 (2010), pp. 1243-1251

  •   Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion
      Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol
      Journal of Machine Learning Research, vol. 11 (2010)

  •   Exploring strategies for training deep neural networks
      Hugo Larochelle, Yoshua Bengio, Jérôme Louradour, Pascal Lamblin
      Journal of Machine Learning Research, vol. 10 (2009)

  •   Classification using discriminative restricted Boltzmann machines
      Hugo Larochelle, Yoshua Bengio
      Proceedings of the 25th International Conference on Machine Learning (2008)

  •   Extracting and composing robust features with denoising autoencoders
      Pascal Vincent, Hugo Larochelle, Yoshua Bengio, Pierre-Antoine Manzagol
      Proceedings of the 25th International Conference on Machine Learning (2008)

  •   Zero-data learning of new tasks
      Hugo Larochelle, Dumitru Erhan, Yoshua Bengio
      Proceedings of the 23rd AAAI Conference on Artificial Intelligence (2008)

  •   An empirical evaluation of deep architectures on problems with many factors of variation
      Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, Yoshua Bengio
      Proceedings of the 24th International Conference on Machine Learning (2007)

  •   Greedy layer-wise training of deep networks
      Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle
      Advances in Neural Information Processing Systems 19 (2007)