Andrew Saxe

Appointment

  • CIFAR Azrieli Global Scholar 2020–2022
  • Learning in Machines & Brains

Institution

  • University of Oxford, Department of Experimental Psychology

Country

  • United Kingdom

Education

PhD (Electrical Engineering), Stanford University
MS (Electrical Engineering), Stanford University
BSE (summa cum laude, Electrical Engineering), Princeton University

About

The interactions of billions of neurons ultimately give rise to our thoughts and actions.

Remarkably, much of our behaviour is learned, beginning in infancy and continuing throughout our lifespan. Andrew Saxe aims to develop a mathematical toolkit for analyzing and describing learning in the brain and mind. His current focus is the theory of deep learning, a class of artificial neural network models inspired by the brain. Alongside this theoretical work, he collaborates closely with experimentalists to test these principles of learning empirically in biological organisms.
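
One strand of this theoretical work, exact solutions for learning dynamics in deep linear networks (Saxe, McClelland & Ganguli, 2014, listed below), can be illustrated numerically. The sketch below is a hypothetical minimal example, not code from that paper: it trains a two-layer linear network by plain gradient descent and prints how each singular mode of the target map is acquired, with stronger modes (larger singular values) learned first along sigmoidal trajectories.

import numpy as np

# Minimal illustrative sketch (all names and parameters here are
# assumptions for demonstration, not from the profile or the paper).
rng = np.random.default_rng(0)

d = 8                                    # input/output dimension
s_true = np.array([5.0, 3.0, 1.0])       # singular values of the target map
U = np.linalg.qr(rng.standard_normal((d, d)))[0][:, :3]
V = np.linalg.qr(rng.standard_normal((d, d)))[0][:, :3]
target = U @ np.diag(s_true) @ V.T       # target linear input-output map

h = 16                                   # hidden layer width
W1 = 0.01 * rng.standard_normal((h, d))  # small random initialization
W2 = 0.01 * rng.standard_normal((d, h))

lr, steps = 0.05, 2001
for t in range(steps):
    E = target - W2 @ W1                 # error in the composite map
    # Gradient descent on 0.5 * ||target - W2 W1||_F^2 (whitened inputs)
    W1 += lr * (W2.T @ E)
    W2 += lr * (E @ W1.T)
    if t % 400 == 0:
        # Project the learned map onto each target mode: values rise
        # sigmoidally from ~0 toward s_true, strongest mode first.
        learned = np.diag(U.T @ (W2 @ W1) @ V)
        print(t, np.round(learned, 3))

Running the sketch shows the staged, mode-by-mode learning that the 2014 paper characterizes analytically: the mode with singular value 5 saturates well before the mode with singular value 1.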


Awards

Wellcome-Beit Prize, Wellcome Trust, 2019

Sir Henry Dale Fellowship, Wellcome Trust & Royal Society, 2019

Robert J. Glushko Outstanding Doctoral Dissertations Prize, Cognitive Science Society, 2016

National Defense Science and Engineering Graduate (NDSEG) Fellowship, 2010

Relevant Publications

Saxe, A. M., McClelland, J. L., & Ganguli, S. (2019). A mathematical theory of semantic development in deep neural networks. Proceedings of the National Academy of Sciences, 116(23), 11537–11546. https://doi.org/10.1073/pnas.1820226116 

Earle, A. C., Saxe, A. M., & Rosman, B. (2018). Hierarchical subtask discovery with non-negative matrix factorization. In Y. Bengio & Y. LeCun (Eds.), International Conference on Learning Representations.

Advani, M.*, & Saxe, A. M.* (2017). High-dimensional dynamics of generalization error in neural networks. arXiv preprint.

Musslick, S., Saxe, A. M., Ozcimder, K., Dey, B., Henselman, G., & Cohen, J. D. (2017). Multitasking capability versus learning efficiency in neural network architectures. Annual Meeting of the Cognitive Science Society, 829–834.

Saxe, A. M., McClelland, J. L., & Ganguli, S. (2014). Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In Y. Bengio & Y. LeCun (Eds.), International Conference on Learning Representations.
