Artificial intelligence has created a global industry that touches virtually every business sector, from more secure banking to innovation in farming, education, law enforcement, health care, space exploration and customer service.
How Google sees you and your cat. These “optimal stimuli” for human and cat faces resulted from training a deep learning network on more than 10 million pictures.
The Learning in Machines & Brains program played a major part in the AI revolution by examining how artificial neural networks could be inspired by the human brain, and by developing the powerful technique of deep learning.
Now the program is expanding our understanding of the fundamental computational and mathematical principles that enable intelligence through learning, whether in brains or in machines.
Current AI systems are limited in their ability to understand the world around us. This program attacks those limitations by going back to basic questions rather than focusing on short-term technological advances. This fundamental approach has the dual benefit of improving the engineering of intelligent machines and explaining intelligence.
A deep learning network takes in raw information, such as values for individual pixels, in the top input layer, and processes it through one or more hidden layers, with each layer adding a further level of abstraction.
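The caption above describes a feed-forward pass: raw values enter the input layer and each subsequent layer applies a learned transformation that adds a level of abstraction. A minimal sketch of that idea, assuming NumPy; the layer sizes, the ReLU nonlinearity and the random weights are illustrative placeholders, not the networks discussed here (in practice the weights would be learned by backpropagation):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied after each layer's linear transformation
    return np.maximum(0.0, x)

# Illustrative layer sizes: 784 input "pixels", two hidden layers, 10 outputs
layer_sizes = [784, 128, 64, 10]

# Random placeholder parameters; real networks learn these from data
weights = [rng.normal(0, 0.01, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(pixels):
    """Propagate raw input through each layer in turn."""
    activation = pixels
    for w, b in zip(weights, biases):
        activation = relu(activation @ w + b)
    return activation

image = rng.random(784)   # stand-in for a flattened 28x28 pixel image
output = forward(image)
print(output.shape)       # (10,)
```

Each matrix multiplication mixes the previous layer's features into new ones, which is what lets deeper layers represent progressively more abstract structure such as edges, parts and whole faces.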
Hinton, G. E., Osindero, S. and Teh, Y. W. (2006). “A fast learning algorithm for deep belief nets.” Neural Computation, 18, 1527–1554.
Bengio, Y., Lamblin, P., Popovici, D. and Larochelle, H. (2006). “Greedy Layer-Wise Training of Deep Networks.” Neural Information Processing Systems Proceedings.
Salakhutdinov, R. and Hinton, G. (2007). “Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure.” Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 412–419.
Graves, A., Mohamed, A. and Hinton, G. E. (2013). “Speech Recognition with Deep Recurrent Neural Networks.” 39th International Conference on Acoustics, Speech and Signal Processing, Vancouver.
LeCun, Y., Bengio, Y. and Hinton, G. (2015). “Deep Learning.” Nature, 521, 436–444.