The CIFAR program has shaken up the field of artificial intelligence by pioneering a technique called “deep learning,” which is now routinely used by Internet giants like Google and Facebook. A decade ago, CIFAR took a risk on researchers who wanted to revive interest in neural networks, a computer technique inspired by the human brain. CIFAR brought together computer scientists, biologists, neuroscientists, psychologists and others, and the result was rich collaborations that have propelled artificial intelligence research forward.
How Google sees you and your cat. These “optimal stimuli” for both human and cat faces resulted from training a deep learning network on more than 10 million images.
Increased processing power and the availability of big data sets are making computers more powerful and useful. Yet computers still struggle with the real world and with people: everyday tasks such as understanding written and spoken language and recognizing faces and objects, and harder problems such as answering questions about all kinds of documents, communicating with humans, and reasoning their way to solutions.
A deep learning network takes in raw information, such as values for individual pixels, in the top input layer, and processes it through one or more hidden layers, with each layer adding a further level of abstraction.
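The layered processing described above can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular trained system: the weights are random (untrained), the "image" is a toy 4x4 pixel grid, and the layer sizes are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity applied between layers.
    return np.maximum(0.0, x)

# Toy "image": a 4x4 grid of pixel intensities, flattened into the input layer.
pixels = rng.random(16)

# Two hidden layers with randomly initialized weights, for illustration only.
# In a real deep learning system these weights are learned from data.
W1, b1 = rng.standard_normal((8, 16)), np.zeros(8)
W2, b2 = rng.standard_normal((4, 8)), np.zeros(4)

h1 = relu(W1 @ pixels + b1)  # first hidden layer: low-level features
h2 = relu(W2 @ h1 + b2)      # second hidden layer: higher-level abstraction

print(h1.shape, h2.shape)
```

Each matrix multiplication re-represents the previous layer's output, which is what lets successive layers build increasingly abstract features from raw pixels; training adjusts the weights so those features become useful for a task.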
Computers that are better at understanding and learning from the real world could revolutionize medicine, industry, transportation, and our day-to-day lives. Already, CIFAR researchers have used deep learning to identify previously unknown genetic contributors to conditions such as autism. Soon, computers could learn to drive cars and trucks safely and reliably, or detect the first hint of a major epidemic from public health records and Facebook posts. Computers could also become better at interacting with people. Talking to a computer could become as easy as talking to another person.
The fundamental objective of the program is to understand the principles behind natural and artificial intelligence, and to uncover mechanisms by which learning can cause intelligence to emerge.
Hinton, G. E., Osindero, S. and Teh, Y. (2006). “A fast learning algorithm for deep belief nets.” Neural Computation, 18, pp. 1527–1554.
Bengio, Y., Lamblin, P., Popovici, D. and Larochelle, H. (2006). “Greedy Layer-Wise Training of Deep Networks.” Neural Information Processing Systems Proceedings.
Salakhutdinov, R. and Hinton, G. (2007). “Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure.” Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pp. 412–419.
Graves, A., Mohamed, A. and Hinton, G. E. (2013). “Speech Recognition with Deep Recurrent Neural Networks.” International Conference on Acoustics, Speech and Signal Processing, Vancouver.
LeCun, Y., Bengio, Y. and Hinton, G. (2015). “Deep Learning.” Nature, 521, pp. 436–444.
Contact the program’s senior director, Kate Geddie.