At a Glance

Founded: 2004
Renewal dates: 2008, 2014
Members: 38
Supporters: Geoffrey Hinton; Céline and Jacques Lamarre
Partners: Brain Canada Foundation (through the Canada Brain Research Fund); Facebook; Google Inc.; Inria
Disciplines: Computer science, including artificial intelligence & machine learning; neuroscience; bioinformatics & computational biology

How do we understand intelligence and build intelligent machines?

Computers are faster and more powerful than ever before. But they still can’t think the way people do. The program in Learning in Machines & Brains (formerly known as Neural Computation & Adaptive Perception) is revolutionizing the field of artificial intelligence, and creating computers that think more like us – that can recognize faces, understand what is happening in a picture or video, and comprehend the actual meaning of language. The result will be computers that are not only powerful but intelligent, and that will be able to do everything from conducting a casual conversation to extracting meaning from massive databases of information.

Our unique approach

The CIFAR program has shaken up the field of artificial intelligence by pioneering a technique called “deep learning,” which is now routinely used by Internet giants like Google and Facebook. A decade ago, CIFAR took a risk on researchers who wanted to revive interest in neural networks, a computer technique inspired by the human brain. CIFAR brought together computer scientists, biologists, neuroscientists, psychologists and others, and the result was rich collaborations that have propelled artificial intelligence research forward.

[Figure: How Google sees you and your cat. These “optimal stimuli” for human and cat faces resulted from training a deep learning network on more than 10 million pictures.]

Why this matters

Increased processing power and the availability of big data sets are making computers more powerful and useful. Yet computers still face challenges in dealing with humans and with the real world, whether in everyday tasks like understanding written and spoken language and recognizing faces and objects, or in more interesting ones like answering questions about all kinds of documents, communicating with humans, and using reasoning to solve problems.

Computers that are better at understanding and learning from the real world could revolutionize medicine, industry, transportation, and our day-to-day lives. Already, CIFAR researchers have used deep learning to identify previously unknown genetic contributors to conditions such as autism. Soon, computers could learn to drive cars and trucks safely and reliably, or detect the first hint of a major epidemic from public health records and Facebook posts. Computers could also become better at interacting with people. Talking to a computer could become as easy as talking to another person.

CIFAR fellows and advisors are working on artificial intelligence at top technology companies, including Google, Facebook, and Baidu. Deep learning techniques have already revolutionized image understanding and speech recognition, and they continue to set records in artificial intelligence benchmarks such as the ImageNet 1000.

In depth

The fundamental objective of the program is to understand the principles behind natural and artificial intelligence, and to uncover mechanisms by which learning can cause intelligence to emerge. The work builds on artificial neural network research that began as early as the 1950s, when researchers built computers that could respond to training, adjusting the “firing” of artificial neurons until the system had learned to respond appropriately to a pattern.
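
The mechanism those early researchers used can be sketched in a few lines of Python. The example below is a minimal, hypothetical illustration (not code from the program): a single-neuron perceptron whose weights are nudged after each wrong response until it reacts correctly to a simple pattern such as logical AND.

```python
import numpy as np

# Minimal perceptron sketch: a single artificial neuron learns the AND pattern
# by adjusting its weights after every mistake. Illustrative only; the data,
# learning rate and initialization are arbitrary choices.
X = np.array([[1, 0, 0],   # first column is a constant input for the bias
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]])
targets = np.array([0, 0, 0, 1])   # AND of the last two inputs

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)  # small random initial weights
lr = 0.1                           # learning rate

for epoch in range(50):
    mistakes = 0
    for x, t in zip(X, targets):
        y = 1 if w @ x > 0 else 0  # the neuron "fires" if the weighted sum is positive
        w += lr * (t - y) * x      # adjust weights only when the response is wrong
        mistakes += int(y != t)
    if mistakes == 0:              # stop once every pattern gets the right response
        break

print("learned weights:", w)
print("responses:", [1 if w @ x > 0 else 0 for x in X])
```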

But after a surge in interest in the 1980s, the approach was largely abandoned and replaced by interest in other forms of machine learning.

However, Geoffrey Hinton, a researcher at the University of Toronto, thought neural networks still held promise, and he brought together like-minded researchers in the CIFAR program.

The problem with traditional pattern recognition approaches was that they required engineers to manually design algorithms to extract appropriate features from the data, which a conventional machine learning algorithm could then process.
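
To make the contrast concrete, the toy sketch below (entirely hypothetical; the feature choices and the nearest-centroid classifier are assumptions for illustration) follows that older recipe: a person decides which features to extract from small images, and only the final, conventional classifier is learned from data.

```python
import numpy as np

def hand_designed_features(img):
    """Two features an engineer might pick for 8x8 grayscale images:
    overall brightness and a crude vertical-edge response."""
    brightness = img.mean()
    vertical_edges = np.abs(np.diff(img, axis=1)).mean()
    return np.array([brightness, vertical_edges])

def fit_nearest_centroid(features, labels):
    """'Conventional' learner: store one mean feature vector per class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, feat):
    return min(centroids, key=lambda c: np.linalg.norm(feat - centroids[c]))

# Synthetic data: class 0 = smooth patches, class 1 = striped patterns.
rng = np.random.default_rng(0)
smooth = rng.uniform(0.4, 0.6, size=(20, 8, 8))
striped = np.tile([0.0, 1.0], (20, 8, 4)) + rng.normal(0, 0.05, (20, 8, 8))

images = np.concatenate([smooth, striped])
labels = np.array([0] * 20 + [1] * 20)
feats = np.array([hand_designed_features(im) for im in images])

centroids = fit_nearest_centroid(feats, labels)
test = np.tile([0.0, 1.0], (8, 4))                        # a new striped image
print(predict(centroids, hand_designed_features(test)))   # expected: 1
```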

The deep learning systems Hinton and others created, by contrast, consist of layers of non-linear stages, all of them trainable, each layer taking the output of the previous one and adding a level of abstraction. More abstract representations of data tend to be more useful, since they represent greater semantic content divorced from the low-level details of the data. Working together, these layers can learn an entire task, from the raw data to the final classification.

[Figure: A deep learning network takes in raw information, such as values for individual pixels, at the input layer and processes it through one or more hidden layers, with each layer adding a further level of abstraction.]
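
A forward pass through such a stack of stages can be sketched as follows. This is a minimal illustration, assuming arbitrary layer sizes, random (untrained) weights and a ReLU non-linearity; in a real system the weights of every layer would be learned end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One trainable stage: a weight matrix and a bias (randomly initialized here)."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def relu(x):
    return np.maximum(x, 0.0)

# A small stack: raw pixels -> two hidden layers -> class scores.
layers = [layer(64, 32), layer(32, 16), layer(16, 10)]

def forward(pixels):
    h = pixels                       # input layer: raw pixel values
    for W, b in layers[:-1]:
        h = relu(h @ W + b)          # each hidden layer adds a level of abstraction
    W, b = layers[-1]
    return h @ W + b                 # output layer: one score per class

image = rng.uniform(size=64)         # a fake 8x8 image, flattened
scores = forward(image)
print("predicted class:", int(np.argmax(scores)))
```

In practice the same structure is trained with backpropagation, so the abstractions in each hidden layer are discovered from data rather than designed by hand.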

The recent discoveries in deep learning are only the first pieces of the puzzle. The next challenge is to develop powerful unsupervised learning processes that can take advantage of the vast quantities of data that have not been previously labeled by humans. This sort of learning is similar to human learning in which individuals learn to recognize patterns as young children, and later learn the names of the objects and concepts they can now recognize.
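
One standard example of unsupervised learning is an autoencoder, which learns to compress and reconstruct data without ever seeing a label. The sketch below is a simplified illustration under assumed sizes and learning rate, not a description of any particular CIFAR result: a one-hidden-layer autoencoder trained by plain gradient descent on random unlabeled vectors.

```python
import numpy as np

# Tiny autoencoder: learn a 4-dimensional code for 16-dimensional inputs,
# using only the unlabeled data itself as the training signal.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 16))            # unlabeled examples

W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))
lr = 0.05

for step in range(2000):
    code = np.tanh(X @ W_enc)              # compressed representation
    recon = code @ W_dec                   # attempted reconstruction of the input
    err = recon - X                        # reconstruction error drives all learning

    # Gradients of the mean squared reconstruction error.
    grad_dec = code.T @ err / len(X)
    grad_code = (err @ W_dec.T) * (1 - code ** 2)   # backpropagate through tanh
    grad_enc = X.T @ grad_code / len(X)

    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

    if step % 500 == 0:
        print(f"step {step:4d}  reconstruction error {float((err ** 2).mean()):.4f}")
```

The learned code plays a role loosely analogous to the patterns a child recognizes before knowing their names; labels can be attached later with far less supervised data.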

Despite these successes, even simple animals are still better at information processing and perception than current computers. The program has identified a number of challenges, any one of which, if solved, would transform the field of artificial intelligence.

  • Disentangling the underlying factors of variation

Complex data arise from the rich interaction of many sources. These factors interact in a complex web that can complicate AI-related tasks such as object classification. If we could identify and separate out these factors, we would have almost solved the learning problem. The most robust approach to feature learning is to disentangle as many factors as possible while discarding as little information about the data as is practical.

  • The challenge of scaling up

Processing speed continues to improve, as does the size of the data sets available for training. And although tasks like recognizing common objects in ordinary images have been solved to the point that computers are roughly as good at them as humans, other problems such as scene understanding, reinforcement learning and natural language understanding are still in their early stages. For computers to approach true artificial intelligence, they will need to handle far larger numbers of parameters than today’s models can support.

  • The challenge of reasoning

Current deep learning algorithms are good at compiling knowledge into actionable representations and decision-making or predictive functions, but not at making general deductions or adapting quickly to new observations. The challenge is to use deep learning to perform sequential inference – drawing conclusions from premises or observations through a sequence of reasoning steps. Research on capturing semantics from natural language texts may provide an answer: the meaning of a document can be understood as a logically connected set of facts or hypotheses, and techniques developed for natural language processing may prove useful for general reasoning as well.

Selected papers

Hinton, G. E., Osindero, S. and Teh, Y. (2006). “A fast learning algorithm for deep belief nets.” Neural Computation, 18, pp. 1527–1554.

Bengio, Y., Lamblin, P., Popovici, D. and Larochelle, H. (2006). “Greedy Layer-Wise Training of Deep Networks.” Advances in Neural Information Processing Systems.

Salakhutdinov, R. and Hinton, G. (2007). “Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure.” Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pp. 412–419.

Graves, A., Mohamed, A. and Hinton, G. E. (2013). “Speech Recognition with Deep Recurrent Neural Networks.” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver.

LeCun, Y., Bengio, Y. and Hinton, G. (2015). “Deep Learning.” Nature, 521, pp. 436–444.


Fellows & Advisors

Yoshua Bengio

Program Co-Director

Yoshua Bengio's current interests include fundamental questions on deep learning, the geometry of generalization in high-dimensional spaces, biologically inspired learning algorithms, and challenging applications of statistical machine learning in artificial…

Yann LeCun

Program Co-Director

Yann LeCun's research interests include computational and biological models of learning and perception. One of his goals is to understand the principles of learning in the brain, and to build…

Fellows

  • Francis Bach, Senior Fellow (Inria, France)
  • Aaron Courville, Fellow (Université de Montréal, Canada)
  • Nando De Freitas, Senior Fellow (University of Oxford, United Kingdom)
  • James DiCarlo, Associate Fellow (Massachusetts Institute of Technology, United States)
  • Rob Fergus, Senior Fellow (New York University, United States)
  • David J. Fleet, Senior Fellow (University of Toronto, Canada)
  • Brendan J. Frey, Senior Fellow (University of Toronto, Canada)
  • Surya Ganguli, Associate Fellow (Stanford University, United States)
  • Zaid Harchaoui, Associate Fellow (Inria, France)
  • Aapo Johannes Hyvärinen, Associate Fellow (University of Helsinki, Finland)
  • Hugo Larochelle, Fellow (Université de Sherbrooke, Canada)
  • Honglak Lee, Associate Fellow (University of Michigan, United States)
  • Christopher Manning, Associate Fellow (Stanford University, United States)
  • Roland Memisevic, Fellow (Université de Montréal, Canada)
  • Andrew Ng, Associate Fellow (Stanford University, United States)
  • Bruno Olshausen, Senior Fellow (University of California, Berkeley, United States)
  • Joëlle Pineau, Senior Fellow (McGill University, Canada)
  • Doina Precup, Senior Fellow (McGill University, Canada)
  • Blake Richards, Associate Fellow (University of Toronto, Canada)
  • Ruslan Salakhutdinov, Fellow (Apple AI Research, Carnegie Mellon University, United States)
  • Mark Schmidt, Associate Fellow (University of British Columbia, Canada)
  • Eero Simoncelli, Associate Fellow (New York University, United States)
  • Josef Sivic, Senior Fellow (Inria, France)
  • Ilya Sutskever, Associate Fellow (OpenAI, United States)
  • Richard Sutton, Associate Fellow (University of Alberta, Canada)
  • Antonio Torralba, Associate Fellow (Massachusetts Institute of Technology, United States)
  • Pascal Vincent, Associate Fellow (Université de Montréal, Canada)
  • Yair Weiss, Senior Fellow (The Hebrew University of Jerusalem, Israel)
  • Max Welling, Senior Fellow (University of Amsterdam, Netherlands)
  • Christopher K.I. Williams, Senior Fellow (The University of Edinburgh, United Kingdom)
  • Richard Zemel, Senior Fellow (University of Toronto, Canada)

Advisors

  • Léon Bottou, Advisor (Facebook AI Research, France)
  • Geoffrey Hinton, Advisor (Google, University of Toronto, Canada)
  • Pietro Perona, Advisor (California Institute of Technology, United States)
  • Bernhard Schölkopf, Advisory Committee Chair (Max Planck Institute for Intelligent Systems, Germany)
  • Terrence J. Sejnowski, Advisor (Salk Institute for Biological Studies, United States)
  • Sebastian Seung, Advisor (Princeton University, United States)

Global Scholars

  • Graham Taylor, CIFAR Azrieli Global Scholar (University of Guelph, The Vector Institute, Canada)
  • Joel Zylberberg, CIFAR Azrieli Global Scholar (University of Colorado Denver, United States)

Ideas Related to Learning in Machines & Brains

  • News | Learning in Machines & Brains: Computers recognize memorable images. Why do some images stay fixed in our memories, while others quickly fade away? Researchers have developed a deep learning...
  • News | Learning in Machines & Brains: Computers learn by playing with blocks. When an infant plays with wooden blocks, it’s not just playing – it’s also learning about the physical world by...
  • News: Neural networks advances improve machine translation. Breakthroughs in machine translation using neural networks have improved our ability to translate words and sentences between many languages. Researchers...
  • News | Quantum Information Science: ‘Quantum repeaters’ could extend secure communication. Scientists have proposed a new method to transmit secure quantum communication across longer distances. The next generation of cryptography has...
  • News | Learning in Machines & Brains: Computer model generates automatic captions for images. CIFAR fellows have created a machine learning system that generates captions for images from scratch, scanning scenes and putting together...