At a Glance

Founded: 2004
Renewal dates: 2008, 2014
Members: 38
Supporters: Geoffrey Hinton; Céline and Jacques Lamarre; Anonymous Donor
Partners: Brain Canada Foundation; Facebook; Google Inc.; Inria
Disciplines: Computer science, including artificial intelligence & machine learning; neuroscience; bioinformatics & computational biology

How do we understand intelligence and build intelligent machines?

Computers are faster and more powerful than ever before, but they still can’t think the way people do. The program in Learning in Machines & Brains (formerly known as Neural Computation & Adaptive Perception) is revolutionizing the field of artificial intelligence by creating computers that think more like us – computers that can recognize faces, understand what is happening in a picture or video, and comprehend the actual meaning of language. The result will be computers that are not only powerful but intelligent, able to do everything from conducting a casual conversation to extracting meaning from massive databases of information.

Our unique approach

The CIFAR program has shaken up the field of artificial intelligence by pioneering a technique called “deep learning,” which is now routinely used by Internet giants like Google and Facebook. A decade ago, CIFAR took a risk on researchers who wanted to revive interest in neural networks, a computer technique inspired by the human brain. CIFAR brought together computer scientists, biologists, neuroscientists, psychologists and others, and the result was rich collaborations that have propelled artificial intelligence research forward.

How Google sees you and your cat. These “optimal stimuli” for both human and cat faces resulted from training a deep learning network on more than 10 million pictures

Why this matters

Increased processing power and the availability of big data sets are making computers more powerful and useful. Yet computers still face challenges in dealing with humans and with the real world – everyday tasks like understanding written and spoken language and recognizing faces and objects, or, more interestingly, answering questions about all kinds of documents, communicating with humans, and using reasoning to solve problems.

Computers that are better at understanding and learning from the real world could revolutionize medicine, industry, transportation, and our day-to-day lives. Already, CIFAR researchers have used deep learning to identify previously unknown genetic contributors to conditions such as autism. Soon, computers could learn to drive cars and trucks safely and reliably, or detect the first hint of a major epidemic from public health records and Facebook posts. Computers could also become better at interacting with people. Talking to a computer could become as easy as talking to another person.

CIFAR fellows and advisors are working on artificial intelligence at top technology companies, including Google, Facebook, and Baidu. Deep learning techniques have already revolutionized image understanding and speech recognition, and they continue to set records in artificial intelligence benchmarks such as the ImageNet 1000.

In depth

The fundamental objective of the program is to understand the principles behind natural and artificial intelligence, and to uncover mechanisms by which learning can cause intelligence to emerge. The work builds on artificial neural network research that began as early as the 1950s, when researchers built computers that could respond to training, adjusting the “firing” of artificial neurons until the system had learned to respond appropriately to a pattern.

But after a surge in interest in the 1980s, the approach was largely abandoned and replaced by interest in other forms of machine learning.

However, Geoffrey Hinton, a researcher at the University of Toronto, believed neural networks still held promise, and he brought together like-minded researchers in the CIFAR program.

Traditional pattern recognition approaches required engineers to manually design algorithms that extracted appropriate features from the data, which conventional machine learning algorithms could then process.

The deep learning systems Hinton and others created, by contrast, consist of layers of non-linear stages, all of them trainable, each layer taking the output of the previous one and adding a level of abstraction. More abstract representations of data tend to be more useful, since they capture greater semantic content divorced from the low-level details of the data. Working together, these layers can learn an entire task, from the raw data to the final classification.

A deep learning network takes in raw information, such as values for individual pixels, in the top input layer, and processes it through one or more hidden layers, with each layer adding a further level of abstraction
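The layered computation described above can be sketched in a few lines of NumPy. The weights here are random rather than learned, and the layer sizes are illustrative assumptions; the point is only the shape of the computation, with each stage transforming the previous stage’s output into a new representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One trainable non-linear stage: affine transform followed by a ReLU.
    return np.maximum(0.0, w @ x + b)

# Illustrative layer sizes: 784 raw pixels -> 256 -> 64 -> 10 class scores.
sizes = [784, 256, 64, 10]
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

x = rng.random(784)           # raw input: pixel values in [0, 1]
h = x
for w, b in zip(weights, biases):
    h = layer(h, w, b)        # each pass adds a level of abstraction

scores = h                    # final layer: one score per class
```

In a trained network the weights would be fit to data, so that early layers respond to edges and textures and later layers to whole objects.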

The recent discoveries in deep learning are only the first pieces of the puzzle. The next challenge is to develop powerful unsupervised learning processes that can take advantage of the vast quantities of data that have not been previously labeled by humans. This sort of learning is similar to human learning in which individuals learn to recognize patterns as young children, and later learn the names of the objects and concepts they can now recognize.

Despite these successes, even simple animals are still better at information processing and perception than current computers. The program has identified a number of challenges, any one of which, if solved, would transform the field of artificial intelligence.

  • Disentangling the underlying factors of variation

Complex data arise from the rich interaction of many sources. These factors interact in a complex web that can complicate AI-related tasks such as object classification. If we could identify and separate out these factors we would have almost solved the learning problem. The most robust approach to feature learning is to disentangle as many factors as possible, discarding as little information about the data as is practical.

  • The challenge of scaling up

Processing speed continues to improve, as does the size of the data sets available for training. However, although tasks like recognizing common objects in ordinary images are solved to the point that computers are roughly as good at them as humans, other problems like scene understanding, reinforcement learning, and natural language understanding are still in their infancy. For computers to approach true artificial intelligence, they will need to handle numbers of parameters far beyond the capabilities of today’s models.

  • The challenge of reasoning

Current deep learning algorithms are good at compiling knowledge into actionable representations and decision-making or predictive functions, but not at making general deductions or adapting quickly to new observations. The challenge is to use deep learning to perform sequential inference – drawing conclusions from premises or observations through a sequence of reasoning steps. Research on capturing semantics from natural language texts may provide an answer. The meaning of a document can be understood as a logically connected set of facts or hypotheses, and techniques that allow natural language processing may be useful for general reasoning ability as well.

 

Selected papers

Hinton, G.E., Osindero, S. and Teh, Y. (2006). “A fast learning algorithm for deep belief nets.” Neural Computation, 18, pp. 1527–1554.

Bengio, Y., Lamblin, P., Popovici, D. and Larochelle, H. (2006). “Greedy Layer-Wise Training of Deep Networks.” Advances in Neural Information Processing Systems.

Salakhutdinov, R. and Hinton, G. (2007). “Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure.” Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pp. 412–419.

Graves, A., Mohamed, A. and Hinton, G.E. (2013). “Speech Recognition with Deep Recurrent Neural Networks.” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver.

LeCun, Y., Bengio, Y. and Hinton, G. (2015). “Deep Learning.” Nature, 521, pp. 436–444.

Fellows & Advisors


Yoshua Bengio

Program Co-Director

Yoshua Bengio's current interests include fundamental questions on deep learning, the geometry of generalization in high-dimensional spaces, biologically inspired learning algorithms, and challenging applications of statistical machine learning in artificial…


Yann LeCun

Program Co-Director

Yann LeCun's research interests include computational and biological models of learning and perception. One of his goals is to understand the principles of learning in the brain, and to build…


Fellows

Francis Bach

Senior Fellow

Inria

France

Aaron Courville

Fellow

Université de Montréal

Canada

Nando De Freitas

Senior Fellow

University of Oxford

United Kingdom

James DiCarlo

Associate Fellow

Massachusetts Institute of Technology

United States

Rob Fergus

Senior Fellow

New York University

United States

David J. Fleet

Senior Fellow

University of Toronto

Canada

Brendan J. Frey

Senior Fellow

University of Toronto

Canada

Surya Ganguli

Associate Fellow

Stanford University

United States

Zaid Harchaoui

Associate Fellow

Inria

France

Aapo Johannes Hyvärinen

Associate Fellow

University of Helsinki

Finland

Hugo Larochelle

Fellow

Université de Sherbrooke

Canada

Honglak Lee

Associate Fellow

University of Michigan

United States

Christopher Manning

Associate Fellow

Stanford University

United States

Roland Memisevic

Fellow

Université de Montréal

Canada

Andrew Ng

Associate Fellow

Stanford University

United States

Bruno Olshausen

Senior Fellow

University of California, Berkeley

United States

Joëlle Pineau

Senior Fellow

McGill University

Canada

Blake Richards

Associate Fellow

University of Toronto

Canada

Ruslan Salakhutdinov

Fellow

University of Toronto

Canada

Mark Schmidt

Associate Fellow

University of British Columbia

Canada

Eero Simoncelli

Associate Fellow

New York University

United States

Josef Sivic

Senior Fellow

Inria

France

Ilya Sutskever

Associate Fellow

OpenAI

United States

Richard Sutton

Associate Fellow

University of Alberta

Canada

Antonio Torralba

Associate Fellow

Massachusetts Institute of Technology

United States

Pascal Vincent

Associate Fellow

Université de Montréal

Canada

Yair Weiss

Senior Fellow

The Hebrew University of Jerusalem

Israel

Max Welling

Senior Fellow

University of Amsterdam

Netherlands

Christopher K.I. Williams

Senior Fellow

The University of Edinburgh

United Kingdom

Richard Zemel

Senior Fellow

University of Toronto

Canada

Advisors

Léon Bottou

Advisor

Facebook AI Research

France

Geoffrey Hinton

Advisor

Google, University of Toronto

Canada

Pietro Perona

Advisor

California Institute of Technology

United States

Bernhard Schölkopf

Advisory Committee Chair

Max Planck Institute for Intelligent Systems

Germany

Terrence J. Sejnowski

Advisor

Salk Institute for Biological Studies

United States

Sebastian Seung

Advisor

Princeton University

United States

Global Scholars

Graham Taylor

CIFAR Azrieli Global Scholar

University of Guelph

Canada

Joel Zylberberg

CIFAR Azrieli Global Scholar

University of Colorado Denver

United States

Program Timeline

Credit: CIFAR Program Director Geoffrey Hinton, circa 2004

2004

Learning in Machines & Brains Program launches

CIFAR launches the program in Learning in Machines & Brains (formerly known as Neural Computation & Adaptive Perception) under the direction of Geoffrey Hinton (University of Toronto). The program aims to unlock the mystery of how our brains convert sensory stimuli into information and to recreate human-style learning in computers.

Credit: SONY

Sony’s Aibo dog, which can see shapes thanks to David Lowe’s software

2004

Algorithm boosts image recognition

CIFAR Fellow David Lowe (University of British Columbia) develops the Scale-Invariant Feature Transform (SIFT), an algorithm that allows a computer to identify an element in an image regardless of its size. The research paper becomes one of the most widely cited in the machine vision literature, and Sony uses the algorithm to boost vision in its Aibo robotic dog.

Credit: Image by Hugh Wilson

This diagram shows how researchers combine the measurements of faces that are different shapes and sizes into another face using Principal Component Analysis (PCA)

2005

New methods for probing how the brain learns faces

CIFAR Senior Fellow Hugh Wilson (York University) and Associate Fellow Frances Wilkinson (Max Planck Institute for Intelligent Systems) study how the brain learns to recognize faces using geometrical measures of features, such as the distance between the eyes and the length of the nose. Using a sample set of different faces, they combine the measurements into another face using Principal Component Analysis (PCA). In one study, they find that subjects who study the initial set of faces remember them well, but they also remember the face generated using PCA, even though they have never seen it before. This sheds light on the mechanisms brains use to compute visual information. The researchers expand on this knowledge using functional MRI (magnetic resonance imaging) to show that neurons in the fusiform face area, the brain region specialized for recognizing faces, respond selectively to deviations along particular principal components.
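The PCA step described above can be sketched with NumPy. The “faces” here are synthetic vectors of geometric measurements (eye spacing, nose length, and so on) invented for illustration; the actual studies used measurements taken from real faces.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "faces": each row is a vector of four geometric measurements
# (e.g. eye spacing, nose length, mouth width, face height), arbitrary units.
faces = rng.normal(loc=[32.0, 55.0, 48.0, 20.0], scale=2.0, size=(50, 4))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components = right singular vectors of the centered data.
_, _, components = np.linalg.svd(centered, full_matrices=False)

# A new face synthesized by moving from the mean face along the first
# principal component -- the kind of PCA-generated face that subjects
# "remembered" despite never having seen it.
synthetic_face = mean_face + 3.0 * components[0]
```

Each principal component is a direction of correlated variation across the measurement set, so moving along one changes several features together, just as natural face variation does.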

Credit: The Berkeley Segmentation Dataset

Machine learning algorithms have difficulty detecting the edges of images without sharp boundaries

2006

Greedy algorithms train deeper

CIFAR Senior Fellow Yoshua Bengio (Université de Montréal) and collaborators build upon the procedure outlined by CIFAR Program Director Geoffrey Hinton and CIFAR Fellow Ruslan Salakhutdinov (both University of Toronto) in their seminal work. They show that both supervised and unsupervised learning of images and text benefit when deeper networks are pre-trained one layer at a time with a "greedy" algorithm. Greedy algorithms break a complex problem into a sequence of simpler subproblems, solving each one before moving on to the next. CIFAR Senior Fellow Yann LeCun (New York University) and collaborators exploit a similar idea in the context of convolutional networks, using a variant of sparse coding as the unsupervised learning algorithm for each layer.
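The greedy layer-wise structure can be sketched as follows. For brevity, each layer is “pre-trained” here as an optimal linear autoencoder (via SVD, i.e. PCA) rather than as the restricted Boltzmann machines or non-linear autoencoders used in the actual papers; the pattern is the same — train one layer on the previous layer’s output, then move on.

```python
import numpy as np

rng = np.random.default_rng(2)

def pretrain_layer(data, n_hidden):
    # "Train" one layer as an optimal linear autoencoder: the best
    # rank-k linear reconstruction uses the top-k principal directions.
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    w = vt[:n_hidden]            # encoder weights for this layer
    return w, data @ w.T         # weights, plus codes fed to the next layer

x = rng.random((200, 64))        # toy dataset: 200 samples, 64 features
layer_sizes = [32, 16, 8]        # illustrative depths

weights, h = [], x
for n_hidden in layer_sizes:     # greedy: one layer at a time
    w, h = pretrain_layer(h, n_hidden)
    weights.append(w)
```

After pre-training, the stacked weights would initialize a deep network that is then fine-tuned on the actual task.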

Credit: “A Fast Learning Algorithm for Deep Belief Nets,” Neural Computation, vol. 18, no. 7 (July 2006)

This figure shows 49 handwritten numbers that the neural network guessed correctly after training

2006

A fast learning neural network

CIFAR Program Director Geoffrey Hinton, CIFAR Fellow Ruslan Salakhutdinov (both University of Toronto) and collaborators show for the first time that they can train a deep neural network with many hidden layers using unsupervised pre-training on one layer at a time as long as the top two layers form an associative memory; that is, memory based on similarity. These findings, published in Science, as well as an algorithm for training deep belief nets published the same month in Neural Computation, are viewed as seminal contributions to the field of machine learning, particularly with regard to language processing.

2007

Studying stills reveals motion

Collaborations between CIFAR Senior Fellows David Fleet, Aaron Hertzmann, Richard Zemel (all University of Toronto) and Nikolaus Troje (Queen’s University), CIFAR Associate Michael J. Black (Max Planck Institute for Intelligent Systems) and Program Director Geoffrey Hinton (University of Toronto) lead to significant advances in our ability to extract animate motion from images. They develop methods for tracking motion based on the shape of the human body using multiple silhouettes. In addition, researchers use a category of statistical model known as generative models to teach computers to "see" human motion, even when people are moving against a cluttered background. Researchers also use realistic physics-based models for tracking people in video in order to predict when — during the process of walking, for example — a person's feet touch the ground. Using these complex analyses of motion, they can ensure their own computerized models of walking are physically plausible. These advances in motion models and inference methods may help us understand how humans perceive the gestures and actions of people and other animals, and they will also enable myriad applications in areas as diverse as markerless motion capture (which doesn't require subjects to wear special equipment to track them), biomechanics and video surveillance.

The video below, corresponding to the paper by Brubaker et al., shows how the researchers characterized the way a person’s lower body moves as they walk, using the physics of how their feet contact the ground to analyze their movements.

2007

Data analysis crosses disciplines

CIFAR Senior Fellow Brendan Frey (University of Toronto), influenced by a number of program members, develops a new algorithm called “affinity propagation” which is used to analyze data in a variety of fields, including computer vision, genomics, biology, communication networks and physics. Affinity propagation outperforms previously described methods for many types of data. A user-friendly web application using this algorithm is launched in June 2007 and is accessed more than 100,000 times by more than 3,000 unique users from around the world.
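Affinity propagation clusters data by passing “responsibility” and “availability” messages between points until a set of exemplars emerges. A minimal NumPy sketch of those message-passing updates on toy 2-D data follows; the similarity measure (negative squared distance), median preference, damping factor and iteration count are the standard illustrative choices, not tuned values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: two well-separated 2-D blobs of points.
pts = np.vstack([rng.normal(0, 0.3, (15, 2)), rng.normal(5, 0.3, (15, 2))])
n = len(pts)

# Similarity = negative squared Euclidean distance; the diagonal
# "preference" (here the median similarity) controls how many exemplars emerge.
s = -((pts[:, None] - pts[None]) ** 2).sum(-1)
np.fill_diagonal(s, np.median(s[~np.eye(n, dtype=bool)]))

r = np.zeros((n, n))   # responsibilities: how well-suited k is as exemplar for i
a = np.zeros((n, n))   # availabilities: how appropriate it is for i to choose k
damping = 0.5

for _ in range(200):
    # Responsibility update: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
    m = a + s
    idx = m.argmax(1)
    first = m[np.arange(n), idx]
    m[np.arange(n), idx] = -np.inf
    second = m.max(1)
    r_new = s - first[:, None]
    r_new[np.arange(n), idx] = s[np.arange(n), idx] - second
    r = damping * r + (1 - damping) * r_new

    # Availability update: a(i,k) = min(0, r(k,k) + sum of other points'
    # positive responsibilities toward k); a(k,k) collects them all.
    rp = np.maximum(r, 0)
    np.fill_diagonal(rp, r.diagonal())
    colsum = rp.sum(0)
    a_new = np.minimum(0, colsum[None, :] - rp)
    np.fill_diagonal(a_new, colsum - r.diagonal())
    a = damping * a + (1 - damping) * a_new

# Points whose self-responsibility plus self-availability is positive are
# the exemplars; every other point joins its most similar exemplar.
exemplars = np.flatnonzero((r + a).diagonal() > 0)
labels = exemplars[np.argmax(s[:, exemplars], axis=1)]
labels[exemplars] = exemplars
```

Unlike k-means, the number of clusters is not fixed in advance; it falls out of the preference values and the message passing.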

This video by the developers of the affinity propagation algorithm illustrates how it works:

2008

A smart system for online image search

CIFAR Senior Fellow Yair Weiss (The Hebrew University of Jerusalem) and CIFAR Associate Rob Fergus (New York University) collaborate to produce a system that can search millions of images downloaded from the Internet for good examples of a class of objects such as “Japanese Spaniel.” The algorithm, which grew out of previous work on image retrieval by other members of the Learning in Machines & Brains program (formerly known as Neural Computation & Adaptive Perception), only needs a few labeled examples of each class. It then propagates this class information to similar images in an efficient way.

Credit: Delbert Dueck

Semantic hashing allows retrieval of visually similar images at an unprecedented rate

2008

Breakthrough in image retrieval

Collaboration between CIFAR fellows leads to a breakthrough in the ability to retrieve images that resemble a query image. CIFAR Fellow Ruslan Salakhutdinov, Program Director Geoffrey Hinton (both University of Toronto), CIFAR Associates Antonio Torralba (Massachusetts Institute of Technology) and Rob Fergus (New York University) and CIFAR Senior Fellow Yair Weiss (The Hebrew University of Jerusalem) develop a very fast retrieval method called “semantic hashing.” Instead of using only image captions for this retrieval, machine learning methods are used to convert each image in a very large database into a short binary code. These codes contain a lot of information about the semantic content of the image and they allow very fast matching. The researchers show that if there are enough images, it is always possible to find one that is very similar to a given query image. Contrary to prior expectations, this allows remarkably good object recognition. For more information, please see this Google Tech Talk by CIFAR Program Director Geoffrey Hinton.
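The retrieval mechanics described above can be sketched in NumPy. Real semantic hashing learns the binary codes with a deep autoencoder; here the encoder is faked with random hyperplanes (classic locality-sensitive hashing) purely to show how short binary codes make matching fast — the feature matrix, code length and dataset size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for image features (in semantic hashing these would be
# activations from a trained deep network).
features = rng.normal(size=(10_000, 128))

# Fake "encoder": random hyperplanes turn each feature vector into a
# short binary code (semantic hashing learns this mapping instead).
planes = rng.normal(size=(128, 32))
codes = (features @ planes > 0).astype(np.uint8)   # 32-bit codes

def retrieve(query_code, k=5):
    # Matching is just a Hamming-distance comparison on the short codes --
    # this is what makes lookup in very large databases so fast.
    dists = (codes ^ query_code).sum(axis=1)
    return np.argsort(dists)[:k]

neighbours = retrieve(codes[0])
```

Because the codes are tiny integers, the whole database can be scanned with bitwise operations, or indexed directly by code value for constant-time lookup.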

Credit: iStock

Deep networks provide much better results for voice recognition than previous methods

2009

Speech recognition soars

CIFAR Program Director Geoffrey Hinton’s (University of Toronto) research group applies deep learning algorithms to the problem of recognizing units of language, known as phonemes. They produce results significantly better than any previous method for speaker-independent recognition. Deep networks significantly outperform the highly tuned system developed previously by others, even when given only two per cent as much training data. This research attracts collaboration interest from IBM and Microsoft.

Credit: Apple Autostitch

These demo images show how Autostitch weaves together several images to make one panorama

2009

Research boosts Apple image stitching

CIFAR Senior Fellow David Lowe (University of British Columbia) spins off a company from his university research with some of his students and postdoctoral fellows. The company, Cloudburst Research Inc., transfers research results to applications in mobile devices. Their first product provides automatic image panorama stitching on Apple iPhone devices using an algorithm that allows computers to identify the same object from different angles and under different lighting conditions. The Autostitch application sells over 300,000 copies to end users. Lowe credits the Learning in Machines & Brains program (formerly known as Neural Computation & Adaptive Perception) with helping to advance his research.

Credit: David Fleet

Using data about the body's movements, the researchers produce a realistic mimic of walking

2009

Model teaches computers the motion of walking styles

CIFAR fellows develop several models of human motion to improve computers’ ability to discriminate between different styles of walking. One type of model, developed by CIFAR Senior Fellows David Fleet and Aaron Hertzmann (both University of Toronto), uses Gaussian processes to derive a low-dimensional representation of complex motions. Another type, developed by Program Director Geoffrey Hinton’s (University of Toronto) team, uses detailed data about how people’s joint angles change as they move, taking into account various walking styles. The model effectively learns to imitate the changes associated with walking styles, drawing the interest of animation companies.

Credit: Richard Socher et al.

This illustration shows how the recursive neural network parses images and sentences into components and then merges them together to learn the whole

2010

Neural net parses language

CIFAR Associate Andrew Ng's (Stanford University) group develops an impressive way of parsing both images and sentences to produce tree structures that capture the context of a search term as well as semantic information about its use in language. The key idea is to use a neural net that takes rich vector representations of two parts and produces a representation of the whole plus a score that says how well the two parts fit together. This method outperforms other methods at a variety of important text and image processing tasks. It is also far more human-like than most image processing methods.

Credit: Google Maps/Street View

Google Streetview uses convolutional neural networks to blur car licence plates, faces and addresses

2010

Machine learning reaches industry

CIFAR fellows develop variations of deep learning that have valuable industrial applications. CIFAR Senior Fellow Yann LeCun’s (New York University) method mimics the hierarchical way the visual cortex is wired. Google begins to use these convolutional neural networks, or ConvNets, to identify and blur faces and car licence plates in its Streetview application. The Defense Advanced Research Projects Agency (DARPA), the research arm of the U.S. Defense Department, uses them to detect large obstacles from far away. CIFAR Fellow Yoshua Bengio (Université de Montréal) develops a type of deep learning in which feature vectors can be classified into a few different classes using extremely few training examples. The technology leads to a collaboration with Ubisoft, a major developer of computer games that employs more than 9,000 people worldwide, and to the establishment of a five-year industrial research chair at Université de Montréal. For more information, please see this story about Senior Fellow Yann LeCun’s research in The Economist.

Credit: Visual Dictionary / MIT

This is a visualization of 53,464 nouns arranged by meaning from the 80 million tiny images dataset. The CIFAR-10 and CIFAR-100 datasets are labelled subsets of this larger resource

2010

CIFAR image datasets improve object recognition

CIFAR Associates Rob Fergus (New York University), Antonio Torralba (Massachusetts Institute of Technology) and colleagues collect 80 million colour images from the web and put them in a standard format suitable for machine learning. With so many images, the data set provides an excellent resource for vision systems to learn from, so long as the learning procedures do not require accurate labels. Evaluating the accuracy of object recognition, however, does require accurate labels. Therefore, a large number of undergraduate students at the University of Toronto hand-label two subsets of the 80 million images with funding from CIFAR. These accurately labeled subsets, known as CIFAR-10 and CIFAR-100, become a standard benchmark in computer vision research.

Credit: Rob Fergus

The video for C-Mon and Kypski's song "More or Less" incorporated crowd-sourced webcam footage from fans imitating certain poses, which allowed the researchers to parse repeated human poses from many people on many backgrounds

2011

Reading the human pose

CIFAR Associate Rob Fergus (New York University) pioneers a novel way of learning metric embeddings of images for identifying human poses. His project uses web-sourced imitations of a rock video by the Dutch band C-mon & Kypski as training data. From these videos it is possible to learn to recognize different human poses, whilst ignoring different backgrounds and lighting conditions. The resulting embedding can be used to help find people in images. In addition, it outperforms leading face detectors on a dataset of webcam images.

Credit: ConvNetJS Denoising Autoencoder demo

This online demonstration from the Stanford University website shows how a denoising auto-encoder can learn and reconstruct handwritten digits from the Mixed National Institute of Standards and Technology dataset

2011

Developing better auto-encoders

CIFAR Associate Pascal Vincent and CIFAR Senior Fellow Yoshua Bengio (both Université de Montréal), together with their students and postdoctoral fellows, gain a much deeper understanding of auto-encoders – artificial neural network modules that learn a compressed representation of data and are used to pre-train deep neural networks. The researchers develop variants called “denoising” and “contractive” auto-encoders, which are much better at generalizing to new data. Using their new methods, they win two international machine learning competitions.
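The denoising idea — corrupt the input but train the network to reconstruct the clean signal — can be shown with a tiny NumPy auto-encoder. The data, network sizes, noise level and learning rate are all illustrative assumptions; real denoising auto-encoders are larger and trained on real data.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data with structure: 200 samples lying near a 5-D subspace of 20-D.
basis = rng.normal(size=(5, 20))
x = rng.normal(size=(200, 5)) @ basis

n_in, n_hid, lr = 20, 10, 0.05
w1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
w2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

def reconstruction_error():
    h = sigmoid(x @ w1 + b1)
    return ((h @ w2 + b2 - x) ** 2).sum(axis=1).mean()

initial = reconstruction_error()
for _ in range(300):
    # The denoising trick: corrupt the input, but train the network to
    # reconstruct the *clean* signal, forcing it to learn the structure.
    noisy = x + rng.normal(0, 0.5, size=x.shape)
    h = sigmoid(noisy @ w1 + b1)      # encoder
    y = h @ w2 + b2                   # decoder
    g_y = 2.0 * (y - x) / len(x)      # gradient of the mean squared error
    g_h = (g_y @ w2.T) * h * (1.0 - h)
    w2 -= lr * (h.T @ g_y); b2 -= lr * g_y.sum(0)
    w1 -= lr * (noisy.T @ g_h); b1 -= lr * g_h.sum(0)

final = reconstruction_error()        # lower than before training
```

Because the network can never match the noise exactly, the only way to reduce the error is to capture the regularities of the data, which is what makes the learned features generalize.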

Credit: Robert Galbraith / Reuters

Joe Belfiore, vice president of the operating system group at Microsoft, holds a mobile phone featuring the new Windows Phone 8.1 operating system with enhanced voice search at a conference in San Francisco, California, April 2, 2014

2011

Changing the way speech recognition is done

Microsoft, Google and IBM start to use deep neural networks developed by CIFAR fellows at the University of Toronto, the Université de Montréal and New York University for speech recognition, rather than the traditional statistical approach based on Gaussian mixture models. These deep neural networks are significantly more accurate. Android 4.1 uses a deep neural net as its acoustic model, and Microsoft also deploys the new approach for voice search.

Credit: REACH magazine

Neural nets learn faces a bit at a time — first as a set of light and dark pixels, then as simple shapes, then features, and finally, whole faces

2012

A method for disentangling factors of variation in images

CIFAR Senior Fellow Yoshua Bengio, CIFAR Associate Pascal Vincent (both Université de Montréal) and collaborators study how computers can learn to recognize facial expressions for people in different poses and with different features. Using the Toronto Face Database of 8,052 face images, they show that they can separate facial expressions from pose and face structure, enabling them to beat the previous state-of-the-art benchmark in recognizing expressions.

Credit: Image courtesy of Scyfer

CIFAR Associate Max Welling

2012

Regularising huge models

The “dropout” method introduced by CIFAR Program Director Geoffrey Hinton (University of Toronto) and collaborators allows artificial neural networks to operate more like the brain. While statisticians use models with only a few parameters and many training examples, the brain has vastly more synapses than it has training examples. Neural networks mimic this multitude of synaptic connections in the stages of learning. Furthering this concept, CIFAR Associate Max Welling (University of Amsterdam) and colleagues devise a method that interpolates between the Markov Chain Monte Carlo (MCMC) method and stochastic gradient optimization, the latter of which examines only a small portion of the data at a time rather than the entire set. This allows the methods to learn very efficiently. Welling and his co-authors win the best paper award at the 2012 International Conference on Machine Learning.
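In its simplest form, dropout randomly silences each hidden unit during training so the network cannot come to rely on any single connection. A minimal “inverted dropout” sketch in NumPy (the drop probability and activation values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def dropout(h, p_drop, training=True):
    # Inverted dropout: zero each unit with probability p_drop and
    # rescale the survivors so the expected activation is unchanged,
    # which lets test-time code use the layer output as-is.
    if not training or p_drop == 0.0:
        return h
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

h = np.ones((4, 1000))                    # a batch of hidden activations
h_train = dropout(h, p_drop=0.5)          # roughly half the units silenced
h_test = dropout(h, p_drop=0.5, training=False)
```

Training with a fresh random mask on every example amounts to averaging over an exponential number of thinned networks, which is what gives dropout its regularizing effect.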

Credit: Image courtesy of Body Labs and bodyhub.com

Body Labs’ software uses a body scan to build a 3D model that can be posed

2012

Research sparks 3D model company

Body Labs, a new company that sells virtual human models to clothing and video game designers, grows out of research by CIFAR Associate Michael J. Black, a member of the NCAP (Neural Computation & Adaptive Perception) program and a founding director at the Max Planck Institute for Intelligent Systems in Germany. Black developed a model for making virtual people from 3D body scans. His work has potential applications in online shopping, and even in helping treat people with body image disorders such as anorexia.

Credit: Photo by John Guatto / University of Toronto

CIFAR Program Director Geoffrey Hinton with Ilya Sutskever and Alex Krizhevsky

2013

Google buys Geoffrey Hinton’s startup

The success and commercial potential of deep neural networks prompts Google to purchase CIFAR Senior Fellow Geoffrey Hinton’s start-up company, DNNresearch. His work is applied to create a much improved photo search feature in Google+.

Credit: Courtesy of Department of Computer Science, University of Toronto

This heatmap shows the results of the researchers' application of deep recurrent neural networks to speech recognition

2013

Using deep recurrent neural networks for speech recognition

CIFAR Program Director Geoffrey Hinton (University of Toronto) and Alex Graves (DeepMind Technologies), a CIFAR Global Scholar alumnus, achieve a breakthrough in the application of deep recurrent neural networks to speech recognition. These are networks of neurons that send each other feedback signals; the human brain, for example, is a deep recurrent neural network. Their internal connections make them much better than other neural networks at processing sequential signals. The first published results of the new system demonstrate a decisive improvement over the current best in speech recognition, which has already been dramatically enhanced in the past few years by the application of deep neural networks.
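The feedback loop that distinguishes a recurrent network — each step’s hidden state feeding into the next — can be sketched as a single recurrent layer processing a sequence of acoustic frames. The weights are random and the sizes are illustrative assumptions; the actual speech systems stack several such layers and train them end to end.

```python
import numpy as np

rng = np.random.default_rng(7)

n_in, n_hid, n_steps = 13, 32, 50      # e.g. 13 acoustic features per frame
w_in = rng.normal(0, 0.1, (n_hid, n_in))
w_rec = rng.normal(0, 0.1, (n_hid, n_hid))   # the feedback connections
b = np.zeros(n_hid)

frames = rng.random((n_steps, n_in))    # a toy "utterance"
h = np.zeros(n_hid)
states = []
for x in frames:
    # The hidden state depends on the current frame AND the previous
    # hidden state -- this recurrence gives the network memory.
    h = np.tanh(w_in @ x + w_rec @ h + b)
    states.append(h)

states = np.array(states)
```

Because each state summarizes everything heard so far, the same layer can relate a phoneme to the sounds that preceded it, which a purely feed-forward network cannot do.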


Credit: Photo by Josh Valcarcel

CIFAR Senior Fellow Yann LeCun was hired by Facebook to head a newly created research laboratory with the long-term goal of bringing about major advances in Artificial Intelligence

2013

Facebook hires Yann LeCun

Social networking company Facebook hires CIFAR Senior Fellow Yann LeCun (New York University) to lead its new artificial intelligence laboratory. LeCun is at the forefront of a resurgence in artificial intelligence research. He is a pioneer in deep learning, which he calls "a CIFAR-funded conspiracy." For more information, please see this story in CIFAR's newsletter, News & Ideas.

Credit: Image courtesy of Josh Valcarcel / Wired

CIFAR Distinguished Fellow Geoffrey Hinton, professor at the University of Toronto and researcher for Google

2014

CIFAR names Geoffrey Hinton a Distinguished Fellow

CIFAR awards Geoffrey Hinton (University of Toronto) the title of Distinguished Fellow in recognition of his many contributions to the program in Learning in Machines & Brains (formerly known as Neural Computation & Adaptive Perception). Hinton joined CIFAR as a fellow in the Artificial Intelligence & Robotics program in 1987, an appointment that led him to move to a position at the University of Toronto, where he continued his research on deep learning and neural networks. He later proposed a new program that became LMB, which he directed from 2004 until January 2014, when he began working part-time at Google and part-time at the University of Toronto.

Ideas Related to Learning in Machines & Brains

Learning in Machines & Brains | Recommended

Scientific American | Springtime for AI: The Rise of Deep Learning

By Yoshua Bengio, June 1, 2016. Computers generated a great deal of excitement in the 1950s when they began to beat...

Learning in Machines & Brains | News

Computers recognize memorable images

Why do some images stay fixed in our memories, while others quickly fade away? Researchers have developed a deep learning...

Learning in Machines & Brains | Video

CIFAR – Artificial Intelligence

CIFAR – Artificial Intelligence from CIFAR on Vimeo. CIFAR Distinguished Fellow Geoffrey Hinton, the world’s leading authority on a branch...

Learning in Machines & Brains | News

Computers learn by playing with blocks

When an infant plays with wooden blocks, it’s not just playing – it’s also learning about the physical world by...

Learning in Machines & Brains | Research Brief

A machine learning system generates captions for images from scratch

Caption generation is a fundamental problem of artificial intelligence, one that distinguishes human intelligence – our ability to construct descriptions...