Deep learning pioneer LeCun to head artificial intelligence at Facebook

News | Learning in Machines & Brains | 18.12.2013

Yann LeCun, a CIFAR Senior Fellow in Learning in Machines & Brains (formerly known as Neural Computation & Adaptive Perception), is a pioneer in deep learning, a technique that uses artificial neural networks to allow machines to learn to recognize patterns in everything from pictures to spoken language to handwriting. Along with others in CIFAR’s LMB program (LMB Program Director Geoffrey Hinton recently landed a similar job at Google), LeCun is at the forefront of a resurgence in artificial intelligence research. A lot of the credit, he says, goes to CIFAR. CIFAR Managing Editor Kurt Kleiner talked to LeCun about his work and his new job.

What are you going to be doing at Facebook?

The ambition of the lab I’m going to be building at Facebook is to make significant progress toward artificial intelligence. There are signs that we are making progress already, and that this is going to accelerate. The interest of large companies in AI is really focused today on deep learning. And deep learning was basically a CIFAR-funded conspiracy.

Ten years ago Geoffrey Hinton, Yoshua Bengio and I got together and basically decided that we should rekindle the interest of the machine learning and AI community in representation learning, which is a technical term, but which is really the problem that deep learning is attempting to solve.

What is deep learning, anyway?

The brains of humans and animals are “deep,” in the sense that each action is the result of a long chain of synaptic communications, which represents many layers of processing. Deep learning attempts to do something similar in machines. We create networks of neurons many layers deep that can learn to represent features in the world, whether they are words or pictures or something else. We think that understanding deep learning will not only enable us to build more intelligent machines, but will also help us understand human intelligence.
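As a rough illustration of that layered structure (not a model LeCun actually used, and with layer sizes chosen arbitrarily for the example), a deep network's forward pass just feeds each layer's output into the next:

```python
import numpy as np

def relu(x):
    # A common layer nonlinearity: pass positives through, zero out negatives.
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Assumed, illustrative layer sizes: a flattened 28x28 input,
# two hidden layers, and a 10-way output.
sizes = [784, 256, 64, 10]
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Run an input through every layer in turn; each layer
    re-represents the previous one's output."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(h @ W + b)
    return h

y = forward(rng.standard_normal(784))
print(y.shape)  # (10,)
```

With more layers, the same loop simply runs longer; "depth" here is just the length of that chain of transformations.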

As I understand it this is an outgrowth of neural networks research which at one point had been pretty much declared a dead end. How has deep learning become such a success?

What people got interested in during the ‘80s was this idea of being able to learn internal representations of objects or of the perceptual world. But it was very difficult at the time. We didn’t have big data sets and we had slow computers. It’s not like it was a complete failure. But it’s true that the machine learning community sort of lost interest in it.

That’s where the LMB program at CIFAR played a very important role starting about 10 years ago. We had to convince the community that it was worth the effort to work on this. Around 2005 or 2006 there was a bit of a groundswell around deep learning techniques called unsupervised learning. People like Andrew Ng made some very interesting contributions. That caused the community to get interested again in this idea of learning representations through unsupervised learning.
[Image caption: In a typical neural network, each layer learns to abstract some feature of the data, then feeds its results forward to another layer, which performs a higher level of abstraction. (Photo: NYU)]

The attraction of unsupervised learning is that you set it loose, you let it do most of the learning by itself, rather than put in all this work up front?

Right. If you think about how animals and babies learn, they learn by themselves the notion of objects, the properties of objects, without being told specifically what those objects are. It’s only later that we give names to objects. So most of the learning takes place in an unsupervised manner. The supervised part comes later, when you give names to the objects. You show a child picture books, and you show the child a picture of an elephant, and now the child knows what an elephant is, based only on a single picture of an elephant.

About two years ago Geoff set out to help various companies use deep learning for speech recognition and sent some of his students there as interns. And more recently he and I worked on image recognition. And the methods that seem to work on this are the old ones, the purely supervised methods. So the stuff people use in industry now (Google, Microsoft, IBM) is purely supervised neural nets, very similar to those used 20 years ago. But they work now because they are much bigger than the ones we used then, thanks to faster computers, and they’re trained on gigantic data sets. That’s what makes them work.

There are still advantages to deep learning as such, though?

Deep learning really designates the architectures that are trained. As long as they have more than three layers we call them deep, and whether they are trained with supervised or unsupervised techniques is really secondary. So there are a few applications where unsupervised pre-training followed by supervised training works very well. In many cases if you have large data sets supervised training works fine.
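The unsupervised pre-training mentioned here is often done greedily, layer by layer. Very roughly (this is a toy sketch with made-up sizes, tied weights, and a crude gradient step that ignores the sigmoid's derivative, not LeCun's or Hinton's actual method), each layer can be trained as an autoencoder that learns to reconstruct its own input, and the learned encoders are then stacked before any labels are used:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy unlabeled data standing in for real examples (assumed: 100 samples, 20 features).
X = rng.standard_normal((100, 20))

def pretrain_layer(H, n_hidden, lr=0.01, steps=200):
    """Unsupervised step: train a one-layer autoencoder (tied weights)
    to reconstruct its input H, then keep only the encoder weights."""
    n_in = H.shape[1]
    W = rng.standard_normal((n_in, n_hidden)) * 0.1
    for _ in range(steps):
        Z = sigmoid(H @ W)   # encode
        R = Z @ W.T          # decode with the transposed weights
        err = R - H          # reconstruction error
        # Crude gradient of the squared reconstruction error
        # (the sigmoid's derivative is dropped for brevity).
        W -= lr * (H.T @ (err @ W) + err.T @ Z) / len(H)
    return W

# Greedy layer-wise pre-training: each layer learns features of the one below.
W1 = pretrain_layer(X, 16)
W2 = pretrain_layer(sigmoid(X @ W1), 8)
# A supervised stage would then attach labels to the top representation,
# sigmoid(sigmoid(X @ W1) @ W2), and fine-tune all the weights.
```

The point of the sketch is the ordering: no labels are needed until the final supervised stage, which matches the "unsupervised pre-training followed by supervised training" recipe described above.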

We know that in the long run unsupervised learning will be a big part of the answer. But we don’t have a magic bullet for it yet.

What’s the attraction of the Facebook offer?

It’s a combination of things. First is the ambitious goal of this lab, which is to make significant progress toward artificial intelligence. Second is the fact that it’s going to be outward looking. The people there are going to be publishing papers, they’ll be a part of the research community. A nice thing about a company like Facebook is that if you come up with a better way of understanding natural language or image recognition, it’s not like you have to create a whole business around it. There is a direct pipeline to applications which is not something that exists in every company. There is no question there will be direct impact on the business. But really the goal is very ambitious and long term.

And it’s not every day you’re given the opportunity to create an AI research lab from scratch. Particularly since the location is essentially right across the street from my lab at NYU. So it’s a very exciting opportunity.

