A biologically plausible technique for training a neural net

by Kurt Kleiner | News | Learning in Machines & Brains | 17.05.2017

Artificial intelligence researchers have made advances in recent years by designing neural networks based on the structure of the human brain. But their work isn’t just leading to better computers – it is also leading to a better understanding of how the neural networks in our own brains work.

Learning in Machines & Brains Co-Director Yoshua Bengio and his student Benjamin Scellier, both of the Université de Montréal, invented a new way of training artificial neural networks that might help theoretical neuroscientists figure out how natural neural networks learn and correct errors.

They call the technique “equilibrium propagation,” and it is an alternative to a widely used technique for training neural networks called backpropagation.

To train an artificial neural network with backpropagation, you first present it with an input, propagate the signal forward through the network, then examine the output and compare it to the output you would ideally like to get. The difference between the two is called the error. In the second step you “backpropagate” the error through the network, adjusting individual neurons and synapses to bring the output closer to the ideal. By repeating the process many times, you gradually reduce the error, bringing the network’s output closer to what you want.
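
To make those two steps concrete, here is a minimal sketch in Python – not the authors’ code. It trains a tiny two-layer network by hand-written backpropagation on a toy XOR task; the network size, sigmoid activation, squared-error loss, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A small two-layer network with sigmoid units.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Step 1: propagate the signal forward and measure the error.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y                         # gradient of squared error w.r.t. out

    # Step 2: backpropagate the error and adjust weights and biases.
    d_out = err * out * (1.0 - out)       # back through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1.0 - h)  # back through the hidden layer

    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

Notice that the backward pass is a separate computation from the forward pass – exactly the feature that, as discussed below, makes backpropagation seem biologically implausible.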

Illustration: On the left, a network with many recurrent and symmetric connections among nodes, the kind in which equilibrium propagation works. On the right, a layered network of the kind in which backpropagation is useful.

But the way backpropagation works in artificial neural networks has never seemed biologically plausible, says Scellier. One reason among many is that it requires a special computational circuit dedicated to backpropagating errors, which, based on current knowledge in neuroscience, seems unlikely to have arisen in an evolved organism.

The new equilibrium propagation technique needs only a single circuit and a single type of computation, says Scellier. “Our model requires only one type of neuronal dynamics to perform both inference and backpropagation of errors. The computations executed in the network are based on a standard neuron model and a standard form of synaptic plasticity.” That means it is more likely to resemble a process that actually evolved in the brain, and it could provide the beginnings of an answer to how biological neural circuits learn.
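
For comparison, here is a rough sketch of equilibrium propagation on the same toy task, loosely following the paper’s recipe: a “free phase” lets the network settle to an energy minimum with the inputs clamped, a “nudged phase” runs the very same dynamics while weakly pushing the outputs toward the target, and the weight update contrasts the two equilibria. The network size, learning rate, nudging strength beta, and relaxation schedule here are illustrative assumptions, not the paper’s settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(s):
    # Hard-sigmoid activation, as in the paper's energy-based model.
    return np.clip(s, 0.0, 1.0)

def drho(s):
    # Derivative of the hard sigmoid.
    return ((s > 0.0) & (s < 1.0)).astype(float)

# Hypothetical toy network: 2 clamped inputs, 4 hidden units, 1 output.
# Each weight matrix is used symmetrically (the same weight carries the
# signal both ways), which is what equilibrium propagation assumes.
W1 = rng.normal(0.0, 0.5, (2, 4))   # input  <-> hidden
W2 = rng.normal(0.0, 0.5, (4, 1))   # hidden <-> output

def relax(x, h, o, y=None, beta=0.0, steps=100, dt=0.2):
    # One and the same neuronal dynamics for both phases: the state
    # follows the negative gradient of the energy. With beta > 0 the
    # output is additionally nudged toward the target y.
    for _ in range(steps):
        dh = drho(h) * (rho(x) @ W1 + rho(o) @ W2.T) - h
        do = drho(o) * (rho(h) @ W2) - o
        if beta > 0.0:
            do = do + beta * (y - o)    # weak pull toward the target
        h = h + dt * dh
        o = o + dt * do
    return h, o

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

lr, beta = 0.5, 0.5
for epoch in range(500):
    for x, y in zip(X, Y):
        # Free phase: inputs clamped, network settles to an equilibrium.
        h_f, o_f = relax(x, np.zeros(4), np.zeros(1))
        # Nudged phase: same dynamics, output weakly pushed toward y.
        h_n, o_n = relax(x, h_f, o_f, y=y, beta=beta)
        # Contrastive, Hebbian-style update from the two equilibria.
        W1 += (lr / beta) * (np.outer(rho(x), rho(h_n)) - np.outer(rho(x), rho(h_f)))
        W2 += (lr / beta) * (np.outer(rho(h_n), rho(o_n)) - np.outer(rho(h_f), rho(o_f)))
```

The contrast with the backpropagation sketch above is the point: there is no separate backward circuit, and the weight update uses only locally available activities from the two settled states – the same relaxation dynamics, run twice, do all the work.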

That’s one of the major goals of the work, Scellier says.

“Today, the gap between neuroscience and the neural networks used in artificial intelligence is pretty big. Our approach is to start from a model with good machine learning properties, and gradually add details that make the model more biologically realistic,” he says.

The next step will be for neuroscientists to design experiments to see if the brain itself uses similar techniques.


The paper “Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation” was published in Frontiers in Computational Neuroscience.
