A biologically plausible technique for training a neural net

by Kurt Kleiner | News | Learning in Machines & Brains | 17.05.2017

Artificial intelligence researchers have made advances in recent years by designing neural networks based on the structure of the human brain. But their work isn’t just leading to better computers; it is also leading to a better understanding of how the neural networks in our own brains work.

Learning in Machines & Brains Co-Director Yoshua Bengio and his student Benjamin Scellier, both of the Université de Montréal, invented a new way of training artificial neural networks that might help theoretical neuroscientists figure out how natural neural networks learn and correct errors.

They call the technique “equilibrium propagation,” and it is an alternative to a widely used technique for training neural networks called backpropagation.

To train an artificial neural network with backpropagation, you first present it with an input and propagate the signal forward through the network, then examine the output and compare it to the output you would ideally like to have gotten. The difference between the two is called the error. In the second step, you “backpropagate” the error through the network, adjusting individual neurons and synapses to bring the output closer to the ideal. By repeating the process many times, you gradually reduce the error, bringing the network’s output closer to the one you want.
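
For illustration, here is a minimal sketch of that loop in NumPy, training a tiny one-hidden-layer network on XOR. The architecture, learning rate, and task are illustrative choices for the demo, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR with a tiny one-hidden-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: propagate the input through the network.
    h = sigmoid(X @ W1)            # hidden activations
    out = sigmoid(h @ W2)          # network output

    # The error: difference between actual and desired output.
    err = out - y

    # Backward pass: propagate the error back through each layer
    # (chain rule) to get a gradient for every weight.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust the weights a little to reduce the error; repeating
    # this many times gradually drives the output toward the target.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

# After training, the outputs approach the targets [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```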

Illustration: At left, a network in which equilibrium propagation works, with many recurrent and symmetric connections among nodes; at right, a layered network of the kind in which backpropagation is useful.

But the way backpropagation works in artificial neural networks has never seemed biologically plausible, says Scellier. One reason among many is that it requires a dedicated computational circuit for backpropagating errors, something that, given current knowledge in neuroscience, seems unlikely to have arisen in an evolved organism.

The new equilibrium propagation technique requires only a single circuit and a single type of calculation, says Scellier. “Our model requires only one type of neuronal dynamics to perform both inference and backpropagation of errors. The computations executed in the network are based on a standard neuron model and a standard form of synaptic plasticity.” That means it’s more likely to resemble an actual process that evolved in the brain, and could provide the beginnings of an answer to how biological neural circuits learn.
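
As a rough sketch of that two-phase idea (not the authors’ code), the example below runs one and the same neuronal dynamics twice on a small network with symmetric weights: a free phase that settles to equilibrium for inference, and a weakly clamped phase in which the output is nudged toward the target. The weight update is a contrastive Hebbian-style rule computed from the two equilibria; the network size, activation, and constants here are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = lambda s: np.clip(s, 0.0, 1.0)                # hard-sigmoid activation
drho = lambda s: ((s >= 0) & (s <= 1)).astype(float)

# Small Hopfield-style network with symmetric weights (W = W.T):
# 2 clamped input units plus 5 free units; the last free unit is the output.
n_in, n_free = 2, 5
W = rng.normal(scale=0.1, size=(n_in + n_free,) * 2)
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

def settle(x, target=None, beta=0.0, steps=100, eps=0.1):
    """One type of neuronal dynamics for both phases: gradient descent on
    the network's energy, optionally with the output nudged toward a target."""
    s = np.zeros(n_in + n_free)
    s[:n_in] = x                                    # inputs stay clamped
    for _ in range(steps):
        grad = drho(s) * (W @ rho(s)) - s           # -dE/ds for the energy
        if target is not None:
            grad[-1] += beta * (target - s[-1])     # weak nudge on the output
        s[n_in:] += eps * grad[n_in:]
    return s

# Train a toy association: input [1, 0] -> output 1.
x, target, beta, lr = np.array([1.0, 0.0]), 1.0, 0.5, 0.1
for _ in range(30):
    s_free = settle(x)                              # phase 1: free equilibrium
    s_nudged = settle(x, target, beta)              # phase 2: weakly clamped
    # Contrastive Hebbian-style update from the two equilibria;
    # outer(v, v) is symmetric, so W stays symmetric.
    dW = (np.outer(rho(s_nudged), rho(s_nudged))
          - np.outer(rho(s_free), rho(s_free))) / beta
    np.fill_diagonal(dW, 0.0)
    W += lr * dW

print("output at free equilibrium:", settle(x)[-1])  # should move toward 1
```

Because the update only compares activity at two equilibria of the same dynamics, no separate error-propagation circuitry is needed; that is the property the authors argue makes the scheme more biologically plausible.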

That’s one of the major goals of the work, Scellier says.

“Today, the gap between neuroscience and the neural networks used in artificial intelligence is pretty big. Our approach is to start from a model with good machine learning properties, and gradually add details that make the model more biologically realistic,” he says.

The next step will be for neuroscientists to design experiments to see if the brain itself uses similar techniques.


The paper “Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation” was published in Frontiers in Computational Neuroscience.
