
The brain’s memory system points to more efficient computing methods

by Lindsay Jolivet, February 1, 2016
Image above: This computer image shows two points where two neurons have formed a connection. The translucent black thread is the axon of one neuron, and the yellow belongs to another neuron. Credit: Salk Institute and UT Austin

The brain could have 10 times as much storage capacity as we originally thought, according to new findings about the structure of the connections between neurons. The research suggests our brains can hold a petabyte of information – roughly the amount of data you would need to stream Netflix for 114 years.

In addition to swelling our heads, the findings give us a better idea of the brain’s capabilities, and could show scientists how to build computers that have more power but use less energy.

The new study was led by CIFAR Advisor Terrence Sejnowski and Thomas Bartol at the Salk Institute and Kristen Harris at the University of Texas at Austin. It showed that synapses — the web of connections between neurons in the brain that allow us to form and access memories — can be fine-tuned with surprising accuracy.

Whether a signal will pass from one neuron to another depends in part on the size of the synapse involved. Previously, researchers had sorted synapses into only a few size categories. The new research shows that synapses can adjust among at least 26 distinguishable sizes.

The findings suggest that synapses constantly adjust their sizes, shrinking or growing in response to the signals they receive, sometimes as often as every two minutes, while maintaining a high degree of precision.

The researchers found that, rather than just a few sizes of synapses, there are actually 26 discrete sizes that can change over a span of a few minutes, meaning that the brain has a far greater capacity for storing information. Credit: Salk Institute

The study could help solve the mystery of the apparent inefficiency of synapses, most of which successfully transmit a signal only 10 to 20 per cent of the time. The new research suggests that because signals from thousands of input synapses converge on a neuron, the unreliability of all of the signals averages out into a reliable signal. Having synapses that are not always active could conserve a lot of energy.
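The averaging argument can be illustrated with a toy simulation (the parameter values here are illustrative assumptions, not numbers from the study): even if each individual synapse transmits only about 15 per cent of the time, the summed input across a thousand synapses is tightly clustered around its mean, so the neuron sees a reliable signal.

```python
import random

random.seed(0)

N_SYNAPSES = 1000   # inputs converging on one neuron (illustrative)
P_RELEASE = 0.15    # each synapse transmits only ~10-20% of the time

def total_input(n=N_SYNAPSES, p=P_RELEASE):
    """Sum of many unreliable synapses: each one 'flips a coin'."""
    return sum(1 for _ in range(n) if random.random() < p)

# Repeat the experiment many times and measure how variable the sum is
trials = [total_input() for _ in range(200)]
mean = sum(trials) / len(trials)
spread = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5

print(mean)           # close to N_SYNAPSES * P_RELEASE = 150
print(spread / mean)  # relative spread of only a few per cent
```

The relative spread shrinks as more synapses converge (roughly as one over the square root of their number), which is why thousands of individually unreliable inputs can still carry a dependable signal.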

“The brain has to do all its computing on 20 watts of power — a dim light bulb,” says Sejnowski. Given that, he says it makes sense for each synapse to do little work, with a low probability of activating. “If you’re using probabilities, the savings you get are enormous.”

The researchers made the discovery, published in eLife, by creating a 3D computer model of a tiny section of the hippocampus, the brain's memory centre, in a rat. They made the most precise measurement yet of how much two synapses on the same neuron could differ in size while receiving the same inputs, and calculated that, given the number of distinguishable sizes, each synapse can store about 4.7 bits of information. Scaled up to the number of synapses in a human brain, that equals one petabyte, and one powerful biological machine.
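The 4.7-bit figure follows directly from the number of distinguishable sizes: a synapse that can sit in any one of 26 states carries log2(26) ≈ 4.7 bits. A minimal sketch of the arithmetic (the 10^15 synapse count used in the scale-up is a rough ballpark assumption, not a number taken from the study):

```python
import math

# Number of distinguishable synapse sizes reported in the study
SIZES = 26

# Information per synapse: log2 of the number of states it can occupy
bits_per_synapse = math.log2(SIZES)
print(round(bits_per_synapse, 1))  # -> 4.7

# Rough scale-up (assumption: ~10**15 synapses, a commonly cited
# ballpark for the human brain)
total_bytes = bits_per_synapse * 1e15 / 8
print(f"{total_bytes:.1e} bytes")  # on the order of a petabyte
```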

“Ultimately, nature evolved a very complex device, the brain, and it looks as if we may now understand how it’s able to function so well with such unreliable synapses,” Sejnowski says.

The study shows how the brain benefits from redundancy. Not every synapse needs to work every time for us to gather and store memories, and this trade-off appears to make the brain far more energy efficient.

Sejnowski says this understanding provides a path for building machine learning approaches that can handle huge amounts of data with less computer power and higher accuracy. “It’s something we’ve been searching for,” he says. “As chips add more and more transistors, they have more flaws.”

Currently, one misfire in a computer memory could lead the whole system to fail. If computers could incorporate the brain’s redundancies, with each artificial synapse essentially flipping a coin to decide if it will transmit a signal or not, we could greatly improve computing power. CIFAR fellows such as Roland Memisevic and Yoshua Bengio have already begun exploring the possibilities.
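One way to picture such a coin-flip artificial synapse (a hypothetical sketch, not an implementation from the research): transmit the full weight only with some small probability, so that most events cost nothing, while the average contribution over many events matches a scaled deterministic weight.

```python
import random

random.seed(1)

def stochastic_synapse(weight, p=0.2):
    """Hypothetical 'coin-flip' synapse: transmit the full weight
    with probability p, otherwise stay silent (and spend no energy)."""
    return weight if random.random() < p else 0.0

# Any single transmission is unreliable, but averaged over many
# events the expected contribution is weight * p.
samples = [stochastic_synapse(1.0, p=0.2) for _ in range(10_000)]
avg = sum(samples) / len(samples)
print(round(avg, 2))  # close to 0.2
```

Because correctness rests on the statistics of many events rather than on any single one, an occasional misfire no longer threatens the whole computation, which is the redundancy the article describes.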

“This is a whole new computer architecture that will ultimately be translated into new chips and new operating systems that are based on probability rather than a perfect, deterministic digital computer operation,” Sejnowski says.

He adds that training artificial neural networks can inform how we study the brain. “It’s interesting that we’ve reached a point where brain theory is interacting very closely with computer theory.”