
The brain’s memory system points to more efficient computing methods

by Lindsay Jolivet | Learning in Machines & Brains | News | 27.01.2016

The brain could have 10 times as much storage capacity as we originally thought, according to new findings about the structure of the connections between neurons. The research suggests our brains can hold a petabyte of information – roughly the amount of data you would need to stream Netflix for 114 years.

In addition to swelling our heads, the findings give us a better idea of the brain’s capabilities, and could show scientists how to build computers that have more power but use less energy.

The new study was led by CIFAR Advisor Terrence Sejnowski and Thomas Bartol at the Salk Institute and Kristen Harris at the University of Texas at Austin. It showed that synapses — the web of connections between neurons in the brain that allow us to form and access memories — can be fine-tuned with surprising accuracy.

Whether a signal will pass from one neuron to another depends in part on the size of the synapse involved. Previously, researchers had categorized synapses into only a few sizes. But the new research shows that synapses adjust among at least 26 discrete sizes.

The findings suggest that synapses are constantly adjusting their sizes, shrinking or growing in response to the signals they have received before, sometimes as often as every two minutes, while maintaining a high degree of precision.

The researchers found that, rather than just a few sizes of synapses, there are actually 26 discrete sizes that can change over a span of a few minutes, meaning that the brain has a far greater capacity for storing information. Credit: Salk Institute

The study could help solve the mystery of the apparent inefficiency of synapses, most of which successfully transmit a signal only 10 to 20 per cent of the time. The new research suggests that because signals from thousands of input synapses converge on a neuron, the unreliability of all of the signals averages out into a reliable signal. Having synapses that are not always active could conserve a lot of energy.
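To make the averaging argument concrete, here is a toy simulation (an illustration of the general statistical point, not the model from the paper): each of a neuron's input synapses is assumed to transmit with a probability of 15 per cent, and the fraction that transmit on a given trial is taken as the pooled signal. With a single synapse the signal is all-or-nothing; with thousands, the trial-to-trial fluctuation becomes tiny.

    import numpy as np

    # Toy illustration (not the paper's model): a neuron pools inputs from many
    # unreliable synapses, each transmitting with probability p. The mean pooled
    # signal is the same regardless of synapse count, but its trial-to-trial
    # variability shrinks as the count grows, so the combined signal is reliable.
    rng = np.random.default_rng(0)
    p = 0.15          # assumed per-synapse transmission probability (10-20% range)
    trials = 10_000

    for n_synapses in (1, 100, 10_000):
        # fraction of the n synapses that transmit on each trial
        fractions = rng.binomial(n_synapses, p, size=trials) / n_synapses
        print(f"{n_synapses:>6} synapses: mean signal {fractions.mean():.3f}, "
              f"trial-to-trial spread {fractions.std():.4f}")

Running the sketch shows the mean signal staying near 0.15 in every case, while the spread drops by roughly a factor of ten for each hundredfold increase in synapse count.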

“The brain has to do all its computing on 20 watts of power — a dim light bulb,” says Sejnowski. Given that, he says it makes sense for each synapse to do little work, with a low probability of activating. “If you’re using probabilities, the savings you get are enormous.”

The researchers made the discovery, published in eLife, by creating a 3D computer model of a tiny section of the memory centre, the hippocampus, in a rat's brain. They made the most precise measurements yet of how much two synapses onto the same neuron, receiving the same inputs, could differ in size, and calculated that, given the number of distinguishable sizes, each synapse can store about 4.7 bits of information. Scaled up to the number of synapses in a human brain, that works out to roughly one petabyte, and one powerful biological machine.
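The arithmetic behind that scaling is easy to check. The sketch below is a back-of-envelope version under an assumption the article does not spell out, roughly 10^15 synapses in a human brain (a commonly cited order-of-magnitude figure), and it lands in the same range: on the order of a petabyte.

    import math

    # Back-of-envelope check of the storage estimate. The synapse count below is
    # an assumption (a commonly cited order-of-magnitude figure), not a number
    # taken from the study.
    distinct_sizes = 26
    synapses_in_brain = 1e15

    bits_per_synapse = math.log2(distinct_sizes)       # ~4.7 bits
    total_bits = bits_per_synapse * synapses_in_brain
    petabytes = total_bits / 8 / 1e15                  # 1 PB = 10^15 bytes

    print(f"{bits_per_synapse:.2f} bits per synapse")
    print(f"~{petabytes:.2f} petabytes across the whole brain")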

“Ultimately, nature evolved a very complex device, the brain, and it looks as if we may now understand how it’s able to function so well with such unreliable synapses,” Sejnowski says.

The study shows how the brain benefits from redundancy. Not every synapse needs to work every time for us to gather and store memories, and it seems this delicate balance makes the brain much more energy efficient.

Sejnowski says this understanding provides a path for building machine learning approaches that can handle huge amounts of data with less computer power and higher accuracy. “It’s something we’ve been searching for,” he says. “As chips add more and more transistors, they have more flaws.”

Currently, one misfire in a computer memory could lead the whole system to fail. If computers could incorporate the brain’s redundancies, with each artificial synapse essentially flipping a coin to decide if it will transmit a signal or not, we could greatly improve computing power. CIFAR fellows such as Roland Memisevic and Yoshua Bengio have already begun exploring the possibilities.
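What a coin-flip synapse might look like in software: the sketch below randomly drops each connection of a small layer on every forward pass, then shows that averaging many such noisy passes approaches the output of a fully reliable layer. This is only an illustration of the general idea (similar in spirit to DropConnect-style stochastic connections), not the specific architecture Sejnowski describes.

    import numpy as np

    # Sketch of a "coin-flip" layer: each connection transmits on a given forward
    # pass only with probability p, loosely analogous to an unreliable synapse.
    rng = np.random.default_rng(0)

    def stochastic_forward(x, weights, p=0.15):
        """Forward pass in which each weight is used only with probability p."""
        mask = rng.random(weights.shape) < p        # one coin flip per connection
        # Dividing by p keeps the expected output equal to the reliable layer's.
        return x @ (weights * mask) / p

    x = rng.normal(size=(1, 1000))                  # toy input
    weights = rng.normal(size=(1000, 10))           # toy connection strengths
    deterministic = x @ weights

    # Averaging many cheap, noisy passes converges toward the reliable output.
    averaged = np.mean([stochastic_forward(x, weights) for _ in range(2000)], axis=0)
    rel_error = np.linalg.norm(averaged - deterministic) / np.linalg.norm(deterministic)
    print(f"relative error after averaging 2000 stochastic passes: {rel_error:.3f}")

Each stochastic pass is individually unreliable, but because only a fraction of connections fire, it is also cheap; reliability comes from the population rather than from any single connection, which is the trade-off the brain appears to exploit.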

“This is a whole new computer architecture that will ultimately be translated into new chips and new operating systems that are based on probability rather than a perfect, deterministic digital computer operation,” Sejnowski says.

He adds that training artificial neural networks can inform how we study the brain. “It’s interesting that we’ve reached a point where brain theory is interacting very closely with computer theory.”

Image above: This computer image shows two points where two neurons have formed connections. The translucent black thread is the axon of one neuron; the yellow belongs to the other neuron. Credit: Salk Institute and UT Austin

