Replay in biological and artificial neural networks

  • September 8, 2019

Our waking and sleeping lives are punctuated by fragments of recalled memories: a sudden connection in the shower between seemingly disparate thoughts, or an ill-fated choice decades ago that haunts us as we struggle to fall asleep. By measuring memory retrieval directly in the brain, neuroscientists have noticed something remarkable: spontaneous recollections often occur as very fast sequences of multiple memories. These so-called ‘replay’ sequences play out in a fraction of a second, so fast that we’re not necessarily aware of the sequence.

In parallel, AI researchers discovered that incorporating a similar kind of experience replay improved the efficiency of learning in artificial neural networks. Over the last three decades, the AI and neuroscientific studies of replay have grown up together. Machine learning offers hypotheses sophisticated enough to push forward our expanding knowledge of the brain; and insights from neuroscience guide and inspire AI development.
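The experience replay used in AI typically takes the form of a buffer of past transitions that the agent re-samples during offline training. As a hedged illustration (the class and parameter names here are invented for the sketch, not taken from any particular library), a minimal uniform replay buffer might look like this:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past transitions, sampled uniformly for offline learning."""

    def __init__(self, capacity):
        # deque with maxlen discards the oldest transitions once full
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Each stored transition can be revisited many times, which is what
        # makes offline learning from replay so sample-efficient.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Store experience as the agent acts, then replay it in batches afterwards.
buf = ReplayBuffer(capacity=1000)
for t in range(100):
    buf.add(state=t, action=t % 4, reward=1.0, next_state=t + 1)
batch = buf.sample(32)
print(len(batch))  # 32
```

Real systems refine this in many ways (prioritised sampling, for instance), but the core idea is exactly this decoupling of acting from learning.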

Replay is a key point of contact between the two fields because, like the brain, AI uses experience to learn and improve. And each piece of experience offers much more potential for learning than can be absorbed in real time, so continued offline learning is crucial for both brains and artificial neural nets.

Neural replay sequences were originally discovered by studying the hippocampus in rats.

As we know from the Nobel Prize-winning work of John O’Keefe and others, many hippocampal cells fire only when the animal is physically located in a specific place. In early experiments, rats ran the length of a single corridor or circular track, so researchers could easily determine which neuron coded for each position within the corridor.
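To make the idea concrete, here is a hypothetical sketch (the tuning-curve width and positions are invented for illustration) of why a single corridor makes decoding easy: if each place cell fires most strongly near one preferred position, the animal's location can be read off as the preferred position of the most active cell.

```python
import math

def firing_rate(preferred_pos, actual_pos, width=5.0):
    # Idealised Gaussian tuning curve centred on the cell's preferred position.
    return math.exp(-((actual_pos - preferred_pos) ** 2) / (2 * width ** 2))

# Ten hypothetical place cells tiling a linear track at 10 cm intervals.
preferred = [10 * i for i in range(10)]

# Observe the population response while the animal is at 42 cm ...
rates = [firing_rate(p, actual_pos=42.0) for p in preferred]

# ... and decode position as the preferred location of the most active cell.
decoded = preferred[max(range(len(rates)), key=rates.__getitem__)]
print(decoded)  # 40
```

During replay, the same decoding logic is applied to spontaneous activity: a rapid sequence of such decoded positions traces out a trajectory through the corridor.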

Source: deepmind.com


Related Posts

Introducing EvoGrad: A Lightweight Library for Gradient-Based Evolution

Tools that enable fast and flexible experimentation democratize and accelerate machine learning research. Take for example the development of libraries for automatic differentiation, such as Theano, Caffe, TensorFlow, and PyTorch: these libraries have been instrumental in catalyzing machine learning research, enabling gradient descent training without the tedious work of hand-computing derivatives. In these frameworks, it’s simple to experiment by adjusting the size and depth of a neural network, by changing the error function that is to be optimized, and even by inventing new architectural elements, like layers and activation functions, all without having to worry about how to derive the resulting gradients.
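The mechanism these frameworks automate can be shown in miniature. The following is a toy forward-mode autodiff via dual numbers, written from scratch for illustration (it is not how any of the named libraries are implemented): the derivative is carried along with the value through each arithmetic operation, so no formula is ever derived by hand.

```python
class Dual:
    """A value paired with its derivative; arithmetic propagates both."""

    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.grad + other.grad)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule applied automatically at every multiplication.
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x + 1  # f(x) = x^2 + 3x + 1, so f'(x) = 2x + 3

x = Dual(2.0, 1.0)  # seed with dx/dx = 1
y = f(x)
print(y.val, y.grad)  # 11.0 7.0
```

Production frameworks use reverse-mode differentiation over computation graphs for efficiency with many parameters, but the principle of mechanically propagating derivatives is the same.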

Mapping roads through deep learning and weakly supervised training

Creating accurate maps today is a painstaking, time-consuming manual process, even with access to satellite imagery and mapping software. Many regions — particularly in the developing world — remain largely unmapped. To help close this gap, Facebook AI researchers and engineers have developed a new method that uses deep learning and weakly supervised training to predict road networks from commercially available high-resolution satellite imagery.
