Training a neural network in phase-change memory beats GPUs

  • June 8, 2018

Compared to a typical CPU, a brain is remarkably energy-efficient, in part because it combines memory, communications, and processing in a single execution unit, the neuron. A brain also has lots of them, which lets it handle lots of tasks in parallel. Attempts to run neural networks on traditional CPUs run up against these fundamental mismatches.

Only a few things can be executed at a time, and shuffling data to memory is a slow process. As a result, neural networks have tended to be both computationally and energy intensive. A few years back, IBM announced a new processor design that was a bit closer to a collection of neurons and could execute neural networks far more efficiently.

But this didn’t help much with training the networks in the first place. Now, IBM is back with a hardware design that’s specialized for training neural networks. And it does this in part by directly executing the training in a specialized type of memory.

Phase-change memory is based on materials that can form two different structures, or phases, depending on how quickly they cool from a liquid. Because these phases differ in electrical conductance, a cell can store a bit. It's also possible to control the temperature so that a cell enters a state of intermediate conductance.

In addition to storing bits, this can be used to perform calculations, because many sub-threshold phase changes can gradually accumulate into a full bit flip.
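A toy sketch can make this concrete. The model below is my own illustration, not IBM's actual device physics: it assumes each sub-threshold "partial SET" pulse nudges a cell's conductance up by a small fixed step until it saturates, a RESET pulse returns it to the low-conductance amorphous state, and reading the cell multiplies conductance by the applied voltage (Ohm's law), which is what lets an array of such cells perform analog multiply-accumulate.

```python
class PCMCell:
    """Toy model of a phase-change memory cell used as an analog weight.
    Parameters (g_min, g_max, step) are illustrative, not device values."""

    def __init__(self, g_min=0.1, g_max=1.0, step=0.05):
        self.g_min, self.g_max, self.step = g_min, g_max, step
        self.g = g_min  # start in the high-resistance amorphous phase

    def partial_set(self):
        # One sub-threshold pulse: a small crystalline region grows,
        # nudging conductance upward. Many pulses accumulate gradually.
        self.g = min(self.g + self.step, self.g_max)

    def reset(self):
        # Melt-quench back to the amorphous phase (low conductance).
        self.g = self.g_min

    def read(self, voltage):
        # Ohm's law: read current = conductance * voltage, i.e. an
        # analog multiply of a stored weight by an input signal.
        return self.g * voltage


cell = PCMCell()
for _ in range(10):       # ten sub-threshold pulses...
    cell.partial_set()
print(round(cell.g, 2))   # ...accumulate: 0.1 + 10 * 0.05 = 0.6
```

The gradual, accumulative update is what makes the cell useful for training: a weight can be adjusted incrementally in place, without shuttling it back and forth to a separate processor.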

Source: arstechnica.com


Related Posts

3D Face Reconstruction with Position Map Regression Networks

Position Map Regression Networks (PRN) is a method to jointly regress dense alignment and 3D face shape in an end-to-end manner. In this article, I’ll provide a short explanation and discuss its applications in computer vision. In the last few decades, a lot of important research groups in computer vision have made amazing advances in 3D face reconstruction and face alignment.

Read More
Intel AI Lab open-sources library for deep learning-driven NLP

The Intel AI Lab has open-sourced a library for natural language processing to help researchers and developers give conversational agents like chatbots and virtual assistants the smarts necessary to function, such as named entity recognition, intent extraction, and semantic parsing to identify the action a person wants to take from their words. The first-ever conference by Intel for AI developers is being held Wednesday and Thursday, May 23 and 24, at the Palace of Fine Arts in San Francisco. The Intel AI Lab now employs about 40 data scientists and researchers and works with divisions of the company developing products like the nGraph framework and hardware like Nervana Neural Network chips, Liu said.

Read More
Horovod: Distributed Training Framework for TensorFlow, Keras, and PyTorch

Horovod is a distributed training framework for TensorFlow, Keras, and PyTorch. The goal of Horovod is to make distributed deep learning fast and easy to use.

Read More