Training a Neural Network in Phase-Change Memory Beats GPUs

Compared to a typical CPU, a brain is remarkably energy-efficient, in part because it combines memory, communications, and processing in a single execution unit: the neuron. A brain also has billions of neurons, which lets it handle many tasks in parallel. Attempts to run neural networks on traditional CPUs run up against these fundamental mismatches.

A CPU can execute only a handful of operations at a time, and shuttling data to and from memory is slow. As a result, neural networks have tended to be both computationally and energy intensive. A few years back, IBM announced a new processor design that was a bit closer to a collection of neurons and could execute neural networks far more efficiently.

But that chip didn’t help much with training the networks in the first place. Now, IBM is back with a hardware design that’s specialized for training neural networks. It does so in part by executing the training directly in a specialized type of memory: phase-change memory.

Phase-change memory is based on materials that can form two different structures, or phases, depending on how quickly they cool from a liquid. Because the two phases differ in electrical conductance, a cell can store a bit. It’s also possible to control the temperature so that the cell enters a state with intermediate conductance.
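
To make that concrete, here is a minimal sketch of how a single cell’s conductance could encode a bit. The class name, conductance values, and read threshold are all illustrative assumptions, not measured device parameters:

```python
# Toy model of one phase-change memory cell. All numbers here are
# made up for illustration; real devices have very different values.

AMORPHOUS = 0.1    # low conductance: the material quenched quickly from the melt
CRYSTALLINE = 1.0  # high conductance: slower cooling let the lattice order

class PCMCell:
    def __init__(self, conductance: float = AMORPHOUS):
        # Conductance can sit anywhere between the two pure phases.
        self.conductance = conductance

    def read_bit(self, threshold: float = 0.55) -> int:
        # Reading compares the cell's conductance against a threshold.
        return 1 if self.conductance >= threshold else 0
```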

In addition to storing bits, this can be used to perform calculations, since a series of sub-threshold phase changes can gradually accumulate into a bit flip.
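
Reusing the toy cell above, the sketch below shows how repeated sub-threshold pulses (each hypothetically crystallizing a little more material) accumulate until the stored bit flips:

```python
def partial_set(cell: PCMCell, delta: float = 0.05) -> None:
    # Each sub-threshold heat pulse nudges the conductance upward
    # without fully flipping the bit on its own.
    cell.conductance = min(CRYSTALLINE, cell.conductance + delta)

cell = PCMCell()
pulses = 0
while cell.read_bit() == 0:
    partial_set(cell)
    pulses += 1
print(f"bit flipped after {pulses} sub-threshold pulses")  # 9 in this toy model
```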

Source: arstechnica.com