AI chip startup Wave to buy Silicon Valley old-timer MIPS

June 15, 2018

Wave Computing has been aiming its AI chips at powerful computers, but acquiring MIPS will let it reach small devices, too. Call it a May-December marriage: a hot new AI chip startup called Wave Computing has acquired a veteran of the processor business, MIPS Technologies, CNET has learned. Wave, founded in 2010, aims to speed up artificial intelligence technology with its custom processors.

But those chips are geared for companies with a big appetite for in-house AI work, or for the data centers packed with computers that often supply computing power to others. By acquiring MIPS, Wave has a chance to spread its technology from those central areas to the much wider world of devices in our lives. Wave didn't comment for this story, but it plans to announce the acquisition, a source familiar with the move said.

MIPS, founded in 1984, has a long history in Silicon Valley. One of its founders was John Hennessy, who just received the prestigious Turing Award for his role in inventing a chip technology called RISC, short for reduced instruction set computing.

After MIPS, Hennessy became president of Stanford University and is now chairman of Google's parent company, Alphabet. MIPS, meanwhile, was acquired by Silicon Graphics and later by Imagination Technologies, but has been independent since late 2017.

Source: cnet.com

Related Posts

Improving Language Understanding with Unsupervised Learning

We’ve obtained state-of-the-art results on a suite of diverse language tasks with a scalable, task-agnostic system, which we’re also releasing. Our approach is a combination of two existing ideas: transformers and unsupervised pre-training. These results provide a convincing example that pairing supervised learning methods with unsupervised pre-training works very well; this is an idea that many have explored in the past, and we hope our result motivates further research into applying this idea on larger and more diverse datasets.

Read More
Training a neural network in phase-change memory beats GPUs

Compared to a typical CPU, a brain is remarkably energy-efficient, in part because it combines memory, communications, and processing in a single execution unit, the neuron. A brain also has lots of them, which lets it handle lots of tasks in parallel. Attempts to run neural networks on traditional CPUs run up against these fundamental mismatches.

Read More
Horovod: Distributed Training Framework for TensorFlow, Keras, and PyTorch

Horovod is a distributed training framework for TensorFlow, Keras, and PyTorch. The goal of Horovod is to make distributed deep learning fast and easy to use.

Read More