AI Nationalism

  • June 15, 2018

The last few years have seen astounding developments in machine learning research and commercialisation. A few examples:

  • Image recognition reaching human-level accuracy at complex tasks, such as skin cancer classification.
  • Big steps forward in applying neural networks to machine translation at Baidu, Google, Microsoft and others, with Microsoft’s system achieving human parity on Mandarin-English translation of news stories (when compared with non-expert translators).
  • In March 2016, DeepMind’s AlphaGo, trained on 30 million moves played by human experts, became the first computer program to defeat a world champion at Go. This is significant given that machine learning researchers had been trying for decades to build a system that could beat a professional player.

Beyond research, there has been incredible progress in applying machine learning to large markets, from search engines (Baidu) to ad targeting (Facebook) to warehouse automation (Amazon), as well as to newer areas such as self-driving cars, drug discovery, cybersecurity and robotics. CB Insights provides a good overview of the markets that start-ups are applying machine learning to today.

This rapid pace of change has caused leading AI practitioners to think seriously about its impact on society. Even at Google, the quintessential applied machine learning company of my lifetime, leadership seems to be shifting away from a techno-utopian stance and starting to publicly acknowledge the risks attendant on accelerating machine learning research and commercialisation.

Source: ianhogarth.com

