AI Nationalism

June 15, 2018

The last few years have seen astounding developments in machine learning research and commercialisation. A few examples:

  • Image recognition has begun to achieve human-level accuracy at complex tasks, for example skin cancer classification.
  • Big steps forward in applying neural networks to machine translation at Baidu, Google, Microsoft and others, with Microsoft’s system achieving human parity on Mandarin-English translation of news stories (when compared with non-expert translators).
  • In March 2016, DeepMind’s AlphaGo became the first computer program to defeat a world champion at Go, a milestone machine learning researchers had pursued for decades. AlphaGo was trained on 30 million moves played by human experts.

Beyond research, there has been incredible progress in applying machine learning to large markets, from search engines (Baidu) to ad targeting (Facebook) to warehouse automation (Amazon), as well as newer areas such as self-driving cars, drug discovery, cybersecurity and robotics. CB Insights provides a good overview of the markets that start-ups are applying machine learning to today.

This rapid pace of change has caused leading AI practitioners to think seriously about its impact on society. Even at Google, the quintessential applied machine learning company of my lifetime, leadership seems to be shifting away from a techno-utopian stance and is starting to publicly acknowledge the risks that come with accelerated machine learning research and commercialisation.

Source: ianhogarth.com

