AI Nationalism

June 15, 2018

The last few years have seen astounding developments in machine learning research and commercialisation. To take just a few examples: image recognition now achieves human-level accuracy at complex tasks such as skin cancer classification, and Baidu, Google, Microsoft and others have made big strides in applying neural networks to machine translation.

Microsoft’s system achieved human parity on Mandarin-to-English translation of news stories (when compared with non-expert translators). In March 2016, DeepMind’s AlphaGo became the first computer program to defeat a world champion at Go, a milestone given that machine learning researchers had spent decades trying to build a system that could beat even a professional player. AlphaGo was trained on 30 million moves played by human experts.

Beyond research, there has been remarkable progress in applying machine learning to large markets, from search engines (Baidu) to ad targeting (Facebook) to warehouse automation (Amazon), as well as to newer areas such as self-driving cars, drug discovery, cybersecurity and robotics. CB Insights provides a good overview of the markets that start-ups are applying machine learning to today.

This rapid pace of change has led leading AI practitioners to think seriously about its impact on society. Even at Google, the quintessential applied machine learning company of my lifetime, leadership seems to be shifting away from a techno-utopian stance and is starting to publicly acknowledge the risks attendant on accelerated machine learning research and commercialisation.

Source: ianhogarth.com

