DeepMind and Google: the battle to control artificial intelligence


  • April 8, 2019

One afternoon in August 2010, in a conference hall perched on the edge of San Francisco Bay, a 34-year-old Londoner called Demis Hassabis took to the stage. Walking to the podium with the deliberate gait of a man trying to control his nerves, he pursed his lips into a brief smile and began to speak: “So today I’m going to be talking about different approaches to building…” He stalled, as though just realising that he was stating his momentous ambition out loud.

And then he said it: “AGI”. AGI stands for artificial general intelligence, a hypothetical computer program that can perform intellectual tasks as well as, or better than, a human. AGI will be able to complete discrete tasks, such as recognising photos or translating languages, which are the single-minded focus of the multitude of artificial intelligences (AIs) that inhabit our phones and computers.

But it will also add, subtract, play chess and speak French. It will understand physics papers, compose novels, devise investment strategies and make delightful conversation with strangers. It will monitor nuclear reactions, manage electricity grids and traffic flow, and effortlessly succeed at everything else.

AGI will make today’s most advanced AIs look like pocket calculators.

Source: 1843magazine.com

