What the History of Math Can Teach Us About the Future of AI

May 18, 2018
The long history of automation in mathematics offers an even more apt parallel to how computerization, in the form of AI and robots, is likely to affect other kinds of work. If you’re worried about AI-induced mass unemployment or worse, think about this: why didn’t digital computers make mathematicians obsolete? It turns out that human intelligence is not just one trick or technique—it is many.

Digital computers excel at one particular kind of math: arithmetic. Adding up a long column of numbers is quite hard for a human, but trivial for a computer. So when spreadsheet programs like Excel came along and allowed any middle-school child to tot up long sums instantly, the most boring and repetitive mathematical jobs vanished.

A general rule in economics is that a big increase in the supply of a commodity causes its price to fall, because demand is fixed. Yet this hasn't held for computing power, especially in mathematics. Huge increases in supply have counterintuitively stimulated demand for more, because each boost in raw computational ability and each clever new software algorithm opens another class of problems to computer solution.

But only with human help. This tells us something important about AI. Like mathematics, intelligence is not just one simple kind of problem, such as pattern recognition; it's a huge constellation of tasks of widely differing complexity. So far, the most impressive demonstrations of "intelligent" performance by AI have been programs that play games like chess or Go at superhuman levels. These are tasks that are so difficult for human brains that even the most talented people need years of practice to master them.

Source: scientificamerican.com


Related Posts

AI and Compute

We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.
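The figures quoted above follow directly from the doubling-time formula. A quick sanity check, assuming a window of roughly 64 months (late 2012 to early 2018; the exact window length is an assumption chosen here for illustration, not stated in the blurb):

```python
# Growth implied by a doubling time: factor = 2 ** (elapsed / doubling_time).
# The 64-month window is an assumed figure for illustration; the post itself
# only gives the 3.5-month and 18-month doubling times and the endpoints.
months = 64

def growth(doubling_time_months: float) -> float:
    return 2 ** (months / doubling_time_months)

ai_trend = growth(3.5)   # the measured 3.5-month doubling time
moore = growth(18)       # a Moore's-Law-style 18-month doubling period

print(f"3.5-month doubling over {months} months: {ai_trend:,.0f}x")
print(f"18-month doubling over {months} months: {moore:.0f}x")
```

With these assumptions the first factor comes out above 300,000x and the second at about 12x, matching the numbers in the analysis.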

Transfer Learning

Transfer learning is the reuse of a pre-trained model on a new problem. It is currently very popular in deep learning because it lets you train deep neural networks with comparatively little data. This is very useful, since most real-world problems do not come with the millions of labeled data points needed to train such complex models from scratch.
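The idea can be sketched in a few lines of NumPy. Everything here is illustrative, not a real pre-trained network: a fixed random projection stands in for the learned feature extractor, and only a small classification "head" is trained on the new problem's handful of labeled points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained feature extractor. In real transfer learning
# these weights would come from a network trained on a large dataset;
# here a fixed random projection plays that role purely for illustration.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    # "Frozen" layer: its weights are never updated below.
    return np.maximum(x @ W_frozen, 0.0)  # ReLU

# A small labeled dataset for the new problem -- far fewer points than
# training the whole network from scratch would need.
X = rng.normal(size=(40, 64))
y = (features(X) @ rng.normal(size=16) > 0).astype(float)

# Train only the new classification head (a logistic regression) on top
# of the frozen features, with plain gradient descent.
F = features(X)          # extract features once; they never change
w = np.zeros(16)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w)))
    w -= 0.1 * F.T @ (p - y) / len(y)

acc = np.mean((F @ w > 0) == (y == 1))  # training accuracy of the head
```

Because the frozen layer is not updated, only 16 parameters are fit to the 40 labeled examples, which is why so little data suffices.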
