The Building Blocks of Interpretability

  • March 7, 2018
In 2015, our early attempts to visualize how neural networks understand images produced psychedelic images. Soon after, we open-sourced our code as DeepDream, and it grew into a small art movement producing all sorts of amazing things. But we also continued the original line of research behind DeepDream, trying to address one of the most exciting questions in deep learning: how do neural networks do what they do?

Source: googleblog.com

Related Posts

The Birth of A.I.

Waze famously disrupted GPS navigation by crowdsourcing user data from mobile phones instead of purchasing costly sensors tied to city infrastructure, as Nokia had done before it. Waze then scaled with low overhead by using machine learning algorithms to find precise traffic patterns that optimized each user's route. The end result of this dynamic was massive layoffs at Nokia, and Google's acquisition and integration of Waze in 2013.

Read More