The Building Blocks of Interpretability

  • March 7, 2018

In 2015, our early attempts to visualize how neural networks understand images produced psychedelic imagery. Soon after, we open-sourced the code as DeepDream, and it grew into a small art movement producing all sorts of amazing things. But we also continued the original line of research behind DeepDream, pursuing one of the most exciting questions in deep learning: how do neural networks do what they do?

Source: googleblog.com


Related Posts

So what’s new in AI?

I graduated with a degree in AI when the cost of the equivalent computational power to an iPhone was $50 million. A lot has changed but surprisingly much is still the same.

Read More
Bonsai AI: Using Simulink for Deep Reinforcement Learning

Simulink provides a great training environment for deep reinforcement learning because it allows third parties like Bonsai to integrate with and control simulation models from the outside. This ability is one of the basic requirements for a simulation platform to be feasible for Deep Reinforcement Learning using Bonsai AI. More requirements can be found here.

Read More
The Birth of A.I.

Waze famously disrupted GPS navigation by crowdsourcing user data from mobile phones instead of purchasing costly sensors tied to city infrastructure, as Nokia had done before them. Waze then scaled with low overhead by using machine learning algorithms to find precise traffic patterns that optimized each user's route. The end result of this dynamic was massive layoffs at Nokia, and Google's acquisition and integration of Waze in 2013.

Read More