The Building Blocks of Interpretability

  • March 7, 2018

In 2015, our early attempts to visualize how neural networks understand images led to psychedelic images. Soon after, we open-sourced our code as DeepDream, and it grew into a small art movement producing all sorts of amazing things. But we also continued the original line of research behind DeepDream, trying to address one of the most exciting questions in deep learning: how do neural networks do what they do?

Source: googleblog.com

