The Building Blocks of Interpretability

  • March 7, 2018
In 2015, our early attempts to visualize how neural networks understand images led to psychedelic images. Soon after, we open-sourced our code as DeepDream, and it grew into a small art movement producing all sorts of amazing things. But we also continued the original line of research behind DeepDream, trying to address one of the most exciting questions in deep learning: how do neural networks do what they do?

Source: googleblog.com

