Hacking the Brain with Adversarial Images

  • March 3, 2018
An adversarial image is an image specifically designed to fool neural networks into making an incorrect determination about what they're looking at. Researchers at Google Brain set out to determine whether the same techniques that fool artificial neural networks can also fool the biological neural networks inside our heads, by developing adversarial images capable of making both computers and humans think they're looking at something they aren't.

Source: ieee.org
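The classic recipe for crafting such images is the fast gradient sign method (FGSM): nudge every pixel slightly in the direction that increases the classifier's loss. The sketch below illustrates the idea on a toy logistic-regression "classifier"; the model, weights, and epsilon are illustrative assumptions, not details from the article or the Google Brain paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(image, grad, epsilon=0.1):
    """Shift each pixel by epsilon in the direction that increases the loss,
    then clip back to the valid pixel range [0, 1]."""
    adv = image + epsilon * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

# Toy "classifier": logistic regression on a flattened 4-pixel image
# (weights and input are made up for illustration).
w = np.array([1.0, -2.0, 0.5, 1.5])
x = np.array([0.2, 0.8, 0.5, 0.1])
y = 1  # true label

# Gradient of the cross-entropy loss with respect to the input pixels.
grad_x = (sigmoid(w @ x) - y) * w

x_adv = fgsm_perturb(x, grad_x, epsilon=0.1)

# Each pixel moved by at most 0.1, yet the model's confidence in the
# true class drops.
print(sigmoid(w @ x), sigmoid(w @ x_adv))
```

The perturbation is bounded per pixel, which is why adversarial images can look essentially unchanged to a human while flipping the network's prediction; the Google Brain work asks whether larger, carefully targeted versions of such perturbations can sway human perception too.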

