Hacking the Brain with Adversarial Images

March 3, 2018

An adversarial image is an image specifically designed to fool a neural network into making an incorrect determination about what it is looking at. Researchers at Google Brain set out to discover whether the same techniques that fool artificial neural networks can also fool the biological neural networks inside our heads, by developing adversarial images capable of making both computers and humans think they are looking at something they aren't.

Source: ieee.org
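As a rough illustration of the general idea (not the specific procedure used in the Google Brain study), the sketch below uses the fast gradient sign method: nudge every pixel slightly in the direction that increases the classifier's loss, so the network's prediction flips while the image still looks essentially unchanged. The pretrained model, epsilon value, and input preprocessing are assumptions chosen for the example.

```python
# A minimal sketch of the fast gradient sign method (FGSM), one common way to
# craft an adversarial image. Illustrative only, not the procedure from the
# Google Brain study; the model, epsilon, and preprocessing are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_adversarial(image, true_label, model, epsilon=0.01):
    """Perturb `image` (1x3xHxW, values in [0, 1]) so the model misclassifies it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel a small amount in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

model = models.resnet18(pretrained=True).eval()
# `image` would be a 1x3x224x224 float tensor in [0, 1]; `label` its true class index.
# adversarial = fgsm_adversarial(image, torch.tensor([label]), model)
```

With a small epsilon, the perturbed image is usually indistinguishable from the original to a human observer; the Google Brain work asks what happens when the perturbation is made strong and robust enough to bias human perception as well.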

