Attacks against machine learning – an overview


June 13, 2018

At a high level, attacks against classifiers can be broken down into three types: adversarial inputs, data poisoning, and model stealing. Adversarial inputs are specially crafted inputs developed with the aim of being reliably misclassified in order to evade detection; they include malicious documents designed to evade antivirus software and emails attempting to evade spam filters. Data poisoning attacks involve feeding adversarial training data to the classifier.
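As a toy illustration of the adversarial-input idea: an attacker perturbs a malicious input just enough that a classifier scores it as benign. The keyword filter, word list, and threshold below are invented for this sketch and do not correspond to any real spam filter.

```python
# Toy "spam filter": scores a message by the fraction of words that are
# known spam keywords. (All keywords and thresholds are invented here.)
SPAM_WORDS = {"winner", "free", "prize", "claim"}
THRESHOLD = 0.25

def spam_score(text: str) -> float:
    """Fraction of words in the message that are spam keywords."""
    words = text.lower().split()
    return sum(w in SPAM_WORDS for w in words) / len(words)

original = "claim your free prize now"
assert spam_score(original) > THRESHOLD  # flagged as spam (score 0.6)

# Adversarial input: pad the same payload with innocuous filler words
# until the score is diluted below the detection threshold.
evasive = original + " " + " ".join(["meeting"] * 20)
print(spam_score(evasive) <= THRESHOLD)  # prints True: the filter is evaded
```

The payload is unchanged; only cheap, benign-looking padding was added, which is why evasion attacks of this flavor are so common against naive scoring systems.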

The most common attack type we observe is model skewing, where the attacker attempts to pollute the training data in such a way that the boundary between what the classifier categorizes as good data and what it categorizes as bad shifts in the attacker's favor. The second type of attack we observe in the wild is feedback weaponization, which attempts to abuse feedback mechanisms in an effort to manipulate the system into misclassifying good content as abusive (e.g., flagging competitor content, or as part of a revenge attack). The third type is model stealing: techniques used to "steal" (i.e., duplicate) models or recover training data membership via black-box probing.
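The model-skewing attack described above can be sketched numerically. Everything here is an assumption for illustration: one-dimensional toy data, scikit-learn's logistic regression as the learner, and a fixed poison budget.

```python
# Hypothetical sketch of model skewing: the attacker injects malicious-looking
# points mislabeled as "benign" into the training set, shifting the learned
# decision boundary in their favor. Toy data, not a real detection system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Clean training data: benign (label 0) clustered at -2, malicious (label 1) at +2.
benign = rng.normal(-2.0, 1.0, size=(200, 1))
malicious = rng.normal(2.0, 1.0, size=(200, 1))
X_clean = np.vstack([benign, malicious])
y_clean = np.array([0] * 200 + [1] * 200)
clean_model = LogisticRegression().fit(X_clean, y_clean)

# Poisoning: 150 malicious-looking points deliberately labeled "benign".
poison = rng.normal(2.0, 1.0, size=(150, 1))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.zeros(150, dtype=int)])
skewed_model = LogisticRegression().fit(X_poisoned, y_poisoned)

# Fraction of fresh malicious samples each model still detects.
attacks = rng.normal(2.0, 1.0, size=(500, 1))
clean_detect = clean_model.predict(attacks).mean()
skewed_detect = skewed_model.predict(attacks).mean()
print(f"detected: clean={clean_detect:.2f} skewed={skewed_detect:.2f}")
```

The skewed model detects noticeably fewer malicious samples than the clean one: the poisoned labels dragged the boundary toward the attacker's region, which is exactly the shift the attacker wants.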

Model stealing can be used, for example, to steal stock market prediction models and spam filtering models, in order to use them directly or to optimize attacks against them more efficiently.
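A minimal sketch of model stealing via black-box probing, under the assumption that the attacker can freely query the victim's predictions and guesses a similar model family; all data and model choices below are invented for illustration.

```python
# Hypothetical model-stealing sketch: train a "victim" classifier the
# attacker can only query, then fit a surrogate purely on the victim's
# predicted labels for attacker-chosen probe inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Victim: a classifier exposed only as a black-box predict() API.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
victim = LogisticRegression().fit(X, y)

# Attacker: sample probe points and label them with the victim's outputs...
probes = rng.normal(size=(2000, 5))
stolen_labels = victim.predict(probes)

# ...then train a surrogate that mimics the victim's decision boundary.
surrogate = LogisticRegression().fit(probes, stolen_labels)

# Agreement between surrogate and victim on fresh queries.
test_points = rng.normal(size=(500, 5))
agreement = (surrogate.predict(test_points) == victim.predict(test_points)).mean()
print(f"surrogate/victim agreement: {agreement:.2f}")
```

With enough probes, the surrogate agrees with the victim on the vast majority of inputs, even though the attacker never saw the victim's parameters or training data; the copy can then be studied offline to craft evasions.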

Source: elie.net

