Attacks against machine learning – an overview

  • June 13, 2018

At a high level, attacks against classifiers can be broken down into three types: adversarial inputs, data poisoning attacks, and model stealing techniques. Adversarial inputs are specially crafted inputs designed to be reliably misclassified in order to evade detection; they include malicious documents designed to evade antivirus engines and emails attempting to evade spam filters. Data poisoning attacks, by contrast, involve feeding adversarially crafted training data to the classifier.
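
To make the adversarial-input idea concrete, here is a minimal sketch (not taken from the article) of the fast gradient sign method applied to a toy logistic-regression detector. The synthetic data, the epsilon budget, and the choice of model are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy detector trained on synthetic features standing in for a spam/malware classifier.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Start from a sample the classifier correctly flags as "bad" (class 1).
x = X[(y == 1) & (clf.predict(X) == 1)][0]

# For logistic regression, the gradient of the cross-entropy loss w.r.t. the
# input is (p - y) * w; FGSM nudges every feature in the sign of that gradient.
w = clf.coef_[0]
p = clf.predict_proba(x.reshape(1, -1))[0, 1]
grad = (p - 1.0) * w               # gradient for the true label y = 1
epsilon = 0.5                      # attacker's per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad)

print("original prediction   :", clf.predict(x.reshape(1, -1))[0])
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])
```

With a large enough budget, the perturbed sample typically crosses the decision boundary and is no longer flagged, which is exactly the evasion behavior described above.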

The most common data poisoning attack we observe is model skewing, where the attacker pollutes the training data in such a way that the boundary between what the classifier categorizes as good data and what it categorizes as bad data shifts in the attacker's favor. The second type of poisoning attack we observe in the wild is feedback weaponization, which abuses feedback mechanisms in an effort to manipulate the system into misclassifying legitimate content as abusive (e.g., a competitor's content, or as part of a revenge attack).
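
As a rough illustration of model skewing (again a hypothetical sketch, not code from the article), the toy example below injects "bad" samples mislabeled as "good" into the training set and compares the detection rate of the resulting classifier against a cleanly trained one. The data, the poisoning volume, and the models are assumptions made for the demo.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Clean model: trained only on correctly labeled data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: the attacker submits copies of "bad" (class 1) points
# labeled as "good" (class 0), e.g. through an abused reporting channel.
bad = X_train[y_train == 1][:300]
X_poison = np.vstack([X_train, bad])
y_poison = np.concatenate([y_train, np.zeros(len(bad), dtype=int)])
skewed = LogisticRegression(max_iter=1000).fit(X_poison, y_poison)

# The skewed model now misses far more of the truly bad test samples.
bad_test = X_test[y_test == 1]
print("clean model detection rate :", clean.predict(bad_test).mean())
print("skewed model detection rate:", skewed.predict(bad_test).mean())
```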

The third type of attack, model stealing, covers techniques used to "steal" (i.e., duplicate) models or recover training data membership via blackbox probing. This can be used, for example, to steal stock market prediction models or spam filtering models in order to use them, or to optimize attacks against them more efficiently.
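
The sketch below illustrates the blackbox-probing idea behind model stealing: the attacker can only query the victim for labels, yet trains a surrogate that largely agrees with it. The victim model, the probe distribution, and the agreement metric are illustrative choices, not details from the article.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# "Victim" model, e.g. a hosted spam filter reachable only through an API.
X, y = make_classification(n_samples=3000, n_features=15, random_state=2)
victim = DecisionTreeClassifier(max_depth=5, random_state=2).fit(X, y)

def query_victim(inputs):
    # Stand-in for the blackbox API: returns only the predicted labels.
    return victim.predict(inputs)

# The attacker samples probe inputs, collects the victim's answers, and
# trains a surrogate model on the (probe, response) pairs.
rng = np.random.default_rng(0)
probes = rng.normal(size=(5000, 15))
stolen_labels = query_victim(probes)
surrogate = LogisticRegression(max_iter=1000).fit(probes, stolen_labels)

# Agreement on fresh inputs approximates how faithfully the model was copied.
fresh = rng.normal(size=(1000, 15))
agreement = (surrogate.predict(fresh) == query_victim(fresh)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of fresh queries")
```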

Source: elie.net

Related Posts

OpenAI: Gym Retro

We’re releasing the full version of Gym Retro, a platform for reinforcement learning research on games. This brings our publicly-released game count from around 70 Atari games and 30 Sega games to over 1,000 games across a variety of backing emulators. We’re also releasing the tool we use to add new games to the platform.

Learn Reinforcement Learning from scratch

Deep RL is a field that has seen vast amounts of research interest, including learning to play Atari games, beating pro players at Dota 2, and defeating Go champions. Contrary to many classical Deep Learning problems that often focus on perception (does this image contain a stop sign?), Deep RL adds the dimension of actions that influence the environment (what is the goal, and how do I get there?).

Horovod: Distributed Training Framework for TensorFlow, Keras, and PyTorch

Horovod is a distributed training framework for TensorFlow, Keras, and PyTorch. The goal of Horovod is to make distributed Deep Learning fast and easy to use.
