
This paper was presented by Battista Biggio at ICML 2012. The talk is available at http://techtalks.tv/talks/poisoning-attacks-against-support-vector-machines/57350/


The source code for replicating the experiments of this paper can be found here.

Published at ICML 2012, pages 1807–1814, Omnipress.

Abstract: We investigate a family of poisoning attacks against Support Vector Machines (SVMs). Such attacks inject specially crafted training data that increases the SVM's test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM's decision function due to malicious input and use this ability to construct malicious data.


The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM's optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier's test error.
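The overall attack loop can be illustrated with a minimal sketch. The paper derives an analytic gradient from the SVM's optimality conditions; the toy version below substitutes a central finite-difference estimate and a simple subgradient-trained linear SVM, ascending a hinge-loss surrogate of the validation error. All data, function names, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D binary classification data with labels in {-1, +1}.
n = 40
X = np.vstack([rng.normal([+1.5, +1.5], 0.6, (n, 2)),
               rng.normal([-1.5, -1.5], 0.6, (n, 2))])
y = np.hstack([np.ones(n), -np.ones(n)])
Xv = np.vstack([rng.normal([+1.5, +1.5], 0.6, (n, 2)),
                rng.normal([-1.5, -1.5], 0.6, (n, 2))])
yv = np.hstack([np.ones(n), -np.ones(n)])

def train_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Full-batch subgradient descent on the L2-regularized hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        active = y * (X @ w + b) < 1.0          # margin-violating points
        gw = lam * w - (y[active, None] * X[active]).sum(axis=0) / len(y)
        gb = -y[active].sum() / len(y)
        w, b = w - lr * gw, b - lr * gb
    return w, b

def attack_objective(xc, yc=-1.0):
    """Validation hinge loss after retraining with one poison point (xc, yc)."""
    w, b = train_svm(np.vstack([X, xc]), np.append(y, yc))
    return np.maximum(0.0, 1.0 - yv * (Xv @ w + b)).mean()

# Gradient ascent on the poison point's position (gradient by finite differences).
xc = np.array([0.5, 0.5])                       # start near the decision boundary
baseline = attack_objective(xc)
eps, step = 1e-3, 0.5
for _ in range(15):
    grad = np.array([(attack_objective(xc + eps * e)
                      - attack_objective(xc - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    xc = xc + step * grad / (np.linalg.norm(grad) + 1e-12)

poisoned = attack_objective(xc)
print(f"validation hinge loss: {baseline:.3f} -> {poisoned:.3f}")
```

The sketch captures the key structural point of the paper: the attacker treats the learner's retraining as an inner procedure and climbs the resulting validation-loss surface in input space. The analytic gradient of the original work makes this far cheaper than retraining inside finite differences, and kernelization lets the same ascent run in input space for non-linear kernels.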

Keywords: adversarial machine learning, poisoning attacks, support vector machines
Authors: Battista Biggio, Blaine Nelson, and Pavel Laskov
Editors: John Langford and Joelle Pineau