Evasion attacks

Evasion attacks are the most common kind of attack encountered in adversarial settings during system operation. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware code. In the evasion setting, malicious samples are modified at test time to evade detection, that is, to be misclassified as legitimate. The attacker has no influence over the training data.

A clear example of evasion is given by image-based spam, in which the spam content is embedded into an image to evade the textual analysis performed by anti-spam filters. Another example is given by spoofing attacks against biometric verification systems.

We have recently devised evasion attacks that can target both linear and non-linear classifiers, and shown that popular learning algorithms such as Support Vector Machines and Neural Networks can be evaded by making only a few modifications to previously detected malicious samples. We have also shown that an attacker can easily evade a real system for the detection of malware in PDF files, even when only partial knowledge of the attacked system is available.
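To give a concrete sense of how gradient-based evasion works, the following sketch evades a linear classifier on toy 2-D data. This is a minimal illustration, not the exact attack from our work: for a linear decision function f(x) = w·x + b, the gradient with respect to the input is simply w, so a detected malicious sample can be moved in small steps against w until it crosses the decision boundary. The data, step size, and iteration budget are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy 2-D data: class 1 = "malicious", class 0 = "legitimate" (illustrative).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + [2, 2],   # malicious cluster
               rng.randn(50, 2) - [2, 2]])  # legitimate cluster
y = np.array([1] * 50 + [0] * 50)

clf = LinearSVC(C=1.0).fit(X, y)

def evade(x, clf, step=0.1, max_iter=200):
    """Gradient-based evasion of a linear classifier (sketch).

    The gradient of the linear decision function w.r.t. the input
    is the weight vector w, so we take small steps against it
    until the sample is scored as legitimate (f(x) < 0).
    """
    w = clf.coef_.ravel()
    x_adv = x.copy()
    for _ in range(max_iter):
        if clf.decision_function([x_adv])[0] < 0:  # misclassified as legitimate
            break
        x_adv -= step * w / np.linalg.norm(w)
    return x_adv

x_mal = X[0]                 # a sample the classifier detects as malicious
x_adv = evade(x_mal, clf)
print("original:", clf.predict([x_mal])[0], "evaded:", clf.predict([x_adv])[0])
```

For non-linear classifiers such as SVMs with RBF kernels or neural networks, the same idea applies with the gradient computed through the learned decision function; real-world attacks must additionally keep the modified sample a valid, functional malicious object (e.g., a working PDF exploit), which constrains which features can be changed.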

We have also proposed techniques to improve classifier security against evasion attempts, based either on explicitly modeling the distribution of attack samples or on multiple classifier systems, and verified their effectiveness on a spam filtering task.
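As an illustration of the multiple-classifier-system idea, the sketch below trains several base learners, each on a random subset of features, and combines their votes. This is a simplified stand-in for the actual defense, under the assumption that distributing the decision across feature subsets forces an evader to modify more features at once; the data, subset sizes, and ensemble size are all illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "spam" data: 20 features, class 1 = spam (illustrative).
rng = np.random.RandomState(1)
X = np.vstack([rng.randn(100, 20) + 1.0,
               rng.randn(100, 20) - 1.0])
y = np.array([1] * 100 + [0] * 100)

# Each base classifier sees only a random half of the features, so
# no single small set of feature modifications evades every member.
n_members = 10
subsets = [rng.choice(20, size=10, replace=False) for _ in range(n_members)]
members = [LogisticRegression(max_iter=1000).fit(X[:, s], y)
           for s in subsets]

def ensemble_predict(X):
    """Majority vote over the feature-subset ensemble."""
    votes = np.mean([m.predict(X[:, s]) for m, s in zip(members, subsets)],
                    axis=0)
    return (votes >= 0.5).astype(int)

accuracy = np.mean(ensemble_predict(X) == y)
print("training accuracy:", accuracy)
```

The design choice here is that robustness comes from forcing the attacker to simultaneously evade many partially independent decision boundaries, at a modest cost in clean accuracy per member.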