Adversarial Feature Selection

Although feature selection algorithms are often used in security-sensitive applications, such as spam and malware detection, only a few authors have considered the impact of using reduced feature sets on classifier security against evasion and poisoning attacks. An interesting preliminary result has shown that the application of feature selection may even worsen classifier security against these kinds of attacks. Within this research area, our lab aims to investigate this aspect in more detail, shedding light on the security properties of feature selection against well-crafted evasion and poisoning attacks.

In our recent ICML 2015 paper "Is Feature Selection Secure against Training Data Poisoning?", we have demonstrated that feature selection algorithms can be significantly vulnerable to well-crafted poisoning attacks. In particular, we have shown that, by carefully designing a very small percentage of poisoning samples, the attacker may be able to almost arbitrarily control the subset of selected features.
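To give a flavor of the phenomenon, the toy sketch below (an assumed, deliberately simplified setup, not the gradient-based attack from the paper) shows how a handful of crafted poisoning points, all labeled as malicious, can steer a simple univariate filter away from the truly informative features and toward noise features. All data, feature indices, and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_top_k(X, y, k=2):
    """Score each feature by the absolute difference of its class-conditional
    means (a simple univariate filter) and keep the top k."""
    diff = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
    return tuple(sorted(map(int, np.argsort(diff)[-k:])))

# Clean training data: features 0 and 1 carry the signal; 2 and 3 are noise.
n = 200
y = rng.integers(0, 2, n).astype(float)
X = rng.normal(size=(n, 4))
X[:, 0] += 2.0 * y
X[:, 1] += 1.5 * y
print("clean selection:", select_top_k(X, y))  # -> (0, 1)

# The attacker injects 5% poisoning points, all labeled malicious (y = 1):
# extreme positive values on the noise features boost their scores, while
# negative values on features 0 and 1 cancel out their class-mean difference.
m = n // 20
X_p = np.tile([-20.0, -15.0, 50.0, 50.0], (m, 1))
X_all = np.vstack([X, X_p])
y_all = np.concatenate([y, np.ones(m)])
print("poisoned selection:", select_top_k(X_all, y_all))  # -> (2, 3)
```

Even though the poisoning points make up only 5% of the training set, their extreme feature values dominate the filter's statistics, so the selected subset ends up consisting entirely of noise features.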

Furthermore, in our recent TCYB paper "Adversarial Feature Selection Against Evasion Attacks", inspired by previous work on adversary-aware classifiers, we have proposed a novel adversarial feature selection model that can improve classifier security against evasion attacks by incorporating specific assumptions on the adversary's data manipulation strategy. We have experimentally validated its soundness on different application examples, including spam and malware detection.
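The core idea can be sketched as a wrapper criterion that trades classification accuracy against an estimate of how much effort the adversary needs to evade. The snippet below is a minimal illustration under assumed data and a nearest-centroid classifier, not the paper's actual algorithm: the "security" term is simply the average distance of malicious samples from the decision boundary, so a feature subset with a larger margin forces the attacker to manipulate samples more.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

# Synthetic data (illustrative assumptions): feature 0 is the most accurate
# but easy to evade (small margin); feature 1 is slightly less accurate but
# far harder to evade (large margin); feature 2 is pure noise.
n = 400
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 3))
X[:, 0] = rng.normal(scale=0.5, size=n) + 1.5 * y
X[:, 1] = rng.normal(scale=2.0, size=n) + 3.6 * y

def evaluate(features, lam):
    """Accuracy plus lam times the mean distance of malicious points from
    the boundary of a nearest-centroid linear classifier."""
    Z = X[:, list(features)]
    mu0, mu1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    w = mu1 - mu0
    b = -w @ (mu0 + mu1) / 2
    scores = Z @ w + b
    acc = np.mean((scores > 0) == (y == 1))
    margin = np.mean(scores[y == 1]) / np.linalg.norm(w)
    return acc + lam * margin

# Compare single-feature subsets under the two criteria.
subsets = list(combinations(range(3), 1))
best_plain = max(subsets, key=lambda s: evaluate(s, lam=0.0))   # accuracy only
best_secure = max(subsets, key=lambda s: evaluate(s, lam=0.3))  # adversary-aware
print("accuracy-only choice:  ", best_plain)
print("adversary-aware choice:", best_secure)
```

With the accuracy-only criterion (lam = 0) the fragile feature wins, while the adversary-aware criterion sacrifices a little clean accuracy for a subset that is substantially more costly to evade; the weight lam plays the role of the trade-off parameter between accuracy and security.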