Poisoning attacks

Machine learning algorithms are often re-trained on data collected during operation to adapt to changes in the underlying data distribution. For instance, an Intrusion Detection System (IDS) may be re-trained on a set of samples (Tr) collected during network operation. Within this scenario, an attacker may poison the training data by injecting carefully designed samples to eventually compromise the whole learning process. Poisoning may thus be regarded as an adversarial contamination of the training data.
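As a concrete illustration of this scenario, the sketch below simulates periodic re-training of a detector on synthetic data, where the attacker mislabels a fraction of each newly collected batch before it is appended to Tr. All datasets, fractions, and model choices here are illustrative assumptions, not taken from our papers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_init, y_init = X[:1000], y[:1000]              # initial clean training set Tr
X_stream, y_stream = X[1000:2000], y[1000:2000]  # samples collected during operation
X_test, y_test = X[2000:], y[2000:]

Tr_X, Tr_y = X_init.copy(), y_init.copy()
for rnd in range(5):
    sl = slice(rnd * 200, (rnd + 1) * 200)
    Xb, yb = X_stream[sl], y_stream[sl].copy()
    # illustrative assumption: the attacker controls 20% of each collected
    # batch and mislabels those samples before re-training
    poison = rng.choice(len(yb), size=len(yb) // 5, replace=False)
    yb[poison] = 1 - yb[poison]
    Tr_X, Tr_y = np.vstack([Tr_X, Xb]), np.concatenate([Tr_y, yb])
    detector = LogisticRegression(max_iter=1000).fit(Tr_X, Tr_y)  # periodic re-training
    print(f"re-training round {rnd}: test accuracy = {detector.score(X_test, y_test):.3f}")
```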

In our ICML 2012 paper, we analyzed the vulnerability of Support Vector Machines (SVMs) to poisoning attacks and showed that their security can be significantly compromised. The talk is available here.
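The following sketch is not the gradient-based attack from the paper; it simply flips the labels of a fraction of the training samples on synthetic data, as a rough stand-in, to show how even a small poisoned fraction can affect an SVM's test accuracy. All numbers and model settings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
rng = np.random.default_rng(0)

for frac in (0.0, 0.05, 0.10, 0.20):
    yp = y_tr.copy()
    # stand-in poisoning: flip the labels of a random fraction of Tr
    idx = rng.choice(len(y_tr), size=int(frac * len(y_tr)), replace=False)
    yp[idx] = 1 - yp[idx]
    svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, yp)
    print(f"poisoned fraction {frac:.0%}: test accuracy = {svm.score(X_te, y_te):.3f}")
```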

We have recently investigated the effectiveness of poisoning attacks against biometric systems that automatically update their clients' templates. We have shown how the adaptability of such systems can be exploited by an attacker to compromise the stored templates (essentially, by presenting a sequence of fake biometric traits to the sensor), either to impersonate a specific client or to deny that client access. The talk can be found here.
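A toy simulation of this template-drift effect is sketched below, assuming a simple self-updating scheme in which the stored template is a running average of accepted samples and any sample within a fixed distance threshold is accepted and used for the update. The update rule, threshold, and step size are illustrative assumptions, not the schemes studied in our work.

```python
import numpy as np

template = np.array([0.0, 0.0])   # genuine client's stored template (toy feature space)
attacker = np.array([4.0, 3.0])   # attacker's own biometric trait
threshold = 1.0                   # acceptance / update threshold (assumed)
step = 0.8                        # each fake trait lands just inside the threshold

for i in range(30):
    direction = attacker - template
    dist = np.linalg.norm(direction)
    # fake trait: a small step from the current template toward the attacker
    fake = attacker if dist <= threshold else template + step * direction / dist
    if np.linalg.norm(fake - template) <= threshold:   # sample accepted by the system
        template = 0.8 * template + 0.2 * fake         # assumed self-update rule
    genuine_ok = np.linalg.norm(np.zeros(2) - template) <= threshold
    attacker_ok = np.linalg.norm(attacker - template) <= threshold
    print(f"step {i:2d}: template={template.round(2)}, "
          f"genuine accepted={genuine_ok}, attacker accepted={attacker_ok}")
```

Running the simulation, the drifting template first stops matching the genuine client (denial of access) and eventually matches the attacker's trait (impersonation).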
 
We have also proposed countermeasures against poisoning attacks, based on multiple classifier systems.
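One way such a multiple-classifier defense can be instantiated (a sketch under assumed settings, not necessarily the exact scheme we proposed) is bagging: each base SVM is trained on a bootstrap replicate of the possibly poisoned training set, so any single poisoned sample can influence only part of the ensemble.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# stand-in poisoning: flip 15% of the training labels (illustrative assumption)
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.15 * len(y_tr)), replace=False)
y_tr = y_tr.copy()
y_tr[idx] = 1 - y_tr[idx]

single = SVC(gamma="scale").fit(X_tr, y_tr)
ensemble = BaggingClassifier(SVC(gamma="scale"), n_estimators=25,
                             random_state=0).fit(X_tr, y_tr)
print("single SVM accuracy: ", single.score(X_te, y_te))
print("bagged SVMs accuracy:", ensemble.score(X_te, y_te))
```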