Security evaluation of learning algorithms

Machine learning techniques were not originally designed to cope with intelligent, adaptive adversaries who can manipulate input data to subvert the learning process.

A central issue in applying machine learning to security settings is therefore to identify the specific vulnerabilities that learning algorithms exhibit during learning and classification, to devise the corresponding attacks, and to evaluate their impact on classifier security.

Our work on security evaluation has focused on defining a framework for the systematic, empirical evaluation of classifier security. The framework addresses the issue above, namely, designing carefully targeted attacks and evaluating their impact, and may also suggest techniques to improve classifier security, based on the idea of proactively simulating an arms race with the adversary.
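
As a toy illustration of this kind of evaluation (not the framework's actual implementation), the sketch below trains a linear SVM on synthetic data and measures how its accuracy degrades as a simulated evasion attack grows stronger. The attack model, the scikit-learn classifier, and the perturbation budgets are all assumptions made for the example.

```python
# A minimal sketch of an empirical security-evaluation loop: train a
# classifier, simulate an evasion attack of increasing strength, and
# record the resulting accuracy degradation. Illustrative only.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X_tr, y_tr)

# Unit vector pointing away from the malicious (label 1) side of the
# boundary; moving malicious samples along -w/||w|| lowers their score,
# i.e., it simulates an evasion attempt.
w = clf.coef_.ravel()
evasion_dir = -w / np.linalg.norm(w)

for eps in [0.0, 0.5, 1.0, 2.0, 4.0]:   # attacker's L2 perturbation budget
    X_adv = X_te.copy()
    mal = y_te == 1                      # only malicious samples are perturbed
    X_adv[mal] += eps * evasion_dir
    acc = clf.score(X_adv, y_te)
    print(f"perturbation budget {eps:.1f}: accuracy {acc:.3f}")
```

Plotting accuracy against the attacker's budget yields a security-evaluation curve of the kind such a framework can use to compare classifiers under attack.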

As a more specific application example, we have extensively investigated the robustness of multimodal biometric systems against several kinds of spoofing attacks (that is, counterfeit biometric traits fabricated using different techniques and materials).
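
The sketch below illustrates, on synthetic match scores, why such attacks threaten score-level fusion: spoofing even a single modality can sharply raise the false acceptance rate of the fused system. The score distributions, the sum-rule fusion, and the worst-case assumption that a spoofed modality yields genuine-like scores are all modeling assumptions for the example, not measurements from a real system.

```python
# A hedged sketch of a spoofing analysis on a two-modality biometric
# system. Genuine and impostor scores for two matchers (say, face and
# fingerprint) are drawn from assumed Gaussians; fusion is a simple sum
# rule. A worst-case spoof of one modality is simulated by replacing the
# impostor's score on that modality with a genuine-like score.

import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Assumed score distributions (illustrative, not measured data).
gen = rng.normal([0.7, 0.8], [0.1, 0.1], size=(n, 2))   # genuine users
imp = rng.normal([0.3, 0.3], [0.1, 0.1], size=(n, 2))   # zero-effort impostors

# Spoofed impostors: modality 0 is fooled, so its score is drawn from the
# genuine distribution; modality 1 keeps its impostor-like score.
spoof = imp.copy()
spoof[:, 0] = rng.normal(0.7, 0.1, size=n)

def far(scores, threshold):
    """False acceptance rate of sum-rule fusion at a given threshold."""
    return np.mean(scores.sum(axis=1) >= threshold)

# Threshold chosen so the zero-effort FAR is about 1%.
thr = np.quantile(imp.sum(axis=1), 0.99)

print(f"FAR, zero-effort impostors: {far(imp, thr):.4f}")
print(f"FAR, one modality spoofed:  {far(spoof, thr):.4f}")
```

Under these assumptions, fusing the spoofed modality's genuine-like scores with the other matcher's scores pushes many impostor attempts past the acceptance threshold, which is the effect our evaluations quantify on real biometric data.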