F-Measure Curves for Visualizing Classifier Performance with Imbalanced Data

Title: F-Measure Curves for Visualizing Classifier Performance with Imbalanced Data
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Soleymani, R.; Granger, E.; Fumera, G.
Conference Name: 8th IAPR TC3 Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR 2018)
Date Published: In press
Conference Location: Siena
Keywords: Class imbalance, F-measure, Performance visualization tools

Training classifiers using imbalanced data is a challenging problem in many real-world recognition applications, due in part to performance biases that arise when: (1) classifiers are optimized and compared using performance measures unsuited to imbalanced problems; (2) classifiers are trained and tested on a fixed level of data imbalance, which may differ from operational scenarios; (3) the relative preference for correctly classifying each class is application dependent. Specialized performance evaluation metrics and tools are needed for problems that involve class imbalance, including scalar metrics that assume a given operating condition (skew level and relative preference of classes), and global evaluation curves or metrics that consider a range of operating conditions.
We propose a global evaluation space for the scalar F-measure metric that is analogous to the cost-curve space for expected cost. In this space, a classifier is represented as a curve showing its performance over all of its decision thresholds and over a range of imbalance levels, for a desired preference of true positive rate to precision. Experiments with synthetic data show the benefits of evaluating and comparing classifiers under different operating conditions in the proposed F-measure space over the ROC, precision-recall, and cost spaces.
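The dependence of the F-measure on the operating condition can be made concrete from standard definitions: given a classifier's true positive rate (TPR) and false positive rate (FPR) at some decision threshold, precision follows from the class skew (proportion of positives), and the F-measure follows from precision and TPR. The sketch below, a minimal illustration and not the paper's own code, evaluates the F-measure of a fixed (TPR, FPR) operating point across a range of imbalance levels; the function names and the `beta` preference parameter are assumptions for illustration.

```python
import numpy as np

def f_measure(tpr, fpr, skew, beta=1.0):
    """F_beta at a fixed (TPR, FPR) operating point under class skew.

    skew : proportion of positives in the data (imbalance level).
    beta : relative preference of TPR (recall) to precision.
    Small epsilons guard against division by zero at degenerate points.
    """
    # Precision derived from TPR, FPR, and skew (illustrative derivation):
    # P = skew*TPR / (skew*TPR + (1 - skew)*FPR)
    precision = skew * tpr / (skew * tpr + (1.0 - skew) * fpr + 1e-12)
    return ((1.0 + beta**2) * precision * tpr
            / (beta**2 * precision + tpr + 1e-12))

# Trace one operating point across imbalance levels: the same classifier
# looks progressively worse as positives become rarer, because FPR errors
# dominate precision under heavy skew.
skews = np.linspace(0.01, 0.5, 50)
curve = np.array([f_measure(tpr=0.8, fpr=0.1, skew=s) for s in skews])
```

Plotting such curves for every decision threshold of a classifier, and taking their upper envelope, is the kind of global view over operating conditions that the proposed F-measure space provides.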

Citation Key: 1422
Refereed Designation: Refereed