Oversampling Method for Imbalanced Classification

Keywords: Classification, imbalanced dataset, oversampling, SMOTE, SNOCC
Classification on imbalanced datasets is pervasive in many data mining domains and has become a hot topic in the academic community. From the data level to the algorithm level, many solutions have been proposed to tackle the problems resulting from imbalanced datasets. SMOTE is the most popular data-level method, and many derivatives of it have been developed to alleviate the problem of class imbalance. Our investigation indicates that SMOTE has severe flaws. We propose a new oversampling method, SNOCC, that compensates for the defects of SMOTE. In SNOCC, we increase the number of seed samples, so that the new samples are no longer confined to the line segment between two seed samples, as in SMOTE. We also employ a novel algorithm, different from previous ones, to find the nearest neighbors of samples. These two improvements allow the new samples created by SNOCC to naturally reproduce the distribution of the original seed samples. Our experimental results show that SNOCC outperforms SMOTE and CBSO (a SMOTE-based method).
Mathematics Subject Classification 2000: 68T10
Reference: Vol. 34, 2015, No. 5, pp. 1017–1037
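The core contrast described in the abstract, that SMOTE interpolates on the line segment between exactly two seed samples while a convex combination of several seeds can land anywhere in their convex hull, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions; the function names and the uniform random weights are illustrative only and do not reproduce the paper's actual SNOCC algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_style(seed_a, seed_b):
    # SMOTE-style synthesis: the new sample lies on the line
    # segment between two seed samples.
    t = rng.random()
    return seed_a + t * (seed_b - seed_a)

def convex_combination(seeds):
    # Sketch of the convex-combination idea: with k >= 2 seeds and
    # non-negative weights summing to 1, the synthetic sample can
    # fall anywhere inside the convex hull of the seeds, not just
    # on a segment between two of them.
    w = rng.random(len(seeds))
    w /= w.sum()  # normalize so the weights sum to 1
    return w @ np.asarray(seeds)

# Three seed samples spanning a triangle in 2-D feature space.
seeds = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
new = convex_combination(seeds)  # lies inside the triangle
```

With the triangle above, any output of `convex_combination` has non-negative coordinates summing to at most 1, whereas `smote_style(seeds[0], seeds[1])` is restricted to the bottom edge.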